Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

This commit is contained in:
Aliaksandr Valialkin 2021-06-25 13:24:42 +03:00
commit 452720c5dc
92 changed files with 2771 additions and 627 deletions

View file

@@ -28,7 +28,7 @@ See [features available for enterprise customers](https://victoriametrics.com/en
## Case studies and talks
Alphabetically sorted links to case studies:
Case studies:
* [adidas](https://docs.victoriametrics.com/CaseStudies.html#adidas)
* [Adsterra](https://docs.victoriametrics.com/CaseStudies.html#adsterra)
@@ -37,14 +37,19 @@ Alphabetically sorted links to case studies:
* [CERN](https://docs.victoriametrics.com/CaseStudies.html#cern)
* [COLOPL](https://docs.victoriametrics.com/CaseStudies.html#colopl)
* [Dreamteam](https://docs.victoriametrics.com/CaseStudies.html#dreamteam)
* [German Research Center for Artificial Intelligence](https://docs.victoriametrics.com/CaseStudies.html#german-research-center-for-artificial-intelligence)
* [Groove X](https://docs.victoriametrics.com/CaseStudies.html#groove-x)
* [Idealo.de](https://docs.victoriametrics.com/CaseStudies.html#idealode)
* [MHI Vestas Offshore Wind](https://docs.victoriametrics.com/CaseStudies.html#mhi-vestas-offshore-wind)
* [Sensedia](https://docs.victoriametrics.com/CaseStudies.html#sensedia)
* [Synthesio](https://docs.victoriametrics.com/CaseStudies.html#synthesio)
* [Wedos.com](https://docs.victoriametrics.com/CaseStudies.html#wedoscom)
* [Wix.com](https://docs.victoriametrics.com/CaseStudies.html#wixcom)
* [Zerodha](https://docs.victoriametrics.com/CaseStudies.html#zerodha)
* [zhihu](https://docs.victoriametrics.com/CaseStudies.html#zhihu)
See also [articles and slides about VictoriaMetrics from our users](https://docs.victoriametrics.com/Articles.html#third-party-articles-and-slides-about-victoriametrics)
## Prominent features
@@ -344,6 +349,7 @@ Currently the following [scrape_config](https://prometheus.io/docs/prometheus/la
* [dockerswarm_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config)
* [eureka_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config)
* [digitalocean_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#digitalocean_sd_config)
* [http_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config)
Other `*_sd_config` types will be supported in the future.
@@ -1177,7 +1183,7 @@ VictoriaMetrics de-duplicates data points if `-dedup.minScrapeInterval` command-
is set to positive duration. For example, `-dedup.minScrapeInterval=60s` would de-duplicate data points
on the same time series if they fall within the same discrete 60s bucket. The earliest data point will be kept. In the case of equal timestamps, an arbitrary data point will be kept.
The recommended value for `-dedup.minScrapeInterval` must equal to `scrape_interval` config from Prometheus configs.
The recommended value for `-dedup.minScrapeInterval` must equal the `scrape_interval` setting in Prometheus configs. It is recommended to use a single `scrape_interval` across all scrape targets. See [this article](https://www.robustperception.io/keep-it-simple-scrape_interval-id) for details.
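To make the bucketing rule concrete, here is a minimal Go sketch (an editor's illustration, not VictoriaMetrics' actual implementation) of deduplication that keeps the earliest sample in each discrete interval:

```go
package main

import "fmt"

type sample struct {
	ts  int64 // timestamp in milliseconds
	val float64
}

// dedup keeps the first sample in each intervalMs-sized bucket,
// assuming the input is sorted by timestamp.
func dedup(samples []sample, intervalMs int64) []sample {
	var out []sample
	lastBucket := int64(-1)
	for _, s := range samples {
		if bucket := s.ts / intervalMs; bucket != lastBucket {
			out = append(out, s)
			lastBucket = bucket
		}
	}
	return out
}

func main() {
	in := []sample{{1000, 10}, {30000, 11}, {61000, 12}}
	// With -dedup.minScrapeInterval=60s only the samples at 1000 and 61000 survive.
	fmt.Println(dedup(in, 60000))
}
```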
The de-duplication reduces disk space usage if multiple identically configured [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus instances in HA pair
write data to the same VictoriaMetrics instance. These vmagent or Prometheus instances must have identical
@@ -1746,6 +1752,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 30s)
-promscrape.gceSDCheckInterval duration
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
-promscrape.httpSDCheckInterval duration
Interval for checking for changes in http service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
-promscrape.kubernetes.apiServerTimeout duration
How frequently to reload the full state from Kubernetes API server (default 30m0s)
-promscrape.kubernetesSDCheckInterval duration

View file

@@ -179,6 +179,8 @@ The following scrape types in [scrape_config](https://prometheus.io/docs/prometh
See [eureka_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config) for details.
* `digitalocean_sd_configs` is for scraping targets registered in [DigitalOcean](https://www.digitalocean.com/).
See [digitalocean_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#digitalocean_sd_config) for details.
* `http_sd_configs` is for scraping targets registered via HTTP-based service discovery (see the response-format sketch after this list).
See [http_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config) for details.
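For reference, an HTTP SD endpoint is expected to return a JSON array of target groups, as defined by the Prometheus http_sd format. A small Go sketch of that shape, with hypothetical target addresses:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// targetGroup mirrors the JSON returned by an http_sd endpoint:
// a list of targets plus optional labels attached to all of them.
type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels,omitempty"`
}

func main() {
	resp := `[{"targets":["10.0.0.1:9100","10.0.0.2:9100"],"labels":{"env":"prod"}}]`
	var groups []targetGroup
	if err := json.Unmarshal([]byte(resp), &groups); err != nil {
		panic(err)
	}
	fmt.Println(groups[0].Targets) // [10.0.0.1:9100 10.0.0.2:9100]
}
```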
Please file feature requests to [our issue tracker](https://github.com/VictoriaMetrics/VictoriaMetrics/issues) if you need other service discovery mechanisms to be supported by `vmagent`.
@@ -653,6 +655,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 30s)
-promscrape.gceSDCheckInterval duration
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
-promscrape.httpSDCheckInterval duration
Interval for checking for changes in http service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
-promscrape.kubernetes.apiServerTimeout duration
How frequently to reload the full state from Kubernetes API server (default 30m0s)
-promscrape.kubernetesSDCheckInterval duration

View file

@@ -316,7 +316,7 @@ var (
Name: vmNativeSrcAddr,
Usage: "VictoriaMetrics address to perform export from. \n" +
" Should be the same as --httpListenAddr value for single-node version or vmselect component." +
" If exporting from cluster version - include the tenet token in address.",
" If exporting from cluster version see https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format",
Required: true,
},
&cli.StringFlag{
@@ -333,7 +333,7 @@ var (
Name: vmNativeDstAddr,
Usage: "VictoriaMetrics address to perform import to. \n" +
" Should be the same as --httpListenAddr value for single-node version or vminsert component." +
" If importing into cluster version - include the tenet token in address.",
" If importing into cluster version see https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format",
Required: true,
},
&cli.StringFlag{

View file

@@ -56,27 +56,37 @@ func (cw *cWriter) printf(format string, args ...interface{}) {
//"{"metric":{"__name__":"cpu_usage_guest","arch":"x64","hostname":"host_19",},"timestamps":[1567296000000,1567296010000],"values":[1567296000000,66]}
func (ts *TimeSeries) write(w io.Writer) (int, error) {
pointsCount := len(ts.Timestamps)
if pointsCount == 0 {
return 0, nil
}
timestamps := ts.Timestamps
values := ts.Values
cw := &cWriter{w: w}
for len(timestamps) > 0 {
// Split long lines with more than 10K samples into multiple JSON lines.
// This should limit memory usage at VictoriaMetrics during data ingestion,
// since it allocates memory for the whole JSON line and processes it in one go.
batchSize := 10000
if batchSize > len(timestamps) {
batchSize = len(timestamps)
}
timestampsBatch := timestamps[:batchSize]
valuesBatch := values[:batchSize]
timestamps = timestamps[batchSize:]
values = values[batchSize:]
cw.printf(`{"metric":{"__name__":%q`, ts.Name)
if len(ts.LabelPairs) > 0 {
for _, lp := range ts.LabelPairs {
cw.printf(",%q:%q", lp.Name, lp.Value)
}
}
pointsCount := len(timestampsBatch)
cw.printf(`},"timestamps":[`)
for i := 0; i < pointsCount-1; i++ {
cw.printf(`%d,`, ts.Timestamps[i])
cw.printf(`%d,`, timestampsBatch[i])
}
cw.printf(`%d],"values":[`, ts.Timestamps[pointsCount-1])
cw.printf(`%d],"values":[`, timestampsBatch[pointsCount-1])
for i := 0; i < pointsCount-1; i++ {
cw.printf(`%v,`, ts.Values[i])
cw.printf(`%v,`, valuesBatch[i])
}
cw.printf("%v]}\n", valuesBatch[pointsCount-1])
}
cw.printf("%v]}\n", ts.Values[pointsCount-1])
return cw.n, cw.err
}
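// Editor's sketch (hypothetical helper, not part of this change): the
// batching rule above splits n samples into JSON lines of at most 10000
// points each. For example, batchSizes(25000, 10000) yields [10000 10000 5000].
func batchSizes(n, maxBatch int) []int {
	var sizes []int
	for n > 0 {
		b := maxBatch
		if b > n {
			b = n
		}
		sizes = append(sizes, b)
		n -= b
	}
	return sizes
}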

View file

@@ -29,8 +29,11 @@ var (
maxQueueDuration = flag.Duration("search.maxQueueDuration", 10*time.Second, "The maximum time the request waits for execution when -search.maxConcurrentRequests "+
"limit is reached; see also -search.maxQueryDuration")
resetCacheAuthKey = flag.String("search.resetCacheAuthKey", "", "Optional authKey for resetting rollup cache via /internal/resetRollupResultCache call")
logSlowQueryDuration = flag.Duration("search.logSlowQueryDuration", 5*time.Second, "Log queries with execution time exceeding this value. Zero disables slow query logging")
)
var slowQueries = metrics.NewCounter(`vm_slow_queries_total`)
func getDefaultMaxConcurrentRequests() int {
n := cgroup.AvailableCPUs()
if n <= 4 {
@@ -108,6 +111,20 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}
}
if *logSlowQueryDuration > 0 {
actualStartTime := time.Now()
defer func() {
d := time.Since(actualStartTime)
if d >= *logSlowQueryDuration {
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
requestURI := getRequestURI(r)
logger.Warnf("slow query according to -search.logSlowQueryDuration=%s: remoteAddr=%s, duration=%.3f seconds; requestURI: %q",
*logSlowQueryDuration, remoteAddr, d.Seconds(), requestURI)
slowQueries.Inc()
}
}()
}
path := strings.Replace(r.URL.Path, "//", "/", -1)
if path == "/internal/resetRollupResultCache" {
if len(*resetCacheAuthKey) > 0 && r.FormValue("authKey") != *resetCacheAuthKey {
@@ -418,6 +435,23 @@ func sendPrometheusError(w http.ResponseWriter, r *http.Request, err error) {
prometheus.WriteErrorResponse(w, statusCode, err)
}
func getRequestURI(r *http.Request) string {
requestURI := r.RequestURI
if r.Method != "POST" {
return requestURI
}
_ = r.ParseForm()
queryArgs := r.PostForm.Encode()
if len(queryArgs) == 0 {
return requestURI
}
delimiter := "?"
if strings.Contains(requestURI, delimiter) {
delimiter = "&"
}
return requestURI + delimiter + queryArgs
}
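// Editor's note: with getRequestURI, a POST to /api/v1/query with body
// "query=up&step=15" is logged as "/api/v1/query?query=up&step=15", so
// slow-query log entries also capture the form arguments of POST requests.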
var (
labelValuesRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/label/{}/values"}`)
labelValuesErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/label/{}/values"}`)

View file

@@ -13,34 +13,19 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/querystats"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/metricsql"
)
var (
logSlowQueryDuration = flag.Duration("search.logSlowQueryDuration", 5*time.Second, "Log queries with execution time exceeding this value. Zero disables slow query logging")
treatDotsAsIsInRegexps = flag.Bool("search.treatDotsAsIsInRegexps", false, "Whether to treat dots as is in regexp label filters used in queries. "+
`For example, foo{bar=~"a.b.c"} will be automatically converted to foo{bar=~"a\\.b\\.c"}, i.e. all the dots in regexp filters will be automatically escaped `+
`in order to match only dot char instead of matching any char. Dots in ".+", ".*" and ".{n}" regexps aren't escaped. `+
`This option is DEPRECATED in favor of {__graphite__="a.*.c"} syntax for selecting metrics matching the given Graphite metrics filter`)
)
var slowQueries = metrics.NewCounter(`vm_slow_queries_total`)
// Exec executes q for the given ec.
func Exec(ec *EvalConfig, q string, isFirstPointOnly bool) ([]netstorage.Result, error) {
if *logSlowQueryDuration > 0 {
startTime := time.Now()
defer func() {
d := time.Since(startTime)
if d >= *logSlowQueryDuration {
logger.Warnf("slow query according to -search.logSlowQueryDuration=%s: remoteAddr=%s, duration=%.3f seconds, start=%d, end=%d, step=%d, query=%q",
*logSlowQueryDuration, ec.QuotedRemoteAddr, d.Seconds(), ec.Start/1000, ec.End/1000, ec.Step/1000, q)
slowQueries.Inc()
}
}()
}
if querystats.Enabled() {
startTime := time.Now()
defer querystats.RegisterQuery(q, ec.End-ec.Start, startTime)

View file

@@ -1149,7 +1149,8 @@ func rollupTmin(rfa *rollupFuncArg) float64 {
minValue := values[0]
minTimestamp := timestamps[0]
for i, v := range values {
if v < minValue {
// Get the last timestamp for the minimum value as most users expect.
if v <= minValue {
minValue = v
minTimestamp = timestamps[i]
}
@@ -1168,7 +1169,8 @@ func rollupTmax(rfa *rollupFuncArg) float64 {
maxValue := values[0]
maxTimestamp := timestamps[0]
for i, v := range values {
if v > maxValue {
// Get the last timestamp for the maximum value as most users expect.
if v >= maxValue {
maxValue = v
maxTimestamp = timestamps[i]
}
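// Editor's worked example: for values [1, 5, 1] at timestamps [10, 20, 30],
// tmin_over_time now returns timestamp 30 (the last occurrence of the minimum)
// instead of 10; tmax_over_time behaves symmetrically for the maximum.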

View file

@@ -6,12 +6,24 @@ sort: 15
## tip
* FEATURE: vmagent: add service discovery for Docker (aka [docker_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#docker_sd_config)). See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1402).
* FEATURE: vmagent: add service discovery for DigitalOcean (aka [digitalocean_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#digitalocean_sd_config)). See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1367).
* FEATURE: vmagent: show the number of samples the target returned during the last scrape on `/targets` and `/api/v1/targets` pages. This should simplify debugging targets that return too many or too few samples. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1377).
* FEATURE: vmagent: change the default value for `-remoteWrite.queues` from 4 to `2 * numCPUs`. This should reduce scrape duration for highly loaded vmagent, which scrapes tens of thousands of targets. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1385).
* FEATURE: vmagent: show jobs with zero discovered targets on `/targets` page. This should help debug improperly configured scrape configs.
* FEATURE: vmagent: support HTTP-based service discovery (aka [http_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config)), which was added in Prometheus 2.28. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1392).
* FEATURE: vmagent: support namespaces in Consul service discovery in the same way as Prometheus 2.28 does. See [this issue](https://github.com/prometheus/prometheus/issues/8894) for details.
* FEATURE: vmagent: support generic auth configs in `consul_sd_configs` in the same way as Prometheus 2.28 does. See [this issue](https://github.com/prometheus/prometheus/issues/8924) for details.
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): limit the number of samples per imported JSON line. This should limit memory usage on the VictoriaMetrics side when importing time series with a large number of samples.
* FEATURE: vmselect: log slow queries across all the `/api/v1/*` handlers (aka [Prometheus query API](https://prometheus.io/docs/prometheus/latest/querying/api)) if their execution duration exceeds `-search.logSlowQueryDuration`. This should simplify debugging slow requests to such handlers as `/api/v1/labels` or `/api/v1/series`, in addition to `/api/v1/query` and `/api/v1/query_range`, which were already logged in previous releases.
* FEATURE: vminsert: sort the `-storageNode` list in order to guarantee the identical `series -> vmstorage` mapping across all the `vminsert` nodes. This should reduce resource usage (RAM, CPU and disk IO) at `vmstorage` nodes if `vmstorage` addresses are passed in random order to `vminsert` nodes.
* FEATURE: vmstorage: reduce memory usage on a system with many CPU cores under high ingestion rate.
* BUGFIX: prevent adding new samples to deleted time series after the rotation of the inverted index (the rotation is performed once per `-retentionPeriod`). See [this comment](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1347#issuecomment-861232136) for details.
* BUGFIX: vmstorage: reduce high disk write IO usage on systems with a big number of CPU cores. The issue was introduced in release [v1.59.0](#v1590). See [this commit](aa9b56a046b6ae8083fa659df35dd5e994bf9115) and [this comment](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1338#issuecomment-863046999) for details.
* BUGFIX: vmstorage: prevent incorrect stats collection when multiple concurrent queries execute the same tag filter. This may help reduce CPU usage under certain workloads. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1338).
* BUGFIX: vmselect: return the last timestamp for the max / min value from `tmax_over_time(m[d])` and `tmin_over_time(m[d])` [MetricsQL functions](https://docs.victoriametrics.com/MetricsQL.html), as most users expect. See also [this issue](https://github.com/prometheus/prometheus/issues/8966).
* BUGFIX: vmselect: return the expected value for `increase_pure()` [MetricsQL function](https://docs.victoriametrics.com/MetricsQL.html) after a gap in a time series. Previously an incorrectly large value could be returned by `increase_pure()` after the gap.
## [v1.61.1](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.61.1)

View file

@@ -7,10 +7,6 @@ sort: 11
Below please find public case studies and talks from VictoriaMetrics users. You can also join our [community Slack channel](http://slack.victoriametrics.com/)
where you can chat with VictoriaMetrics users to get additional references, reviews and case studies.
Alphabetically sorted links to case studies:
* [adidas](#adidas)
* [Adsterra](#adsterra)
* [ARNES](#arnes)
@@ -18,14 +14,19 @@ Alphabetically sorted links to case studies:
* [CERN](#cern)
* [COLOPL](#colopl)
* [Dreamteam](#dreamteam)
* [German Research Center for Artificial Intelligence](#german-research-center-for-artificial-intelligence)
* [Groove X](#groove-x)
* [Idealo.de](#idealode)
* [MHI Vestas Offshore Wind](#mhi-vestas-offshore-wind)
* [Sensedia](#sensedia)
* [Synthesio](#synthesio)
* [Wedos.com](#wedoscom)
* [Wix.com](#wixcom)
* [Zerodha](#zerodha)
* [zhihu](#zhihu)
You can also read [articles about VictoriaMetrics from our users](https://docs.victoriametrics.com/Articles.html#third-party-articles-and-slides).
## adidas
@@ -213,15 +214,74 @@ from `Large-scale, super-load system monitoring platform built with VictoriaMetr
Numbers:
* Active time series: from 350K to 725K.
* Total number of time series: from 100M to 320M.
* Total number of datapoints: from 120 billion to 155 billion.
* Retention period: 3 months.
* Active time series: from 350K to 725K
* Total number of time series: from 100M to 320M
* Total number of datapoints: from 120 billion to 155 billion
* Retention period: 3 months
VictoriaMetrics in production environment runs on 2 M5 EC2 instances in "HA" mode, managed by Terraform and Ansible TF module.
2 Prometheus instances are writing to both VMs, with 2 [Promxy](https://github.com/jacksontj/promxy) replicas
as the load balancer for reads.
## German Research Center for Artificial Intelligence
[German Research Center for Artificial Intelligence](https://en.wikipedia.org/wiki/German_Research_Centre_for_Artificial_Intelligence) (DFKI) is one of the world's largest nonprofit contract research institutes for software technology based on artificial intelligence (AI) methods. DFKI was founded in 1988, and has facilities in the German cities of Kaiserslautern, Saarbrücken, Bremen and Berlin.
> Traditionally research groups in DFKI each used their own hardware. In mid 2020 we started an initiative to consolidate existing (and future) hardware into a central Slurm cluster to enable our researchers and students to run more and larger experiments. Based on the Nvidia deepops stack this included Prometheus for short-term metric storage. Our users liked the level of detail they got from our custom dashboards compared to our previous Zabbix-based solution, so we decided to extend the retention period to several years. Ideally we wanted PhD students to be able to recall even their earliest experiments by the time they finished their thesis. Since we do everything on-premise we needed a solution that is primarily space-efficient.
> We initially considered simply extending the retention period of the Prometheus instances included with deepops, since this would be the “batteries included” solution and appeared to be what everyone else was doing. We naively also liked the concept behind TimescaleDB, since it relies on Postgres for storage that has had decades of development. Turns out relational databases are not good at storing time-series and integration with existing exporters and Grafana would have been more difficult.
> VictoriaMetrics kept showing up in searches and benchmarks on time-series DB performance and consistently came out on top when it came to required storage. Quite frankly, the presented numbers looked like magic, so we decided to put this to the test. First impressions upon trial were excellent. Download the binary and point it at a storage location. Almost no configuration required. Apart from minor tweaks to the command line (turning on deduplication) and running it as a systemd unit we still use the same instance from the first tests today. It was further superior to Prometheus in every measurable way. It used considerably less CPU time and RAM than Prometheus and a third of the storage.
> While initially storage efficiency was the primary driver, the simplicity of setting up a testbed definitely helped. Seeing how effortlessly the single-node instance deals with our current setup gives us confidence that it will keep up with our growth for quite a while. And when the time comes that we outgrow it there is always the robust cluster variant of VictoriaMetrics that we can turn to.
> We like the hassle-free experience with VictoriaMetrics. At least for our use case it is a straight upgrade compared to Prometheus, while fully compatible with that ecosystem. While it can use cloud storage, there appear to be no downsides to using the filesystem instead, so it fits very well into our on-premise culture. It even comes with an excellent official Grafana dashboard to monitor performance.
Joachim Folz, Researcher, German Research Center for Artificial Intelligence (DFKI)
Numbers:
- Single-node mode
- Active time series: 130K
- Ingestion rate: 24K new samples per second
- Total number of datapoints: 160 billion
- Churn rate: 20K new time series per day
- Data size on disk: 82 GB
- Index size on disk: 300 MB
- Query rate:
- `/api/v1/query_range`: 2 queries per second
- `/api/v1/query`: 1.2 queries per second
- Query duration:
- 99th percentile: 6.5 milliseconds
- 90th percentile: 4 milliseconds
- median: 1 millisecond
- CPU usage: 0.1 CPU cores
- RAM usage: 2.8 GB
## Groove X
[Groove X](https://groove-x.com/en/) designs and produces robotics solutions. Its mission is to bring out humanity's full potential through robotics.
> We needed a monitoring solution for device (robot and charge station) health monitoring. At first we used the Prometheus server, and then migrated to Thanos. But it was difficult to manage the Thanos cluster, and we also had a performance issue (long latency on requests). Colopl, Inc. used VictoriaMetrics, so we got interested in it. We built another k8s cluster beside our original Thanos cluster and tried VictoriaMetrics in parallel for a while. It worked better, and finally we decided to switch to VictoriaMetrics because it provides low latency, is in active development and is easy to maintain.
> We like the performance and scalability provided by VictoriaMetrics. We use metrics in our daily work, and long latency would be a big problem. Also, metrics correctness is important. We reported some inconsistencies with Prometheus during the evaluation period and received quick feedback from the VictoriaMetrics developers.
Junya Hayashi, Senior Software Engineer, Groove X
Numbers:
- Active time series: 14 million
- Ingestion rate: 235K samples per second
- Total number of datapoints: 3.2 trillion
- Churn rate: 420K new time series per day
- Data size on disk: 2 TB
- Index size on disk: 52 GB
- Query duration:
- 99th percentile: 2.6 seconds
- 90th percentile: 0.4 seconds
- median: 0.006 seconds
## Idealo.de
[idealo.de](https://www.idealo.de/) is the leading price comparison website in Germany. We use Prometheus for metrics on our container platform.
@@ -231,12 +291,12 @@ VictoriaMetrics in production is very stable for us and uses only a fraction of
Numbers:
- The number of active time series per VictoriaMetrics instance is 21M.
- Total ingestion rate 120k metrics per second.
- The total number of datapoints 3.1 trillion.
- The average time series churn rate is ~9M per day.
- The average query rate is ~20 per second. Response time for 99th quantile is 120ms.
- Retention: 13 months.
- The number of active time series per VictoriaMetrics instance is 21M
- Total ingestion rate 120k metrics per second
- The total number of datapoints 3.1 trillion
- The average time series churn rate is ~9M per day
- The average query rate is ~20 per second. Response time for 99th quantile is 120ms
- Retention: 13 months
- Size of all datapoints: 3.5 TB
@@ -249,12 +309,37 @@ MHI Vestas Offshore Wind is using VictoriaMetrics to ingest and visualize sensor
Numbers with current, limited roll out:
- Active time series: 270K
- Ingestion rate: 70K/sec
- Total number of datapoints: 850 billion
- Ingestion rate: 70K samples per second
- Total number of datapoints: 850 billion
- Data size on disk: 800 GiB
- Retention period: 3 years
## Sensedia
[Sensedia](https://www.sensedia.com) is a leading integration solutions provider with more than 120 enterprise clients across a range of sectors. Its world-class portfolio includes: an API Management Platform, Adaptive Governance, Events Hub, Service Mesh, Cloud Connectors and Strategic Professional Services' teams.
> Our initial requirements for a monitoring solution: the metrics must be stored for 15 days, the solution must be scalable and must offer high availability of the metrics. It must be integrated into Grafana and allow the use of PromQL when creating/editing dashboards in Grafana to obtain metrics from the Prometheus datasource. The solution also needs to receive data from Prometheus using HTTPS and needs to request a login and password to write/read the metrics. Details are available [in this article](https://nordicapis.com/api-monitoring-with-prometheus-grafana-alertmanager-and-victoriametrics/).
> We evaluated VictoriaMetrics, InfluxDB OpenSource and Enterprise, ElasticSearch, Thanos, Cortex, TimescaleDB/PostgreSQL and M3DB. We selected VictoriaMetrics because it has [good community support](http://slack.victoriametrics.com/), [good documentation](https://docs.victoriametrics.com/) and it just works.
> We started using VictoriaMetrics in the production environment days before the start of Black Friday in 2020, the period of greatest use of the Sensedia API-Platform by customers. There was a record volume of metrics generated, and there was no instability in the monitoring stack.
> We use VictoriaMetrics in cluster mode for centralized storage of metrics collected by several Prometheus servers installed in Kubernetes clusters from two different cloud providers. VictoriaMetrics has also been integrated with Grafana to view metrics.
[Aecio dos Santos Pires](http://aeciopires.com), Cloud Architect, Sensedia.
Numbers:
- Cluster mode
- Active time series: 700K
- Ingestion rate: 70K datapoints per second
- Datapoints: 112 billion
- Data size on disk: 82 GB
- Index size on disk: 30 GB
- Churn rate: 3 million new time series per day
- Query response time (99th percentile): 500ms
## Synthesio
[Synthesio](https://www.synthesio.com/) is the leading social intelligence tool for social media monitoring and analytics.
@@ -263,13 +348,13 @@ Numbers with current, limited roll out:
Numbers:
- Single node
- Active time series - 5 Million
- Datapoints: 1.25 Trillion
- Ingestion rate - 550k datapoints per second
- Disk usage - 150gb
- Index size - 3gb
- Query duration 99th percentile - 147ms
- Churn rate - 100 new time series per hour
- Active time series: 5 million
- Datapoints: 1.25 trillion
- Ingestion rate: 550K datapoints per second
- Disk usage: 150 GB
- Index size: 3 GB
- Query duration 99th percentile: 147ms
- Churn rate: 2400 new time series per day
## Wedos.com

View file

@@ -29,9 +29,9 @@ Join [our Slack](http://slack.victoriametrics.com/) or [contact us](mailto:info@
VictoriaMetrics cluster consists of the following services:
- `vmstorage` - stores the data
- `vminsert` - proxies the ingested data to `vmstorage` shards using consistent hashing
- `vmselect` - performs incoming queries using the data from `vmstorage`
- `vmstorage` - stores the raw data and returns the queried data on the given time range for the given label filters
- `vminsert` - accepts the ingested data and spreads it among `vmstorage` nodes according to consistent hashing over metric name and all its labels
- `vmselect` - performs incoming queries by fetching the needed data from all the configured `vmstorage` nodes
Each service may scale independently and may run on the most suitable hardware.
`vmstorage` nodes don't know about each other, don't communicate with each other and don't share any data.
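As a toy illustration of the routing idea (not `vminsert`'s actual hashing scheme), a series key built from the metric name and all its labels can be hashed to pick a `vmstorage` node:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickStorageNode hashes the canonical series key (metric name plus all
// labels) and maps it onto one of the configured vmstorage nodes.
func pickStorageNode(seriesKey string, nodes []string) string {
	h := fnv.New64a()
	h.Write([]byte(seriesKey))
	return nodes[h.Sum64()%uint64(len(nodes))]
}

func main() {
	nodes := []string{"vmstorage-1:8400", "vmstorage-2:8400", "vmstorage-3:8400"}
	key := `http_requests_total{instance="host-1",job="api"}`
	fmt.Println(pickStorageNode(key, nodes)) // the same key always maps to the same node
}
```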

View file

@@ -4,6 +4,8 @@ sort: 19
# VictoriaMetrics Cluster Per Tenant Statistic
***The per-tenant statistic is a part of the [enterprise package](https://victoriametrics.com/enterprise.html)***
<img alt="cluster-per-tenant-stat" src="per-tenant-stats.jpg">
VictoriaMetrics cluster for enterprise provides various usage metrics and statistics per tenant:

View file

@@ -32,7 +32,7 @@ See [features available for enterprise customers](https://victoriametrics.com/en
## Case studies and talks
Alphabetically sorted links to case studies:
Case studies:
* [adidas](https://docs.victoriametrics.com/CaseStudies.html#adidas)
* [Adsterra](https://docs.victoriametrics.com/CaseStudies.html#adsterra)
@@ -41,14 +41,19 @@ Alphabetically sorted links to case studies:
* [CERN](https://docs.victoriametrics.com/CaseStudies.html#cern)
* [COLOPL](https://docs.victoriametrics.com/CaseStudies.html#colopl)
* [Dreamteam](https://docs.victoriametrics.com/CaseStudies.html#dreamteam)
* [German Research Center for Artificial Intelligence](https://docs.victoriametrics.com/CaseStudies.html#german-research-center-for-artificial-intelligence)
* [Groove X](https://docs.victoriametrics.com/CaseStudies.html#groove-x)
* [Idealo.de](https://docs.victoriametrics.com/CaseStudies.html#idealode)
* [MHI Vestas Offshore Wind](https://docs.victoriametrics.com/CaseStudies.html#mhi-vestas-offshore-wind)
* [Sensedia](https://docs.victoriametrics.com/CaseStudies.html#sensedia)
* [Synthesio](https://docs.victoriametrics.com/CaseStudies.html#synthesio)
* [Wedos.com](https://docs.victoriametrics.com/CaseStudies.html#wedoscom)
* [Wix.com](https://docs.victoriametrics.com/CaseStudies.html#wixcom)
* [Zerodha](https://docs.victoriametrics.com/CaseStudies.html#zerodha)
* [zhihu](https://docs.victoriametrics.com/CaseStudies.html#zhihu)
See also [articles and slides about VictoriaMetrics from our users](https://docs.victoriametrics.com/Articles.html#third-party-articles-and-slides-about-victoriametrics)
## Prominent features
@@ -348,6 +353,7 @@ Currently the following [scrape_config](https://prometheus.io/docs/prometheus/la
* [dockerswarm_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config)
* [eureka_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config)
* [digitalocean_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#digitalocean_sd_config)
* [http_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config)
Other `*_sd_config` types will be supported in the future.
@@ -1181,7 +1187,7 @@ VictoriaMetrics de-duplicates data points if `-dedup.minScrapeInterval` command-
is set to positive duration. For example, `-dedup.minScrapeInterval=60s` would de-duplicate data points
on the same time series if they fall within the same discrete 60s bucket. The earliest data point will be kept. In the case of equal timestamps, an arbitrary data point will be kept.
The recommended value for `-dedup.minScrapeInterval` must equal to `scrape_interval` config from Prometheus configs.
The recommended value for `-dedup.minScrapeInterval` must equal the `scrape_interval` setting in Prometheus configs. It is recommended to use a single `scrape_interval` across all scrape targets. See [this article](https://www.robustperception.io/keep-it-simple-scrape_interval-id) for details.
The de-duplication reduces disk space usage if multiple identically configured [vmagent](https://docs.victoriametrics.com/vmagent.html) or Prometheus instances in HA pair
write data to the same VictoriaMetrics instance. These vmagent or Prometheus instances must have identical
@@ -1750,6 +1756,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 30s)
-promscrape.gceSDCheckInterval duration
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
-promscrape.httpSDCheckInterval duration
Interval for checking for changes in http service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
-promscrape.kubernetes.apiServerTimeout duration
How frequently to reload the full state from Kubernetes API server (default 30m0s)
-promscrape.kubernetesSDCheckInterval duration

View file

@@ -183,6 +183,8 @@ The following scrape types in [scrape_config](https://prometheus.io/docs/prometh
See [eureka_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config) for details.
* `digitalocean_sd_configs` is for scraping targets registered in [DigitalOcean](https://www.digitalocean.com/).
See [digitalocean_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#digitalocean_sd_config) for details.
* `http_sd_configs` is for scraping targets registered via HTTP-based service discovery.
See [http_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config) for details.
Please file feature requests to [our issue tracker](https://github.com/VictoriaMetrics/VictoriaMetrics/issues) if you need other service discovery mechanisms to be supported by `vmagent`.
@@ -657,6 +659,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 30s)
-promscrape.gceSDCheckInterval duration
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
-promscrape.httpSDCheckInterval duration
Interval for checking for changes in http service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
-promscrape.kubernetes.apiServerTimeout duration
How frequently to reload the full state from Kubernetes API server (default 30m0s)
-promscrape.kubernetesSDCheckInterval duration

go.mod
View file

@@ -1,7 +1,6 @@
module github.com/VictoriaMetrics/VictoriaMetrics
require (
cloud.google.com/go v0.84.0 // indirect
cloud.google.com/go/storage v1.15.0
github.com/VictoriaMetrics/fastcache v1.6.0
@@ -11,7 +10,7 @@ require (
github.com/VictoriaMetrics/metrics v1.17.2
github.com/VictoriaMetrics/metricsql v0.15.0
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/aws/aws-sdk-go v1.38.57
github.com/aws/aws-sdk-go v1.38.66
github.com/cespare/xxhash/v2 v2.1.1
github.com/cheggaaa/pb/v3 v3.0.8
github.com/cpuguy83/go-md2man/v2 v2.0.0 // indirect
@@ -24,7 +23,7 @@ require (
github.com/mattn/go-isatty v0.0.13 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/oklog/ulid v1.3.1
github.com/prometheus/common v0.28.0 // indirect
github.com/prometheus/common v0.29.0 // indirect
github.com/prometheus/prometheus v1.8.2-0.20201119142752-3ad25a6dc3d9
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/urfave/cli/v2 v2.3.0
@@ -35,10 +34,11 @@ require (
github.com/valyala/histogram v1.1.2
github.com/valyala/quicktemplate v1.6.3
go.uber.org/atomic v1.8.0 // indirect
golang.org/x/net v0.0.0-20210525063256-abc453219eb5
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c
golang.org/x/sys v0.0.0-20210608053332-aa57babbf139
google.golang.org/api v0.48.0
golang.org/x/net v0.0.0-20210614182718-04defd469f4e
golang.org/x/oauth2 v0.0.0-20210622215436-a8dc77f794b6
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22
golang.org/x/tools v0.1.4 // indirect
google.golang.org/api v0.49.0
gopkg.in/yaml.v2 v2.4.0
)

go.sum
View file

@@ -146,8 +146,8 @@ github.com/aws/aws-sdk-go v1.29.16/go.mod h1:1KvfttTE3SPKMpo8g2c6jL3ZKfXtFvKscTg
github.com/aws/aws-sdk-go v1.30.12/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.38.57 h1:Jo6uOnWNbj4jL/8t/XUrHOKm1J6pPcYFhGzda20UcUk=
github.com/aws/aws-sdk-go v1.38.57/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.38.66 h1:z1nEnVOb0ctiHURel3sFxZdXDrXnG6Fa+Dly3Kb0KVo=
github.com/aws/aws-sdk-go v1.38.66/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/benbjohnson/immutable v0.2.1/go.mod h1:uc6OHo6PN2++n98KHLxW8ef4W42ylHiQSENghE1ezxI=
github.com/benbjohnson/tmpl v1.0.0/go.mod h1:igT620JFIi44B6awvU9IsDhR77IXWtFigTLil/RPdps=
@@ -765,8 +765,8 @@ github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB8
github.com/prometheus/common v0.14.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.15.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.28.0 h1:vGVfV9KrDTvWt5boZO0I19g2E3CsWfpPPKZM9dt3mEw=
github.com/prometheus/common v0.28.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.29.0 h1:3jqPBvKT4OHAbje2Ql7KeaaSicDBCxMYwEJU1zRJceE=
github.com/prometheus/common v0.29.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
@@ -1040,8 +1040,9 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5 h1:wjuX4b5yYQnEQHzd+CBcrcC6OVR2J1CN6mUy0oSxIPo=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210614182718-04defd469f4e h1:XpT3nA5TvE525Ne3hInMh6+GETgn27Zfm9dxsThnX2Q=
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1054,8 +1055,10 @@ golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210413134643-5e61552d6c78/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c h1:pkQiBZBvdos9qq4wBAHqlzuZHEXo07pqV06ef90u1WI=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210615190721-d04028783cf1/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210622215436-a8dc77f794b6 h1:pERGha6IgvMUdN6oJbwjZTt1ai5/O855Qmv1Bsc0v18=
golang.org/x/oauth2 v0.0.0-20210622215436-a8dc77f794b6/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1149,8 +1152,8 @@ golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210608053332-aa57babbf139 h1:C+AwYEtBp/VQwoLntUmQ/yx3MS9vmZaKNdw5eOpoQe8=
golang.org/x/sys v0.0.0-20210608053332-aa57babbf139/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22 h1:RqytpXGR1iVNX7psjB3ff8y7sNFinVFvkx1c8SjBkio=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1243,8 +1246,10 @@ golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.2 h1:kRBLX7v7Af8W7Gdbbc908OJcdgtK8bOz9Uaj8/F1ACA=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.4 h1:cVngSRcfgyZCzys3KYOpCFa+4dqX/Oub9tAq00ttGVs=
golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1280,8 +1285,9 @@ google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBz
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
google.golang.org/api v0.45.0/go.mod h1:ISLIJCedJolbZvDfAk+Ctuq5hf+aJ33WgtUsfyFoLXA=
google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
google.golang.org/api v0.48.0 h1:RDAPWfNFY06dffEXfn7hZF5Fr1ZbnChzfQZAPyBd1+I=
google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
google.golang.org/api v0.49.0 h1:gjIBDxlTG7vnzMmEnYwTnvLTF8Rjzo+ETCgEX1YZ/fY=
google.golang.org/api v0.49.0/go.mod h1:BECiH72wsfwUvOVn3+btPD5WHi0LzavZReBndi42L18=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -1341,8 +1347,9 @@ google.golang.org/genproto v0.0.0-20210420162539-3c870d7478d2/go.mod h1:P3QM42oQ
google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d h1:KzwjikDymrEmYYbdyfievTwjEeGlu+OM6oiKBkF3Jfg=
google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210617175327-b9e0b3197ced h1:c5geK1iMU3cDKtFrCVQIcjR3W+JOZMuhIyICMCTbtus=
google.golang.org/genproto v0.0.0-20210617175327-b9e0b3197ced/go.mod h1:SzzZ/N+nwJDaO1kznhnlzqS8ocJICar6hYhVyhi++24=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=

View file

@@ -21,10 +21,12 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/consul"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/digitalocean"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/dns"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/docker"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/dockerswarm"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/ec2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/eureka"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/gce"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/http"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/kubernetes"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/openstack"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
@@ -118,17 +120,19 @@ type ScrapeConfig struct {
MetricRelabelConfigs []promrelabel.RelabelConfig `yaml:"metric_relabel_configs,omitempty"`
SampleLimit int `yaml:"sample_limit,omitempty"`
StaticConfigs []StaticConfig `yaml:"static_configs,omitempty"`
ConsulSDConfigs []consul.SDConfig `yaml:"consul_sd_configs,omitempty"`
DigitaloceanSDConfigs []digitalocean.SDConfig `yaml:"digitalocean_sd_configs,omitempty"`
DNSSDConfigs []dns.SDConfig `yaml:"dns_sd_configs,omitempty"`
DockerSDConfigs []docker.SDConfig `yaml:"docker_sd_configs,omitempty"`
DockerSwarmSDConfigs []dockerswarm.SDConfig `yaml:"dockerswarm_sd_configs,omitempty"`
EC2SDConfigs []ec2.SDConfig `yaml:"ec2_sd_configs,omitempty"`
EurekaSDConfigs []eureka.SDConfig `yaml:"eureka_sd_configs,omitempty"`
FileSDConfigs []FileSDConfig `yaml:"file_sd_configs,omitempty"`
GCESDConfigs []gce.SDConfig `yaml:"gce_sd_configs,omitempty"`
HTTPSDConfigs []http.SDConfig `yaml:"http_sd_configs,omitempty"`
KubernetesSDConfigs []kubernetes.SDConfig `yaml:"kubernetes_sd_configs,omitempty"`
OpenStackSDConfigs []openstack.SDConfig `yaml:"openstack_sd_configs,omitempty"`
ConsulSDConfigs []consul.SDConfig `yaml:"consul_sd_configs,omitempty"`
EurekaSDConfigs []eureka.SDConfig `yaml:"eureka_sd_configs,omitempty"`
DockerSwarmSDConfigs []dockerswarm.SDConfig `yaml:"dockerswarm_sd_configs,omitempty"`
DNSSDConfigs []dns.SDConfig `yaml:"dns_sd_configs,omitempty"`
EC2SDConfigs []ec2.SDConfig `yaml:"ec2_sd_configs,omitempty"`
GCESDConfigs []gce.SDConfig `yaml:"gce_sd_configs,omitempty"`
DigitaloceanSDConfigs []digitalocean.SDConfig `yaml:"digitalocean_sd_configs,omitempty"`
StaticConfigs []StaticConfig `yaml:"static_configs,omitempty"`
// These options are supported only by lib/promscrape.
RelabelDebug bool `yaml:"relabel_debug,omitempty"`
@@ -160,30 +164,39 @@ func (sc *ScrapeConfig) mustStart(baseDir string) {
}
func (sc *ScrapeConfig) mustStop() {
for i := range sc.ConsulSDConfigs {
sc.ConsulSDConfigs[i].MustStop()
}
for i := range sc.DigitaloceanSDConfigs {
sc.DigitaloceanSDConfigs[i].MustStop()
}
for i := range sc.DNSSDConfigs {
sc.DNSSDConfigs[i].MustStop()
}
for i := range sc.DockerSDConfigs {
sc.DockerSDConfigs[i].MustStop()
}
for i := range sc.DockerSwarmSDConfigs {
sc.DockerSwarmSDConfigs[i].MustStop()
}
for i := range sc.EC2SDConfigs {
sc.EC2SDConfigs[i].MustStop()
}
for i := range sc.EurekaSDConfigs {
sc.EurekaSDConfigs[i].MustStop()
}
for i := range sc.GCESDConfigs {
sc.GCESDConfigs[i].MustStop()
}
for i := range sc.HTTPSDConfigs {
sc.HTTPSDConfigs[i].MustStop()
}
for i := range sc.KubernetesSDConfigs {
sc.KubernetesSDConfigs[i].MustStop()
}
for i := range sc.OpenStackSDConfigs {
sc.OpenStackSDConfigs[i].MustStop()
}
for i := range sc.ConsulSDConfigs {
sc.ConsulSDConfigs[i].MustStop()
}
for i := range sc.EurekaSDConfigs {
sc.EurekaSDConfigs[i].MustStop()
}
for i := range sc.DockerSwarmSDConfigs {
sc.DockerSwarmSDConfigs[i].MustStop()
}
for i := range sc.DNSSDConfigs {
sc.DNSSDConfigs[i].MustStop()
}
for i := range sc.EC2SDConfigs {
sc.EC2SDConfigs[i].MustStop()
}
for i := range sc.GCESDConfigs {
sc.GCESDConfigs[i].MustStop()
}
}
// FileSDConfig represents file-based service discovery config.
@@ -272,6 +285,281 @@ func getSWSByJob(sws []*ScrapeWork) map[string][]*ScrapeWork {
return m
}
// getConsulSDScrapeWork returns `consul_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getConsulSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.ConsulSDConfigs {
sdc := &sc.ConsulSDConfigs[j]
var okLocal bool
dst, okLocal = appendSDScrapeWork(dst, sdc, cfg.baseDir, sc.swc, "consul_sd_config")
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering consul targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getDigitalOceanDScrapeWork returns `digitalocean_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getDigitalOceanDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.DigitaloceanSDConfigs {
sdc := &sc.DigitaloceanSDConfigs[j]
var okLocal bool
dst, okLocal = appendSDScrapeWork(dst, sdc, cfg.baseDir, sc.swc, "digitalocean_sd_config")
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering digitalocean targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getDNSSDScrapeWork returns `dns_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getDNSSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.DNSSDConfigs {
sdc := &sc.DNSSDConfigs[j]
var okLocal bool
dst, okLocal = appendSDScrapeWork(dst, sdc, cfg.baseDir, sc.swc, "dns_sd_config")
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering dns targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getDockerSDScrapeWork returns `docker_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getDockerSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.DockerSDConfigs {
sdc := &sc.DockerSDConfigs[j]
var okLocal bool
dst, okLocal = appendSDScrapeWork(dst, sdc, cfg.baseDir, sc.swc, "docker_sd_config")
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering docker targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getDockerSwarmSDScrapeWork returns `dockerswarm_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getDockerSwarmSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.DockerSwarmSDConfigs {
sdc := &sc.DockerSwarmSDConfigs[j]
var okLocal bool
dst, okLocal = appendSDScrapeWork(dst, sdc, cfg.baseDir, sc.swc, "dockerswarm_sd_config")
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering dockerswarm targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getEC2SDScrapeWork returns `ec2_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getEC2SDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.EC2SDConfigs {
sdc := &sc.EC2SDConfigs[j]
var okLocal bool
dst, okLocal = appendSDScrapeWork(dst, sdc, cfg.baseDir, sc.swc, "ec2_sd_config")
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering ec2 targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getEurekaSDScrapeWork returns `eureka_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getEurekaSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.EurekaSDConfigs {
sdc := &sc.EurekaSDConfigs[j]
var okLocal bool
dst, okLocal = appendSDScrapeWork(dst, sdc, cfg.baseDir, sc.swc, "eureka_sd_config")
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering eureka targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getFileSDScrapeWork returns `file_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getFileSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
// Create a map for the previous scrape work.
swsMapPrev := make(map[string][]*ScrapeWork)
for _, sw := range prev {
filepath := promrelabel.GetLabelValueByName(sw.Labels, "__vm_filepath")
if len(filepath) == 0 {
logger.Panicf("BUG: missing `__vm_filepath` label")
} else {
swsMapPrev[filepath] = append(swsMapPrev[filepath], sw)
}
}
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
for j := range sc.FileSDConfigs {
sdc := &sc.FileSDConfigs[j]
dst = sdc.appendScrapeWork(dst, swsMapPrev, cfg.baseDir, sc.swc)
}
}
return dst
}
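// Editorial note: file SD attaches the internal `__vm_filepath` label to every
// target it produces, which is why the previous targets above can be grouped
// per source file and reused when a given file temporarily fails to load.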
// getGCESDScrapeWork returns `gce_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getGCESDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.GCESDConfigs {
sdc := &sc.GCESDConfigs[j]
var okLocal bool
dst, okLocal = appendSDScrapeWork(dst, sdc, cfg.baseDir, sc.swc, "gce_sd_config")
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering gce targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getHTTPDScrapeWork returns `http_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getHTTPDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst)
ok := true
for j := range sc.HTTPSDConfigs {
sdc := &sc.HTTPSDConfigs[j]
var okLocal bool
dst, okLocal = appendSDScrapeWork(dst, sdc, cfg.baseDir, sc.swc, "http_sd_config")
if ok {
ok = okLocal
}
}
if ok {
continue
}
swsPrev := swsPrevByJob[sc.swc.jobName]
if len(swsPrev) > 0 {
logger.Errorf("there were errors when discovering http targets for job %q, so preserving the previous targets", sc.swc.jobName)
dst = append(dst[:dstLen], swsPrev...)
}
}
return dst
}
// getKubernetesSDScrapeWork returns `kubernetes_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getKubernetesSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev)
@ -333,225 +621,6 @@ func (cfg *Config) getOpenStackSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
return dst
}
// getStaticScrapeWork returns `static_configs` ScrapeWork from cfg.
func (cfg *Config) getStaticScrapeWork() []*ScrapeWork {
var dst []*ScrapeWork

View file

@ -38,19 +38,27 @@ func getAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
}
func newAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
hcc := sdc.HTTPClientConfig
token, err := getToken(sdc.Token)
if err != nil {
return nil, err
}
if token != "" {
if hcc.BearerToken != "" {
return nil, fmt.Errorf("cannot set both token and bearer_token configs")
}
hcc.BearerToken = token
}
if len(sdc.Username) > 0 {
if hcc.BasicAuth != nil {
return nil, fmt.Errorf("cannot set both username and basic_auth configs")
}
hcc.BasicAuth = &promauth.BasicAuthConfig{
Username: sdc.Username,
Password: sdc.Password,
}
}
ac, err := hcc.NewConfig(baseDir)
if err != nil {
return nil, fmt.Errorf("cannot parse auth config: %w", err)
}
@ -82,7 +90,13 @@ func newAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
return nil, err
}
namespace := sdc.Namespace
// The default namespace can be detected via the CONSUL_NAMESPACE env var.
if namespace == "" {
namespace = os.Getenv("CONSUL_NAMESPACE")
}
cw := newConsulWatcher(client, sdc, dc, namespace)
cfg := &apiConfig{
tagSeparator: tagSeparator,
consulWatcher: cw,

View file

@ -14,12 +14,15 @@ type SDConfig struct {
Server string `yaml:"server,omitempty"`
Token *string `yaml:"token"`
Datacenter string `yaml:"datacenter"`
// Namespace is only supported by Consul Enterprise.
// https://www.consul.io/docs/enterprise/namespaces
Namespace string `yaml:"namespace,omitempty"`
Scheme string `yaml:"scheme,omitempty"`
Username string `yaml:"username"`
Password string `yaml:"password"`
HTTPClientConfig promauth.HTTPClientConfig `yaml:",inline"`
ProxyURL proxy.URL `yaml:"proxy_url,omitempty"`
ProxyClientConfig promauth.ProxyClientConfig `yaml:",inline"`
TLSConfig *promauth.TLSConfig `yaml:"tls_config,omitempty"`
Services []string `yaml:"services,omitempty"`
Tags []string `yaml:"tags,omitempty"`
NodeMeta map[string]string `yaml:"node_meta,omitempty"`

View file

@ -37,6 +37,7 @@ type Service struct {
ID string
Service string
Address string
Namespace string
Port int
Tags []string
Meta map[string]string
@ -81,6 +82,7 @@ func (sn *ServiceNode) appendTargetLabels(ms []map[string]string, serviceName, t
"__meta_consul_address": sn.Node.Address,
"__meta_consul_dc": sn.Node.Datacenter,
"__meta_consul_health": aggregatedStatus(sn.Checks),
"__meta_consul_namespace": sn.Service.Namespace,
"__meta_consul_node": sn.Node.Node,
"__meta_consul_service": serviceName,
"__meta_consul_service_address": sn.Service.Address,

View file

@ -64,7 +64,7 @@ func TestParseServiceNodesSuccess(t *testing.T) {
"Passing": 10,
"Warning": 1
},
"Namespace": "default"
"Namespace": "ns-dev"
},
"Checks": [
{
@ -118,6 +118,7 @@ func TestParseServiceNodesSuccess(t *testing.T) {
"__meta_consul_dc": "dc1",
"__meta_consul_health": "passing",
"__meta_consul_metadata_instance_type": "t2.medium",
"__meta_consul_namespace": "ns-dev",
"__meta_consul_node": "foobar",
"__meta_consul_service": "redis",
"__meta_consul_service_address": "10.1.10.12",

View file

@ -42,11 +42,14 @@ type serviceWatcher struct {
}
// newConsulWatcher creates new watcher and start background service discovery for Consul.
func newConsulWatcher(client *discoveryutils.Client, sdc *SDConfig, datacenter string) *consulWatcher {
func newConsulWatcher(client *discoveryutils.Client, sdc *SDConfig, datacenter, namespace string) *consulWatcher {
baseQueryArgs := "?dc=" + url.QueryEscape(datacenter)
if sdc.AllowStale {
baseQueryArgs += "&stale"
}
if namespace != "" {
baseQueryArgs += "&ns=" + namespace
}
for k, v := range sdc.NodeMeta {
baseQueryArgs += "&node-meta=" + url.QueryEscape(k+":"+v)
}
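// For illustration (example values): datacenter "dc1" with AllowStale set,
// namespace "ns-dev" and node_meta {"rack":"r1"} produces
// baseQueryArgs == "?dc=dc1&stale&ns=ns-dev&node-meta=rack%3Ar1".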

View file

@ -1,9 +1,11 @@
package digitalocean
import (
"flag"
"fmt"
"net/url"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
@ -11,6 +13,11 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
)
// SDCheckInterval defines interval for targets refresh.
var SDCheckInterval = flag.Duration("promscrape.digitaloceanSDCheckInterval", time.Minute, "Interval for checking for changes in digital ocean. "+
"This works only if digitalocean_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#digitalocean_sd_config for details")
// SDConfig represents service discovery config for digital ocean.
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#digitalocean_sd_config
@ -146,3 +153,8 @@ func addDropletLabels(droplets []droplet, defaultPort int) []map[string]string {
}
return ms
}
// MustStop stops further usage for sdc.
func (sdc *SDConfig) MustStop() {
configMap.Delete(sdc)
}

View file

@ -2,6 +2,7 @@ package dns
import (
"context"
"flag"
"fmt"
"net"
"strconv"
@ -12,6 +13,11 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
)
// SDCheckInterval defines interval for targets refresh.
var SDCheckInterval = flag.Duration("promscrape.dnsSDCheckInterval", 30*time.Second, "Interval for checking for changes in dns. "+
"This works only if dns_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config for details")
// SDConfig represents service discovery config for DNS.
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config

View file

@ -0,0 +1,86 @@
package docker
import (
"encoding/json"
"fmt"
"net/url"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
)
var configMap = discoveryutils.NewConfigMap()
type apiConfig struct {
client *discoveryutils.Client
port int
// filtersQueryArg contains the escaped `filters` query arg to add to each request to the Docker API.
filtersQueryArg string
}
func getAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
v, err := configMap.Get(sdc, func() (interface{}, error) { return newAPIConfig(sdc, baseDir) })
if err != nil {
return nil, err
}
return v.(*apiConfig), nil
}
func newAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
cfg := &apiConfig{
port: sdc.Port,
filtersQueryArg: getFiltersQueryArg(sdc.Filters),
}
if cfg.port == 0 {
cfg.port = 80
}
ac, err := sdc.HTTPClientConfig.NewConfig(baseDir)
if err != nil {
return nil, fmt.Errorf("cannot parse auth config: %w", err)
}
proxyAC, err := sdc.ProxyClientConfig.NewConfig(baseDir)
if err != nil {
return nil, fmt.Errorf("cannot parse proxy auth config: %w", err)
}
client, err := discoveryutils.NewClient(sdc.Host, ac, sdc.ProxyURL, proxyAC)
if err != nil {
return nil, fmt.Errorf("cannot create HTTP client for %q: %w", sdc.Host, err)
}
cfg.client = client
return cfg, nil
}
func (cfg *apiConfig) getAPIResponse(path string) ([]byte, error) {
if len(cfg.filtersQueryArg) > 0 {
separator := "?"
if strings.Contains(path, "?") {
separator = "&"
}
path += separator + "filters=" + cfg.filtersQueryArg
}
return cfg.client.GetAPIResponse(path)
}
func getFiltersQueryArg(filters []Filter) string {
if len(filters) == 0 {
return ""
}
m := make(map[string]map[string]bool)
for _, f := range filters {
x := m[f.Name]
if x == nil {
x = make(map[string]bool)
m[f.Name] = x
}
for _, value := range f.Values {
x[value] = true
}
}
buf, err := json.Marshal(m)
if err != nil {
logger.Panicf("BUG: unexpected error in json.Marshal: %s", err)
}
return url.QueryEscape(string(buf))
}
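// For example (cf. TestGetFiltersQueryArg below), the filters
// [{name:"name", values:["foo","bar"]}, {name:"xxx", values:["aa"]}] are
// marshaled to {"name":{"bar":true,"foo":true},"xxx":{"aa":true}} and then
// URL-escaped before being appended as the `filters` query arg.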

View file

@ -0,0 +1,26 @@
package docker
import (
"testing"
)
func TestGetFiltersQueryArg(t *testing.T) {
f := func(filters []Filter, queryArgExpected string) {
t.Helper()
queryArg := getFiltersQueryArg(filters)
if queryArg != queryArgExpected {
t.Fatalf("unexpected query arg; got %s; want %s", queryArg, queryArgExpected)
}
}
f(nil, "")
f([]Filter{
{
Name: "name",
Values: []string{"foo", "bar"},
},
{
Name: "xxx",
Values: []string{"aa"},
},
}, "%7B%22name%22%3A%7B%22bar%22%3Atrue%2C%22foo%22%3Atrue%7D%2C%22xxx%22%3A%7B%22aa%22%3Atrue%7D%7D")
}

View file

@ -0,0 +1,111 @@
package docker
import (
"encoding/json"
"fmt"
"strconv"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
)
// See https://github.com/moby/moby/blob/314759dc2f4745925d8dec6d15acc7761c6e5c92/docs/api/v1.41.yaml#L4024
type container struct {
Id string
Names []string
Labels map[string]string
Ports []struct {
IP string
PrivatePort int
PublicPort int
Type string
}
HostConfig struct {
NetworkMode string
}
NetworkSettings struct {
Networks map[string]struct {
IPAddress string
NetworkID string
}
}
}
func getContainersLabels(cfg *apiConfig) ([]map[string]string, error) {
networkLabels, err := getNetworksLabelsByNetworkID(cfg)
if err != nil {
return nil, err
}
containers, err := getContainers(cfg)
if err != nil {
return nil, err
}
return addContainersLabels(containers, networkLabels, cfg.port), nil
}
func getContainers(cfg *apiConfig) ([]container, error) {
resp, err := cfg.getAPIResponse("/containers/json")
if err != nil {
return nil, fmt.Errorf("cannot query dockerd api for containers: %w", err)
}
return parseContainers(resp)
}
func parseContainers(data []byte) ([]container, error) {
var containers []container
if err := json.Unmarshal(data, &containers); err != nil {
return nil, fmt.Errorf("cannot parse containers: %w", err)
}
return containers, nil
}
func addContainersLabels(containers []container, networkLabels map[string]map[string]string, defaultPort int) []map[string]string {
var ms []map[string]string
for i := range containers {
c := &containers[i]
if len(c.Names) == 0 {
continue
}
for _, n := range c.NetworkSettings.Networks {
var added bool
for _, p := range c.Ports {
if p.Type != "tcp" {
continue
}
m := map[string]string{
"__address__": discoveryutils.JoinHostPort(n.IPAddress, p.PrivatePort),
"__meta_docker_network_ip": n.IPAddress,
"__meta_docker_port_private": strconv.Itoa(p.PrivatePort),
}
if p.PublicPort > 0 {
m["__meta_docker_port_public"] = strconv.Itoa(p.PublicPort)
m["__meta_docker_port_public_ip"] = p.IP
}
addCommonLabels(m, c, networkLabels[n.NetworkID])
ms = append(ms, m)
added = true
}
if !added {
// Use fallback port when no exposed ports are available or if all are non-TCP
m := map[string]string{
"__address__": discoveryutils.JoinHostPort(n.IPAddress, defaultPort),
"__meta_docker_network_ip": n.IPAddress,
}
addCommonLabels(m, c, networkLabels[n.NetworkID])
ms = append(ms, m)
}
}
}
return ms
}
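// Editorial note: the loops above emit one target per (network, exposed TCP port)
// pair, and fall back to a single target on the default port (cfg.port at the
// call site) when a container exposes no TCP ports on a given network.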
func addCommonLabels(m map[string]string, c *container, networkLabels map[string]string) {
m["__meta_docker_container_id"] = c.Id
m["__meta_docker_container_name"] = c.Names[0]
m["__meta_docker_container_network_mode"] = c.HostConfig.NetworkMode
for k, v := range c.Labels {
m["__meta_docker_container_label_"+discoveryutils.SanitizeLabelName(k)] = v
}
for k, v := range networkLabels {
m[k] = v
}
}

View file

@ -0,0 +1,404 @@
package docker
import (
"reflect"
"testing"
)
func Test_parseContainers(t *testing.T) {
type args struct {
data []byte
}
tests := []struct {
name string
args args
want []container
wantErr bool
}{
{
name: "parse two containers",
args: args{
data: []byte(`[
{
"Id": "90bc3b31aa13da5c0b11af2e228d54b38428a84e25d4e249ae9e9c95e51a0700",
"Names": [
"/crow-server"
],
"ImageID": "sha256:11045371758ccf9468d807d53d1e1faa100c8ebabe87296bc107d52bdf983378",
"Created": 1624440429,
"Ports": [
{
"IP": "0.0.0.0",
"PrivatePort": 8080,
"PublicPort": 18081,
"Type": "tcp"
}
],
"Labels": {
"com.docker.compose.config-hash": "c9f0bd5bb31921f94cff367d819a30a0cc08d4399080897a6c5cd74b983156ec",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "crowserver",
"com.docker.compose.service": "crow-server",
"com.docker.compose.version": "1.11.2"
},
"State": "running",
"Status": "Up 2 hours",
"HostConfig": {
"NetworkMode": "bridge"
},
"NetworkSettings": {
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "1dd8d1a8bef59943345c7231d7ce8268333ff5a8c5b3c94881e6b4742b447634",
"EndpointID": "2bcd751c98578ad1ce3d6798077a3e110535f3dcb0b0735cc84bd1e03905d907",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DriverOpts": null
}
}
}
},
{
"Id": "0e0f72a6eb7d9fb443f0426a66f7b8dd7d3283ab7e3a308b2bed584ac03a33dc",
"Names": [
"/crow-web"
],
"ImageID": "sha256:36e04dd2f67950179ab62a4ebd1b8b741232263fa1f210496a53335d5820b3af",
"Created": 1618302442,
"Ports": [
{
"IP": "0.0.0.0",
"PrivatePort": 8080,
"PublicPort": 18082,
"Type": "tcp"
}
],
"Labels": {
"com.docker.compose.config-hash": "d99ebd0fde8512366c2d78c367e95ddc74528bb60b7cf0c991c9f4835981e00e",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "crowweb",
"com.docker.compose.service": "crow-web",
"com.docker.compose.version": "1.11.2"
},
"State": "running",
"Status": "Up 2 months",
"HostConfig": {
"NetworkMode": "bridge"
},
"NetworkSettings": {
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "1dd8d1a8bef59943345c7231d7ce8268333ff5a8c5b3c94881e6b4742b447634",
"EndpointID": "78d751dfa31923d73e001e5dc5751ad6f9f5ffeb88056fc950f0a952d7f24302",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DriverOpts": null
}
}
}
}
]`),
},
want: []container{
{
Id: "90bc3b31aa13da5c0b11af2e228d54b38428a84e25d4e249ae9e9c95e51a0700",
Names: []string{"/crow-server"},
Labels: map[string]string{
"com.docker.compose.config-hash": "c9f0bd5bb31921f94cff367d819a30a0cc08d4399080897a6c5cd74b983156ec",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "crowserver",
"com.docker.compose.service": "crow-server",
"com.docker.compose.version": "1.11.2",
},
Ports: []struct {
IP string
PrivatePort int
PublicPort int
Type string
}{{
IP: "0.0.0.0",
PrivatePort: 8080,
PublicPort: 18081,
Type: "tcp",
}},
HostConfig: struct {
NetworkMode string
}{
NetworkMode: "bridge",
},
NetworkSettings: struct {
Networks map[string]struct {
IPAddress string
NetworkID string
}
}{
Networks: map[string]struct {
IPAddress string
NetworkID string
}{
"bridge": {
IPAddress: "172.17.0.2",
NetworkID: "1dd8d1a8bef59943345c7231d7ce8268333ff5a8c5b3c94881e6b4742b447634",
},
},
},
},
{
Id: "0e0f72a6eb7d9fb443f0426a66f7b8dd7d3283ab7e3a308b2bed584ac03a33dc",
Names: []string{"/crow-web"},
Labels: map[string]string{
"com.docker.compose.config-hash": "d99ebd0fde8512366c2d78c367e95ddc74528bb60b7cf0c991c9f4835981e00e",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "crowweb",
"com.docker.compose.service": "crow-web",
"com.docker.compose.version": "1.11.2",
},
Ports: []struct {
IP string
PrivatePort int
PublicPort int
Type string
}{{
IP: "0.0.0.0",
PrivatePort: 8080,
PublicPort: 18082,
Type: "tcp",
}},
HostConfig: struct {
NetworkMode string
}{
NetworkMode: "bridge",
},
NetworkSettings: struct {
Networks map[string]struct {
IPAddress string
NetworkID string
}
}{
Networks: map[string]struct {
IPAddress string
NetworkID string
}{
"bridge": {
IPAddress: "172.17.0.3",
NetworkID: "1dd8d1a8bef59943345c7231d7ce8268333ff5a8c5b3c94881e6b4742b447634",
},
},
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := parseContainers(tt.args.data)
if (err != nil) != tt.wantErr {
t.Errorf("parseContainers() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("parseNetworks() \ngot %v, \nwant %v", got, tt.want)
}
})
}
}
func Test_addContainerLabels(t *testing.T) {
data := []byte(`[
{
"Name": "host",
"Id": "6a1989488dcb847c052eda939924d997457d5ecd994f76f35472996c4c75279a",
"Created": "2020-08-18T17:18:18.439033107+08:00",
"Scope": "local",
"Driver": "host",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
},
{
"Name": "none",
"Id": "c9668d06973d976527e913ba207a3819275649f347390379ec8356db375cfde3",
"Created": "2020-08-18T17:18:18.428827132+08:00",
"Scope": "local",
"Driver": "null",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
},
{
"Name": "bridge",
"Id": "1dd8d1a8bef59943345c7231d7ce8268333ff5a8c5b3c94881e6b4742b447634",
"Created": "2021-03-18T14:36:04.290821903+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]`)
networks, err := parseNetworks(data)
if err != nil {
t.Fatalf("fail to parse networks: %v", err)
}
networkLabels := getNetworkLabelsByNetworkID(networks)
tests := []struct {
name string
c container
want []map[string]string
wantErr bool
}{
{
name: "get labels from a container",
c: container{
Id: "90bc3b31aa13da5c0b11af2e228d54b38428a84e25d4e249ae9e9c95e51a0700",
Names: []string{"/crow-server"},
Labels: map[string]string{
"com.docker.compose.config-hash": "c9f0bd5bb31921f94cff367d819a30a0cc08d4399080897a6c5cd74b983156ec",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "crowserver",
"com.docker.compose.service": "crow-server",
"com.docker.compose.version": "1.11.2",
},
Ports: []struct {
IP string
PrivatePort int
PublicPort int
Type string
}{{
IP: "0.0.0.0",
PrivatePort: 8080,
PublicPort: 18081,
Type: "tcp",
}},
HostConfig: struct {
NetworkMode string
}{
NetworkMode: "bridge",
},
NetworkSettings: struct {
Networks map[string]struct {
IPAddress string
NetworkID string
}
}{
Networks: map[string]struct {
IPAddress string
NetworkID string
}{
"bridge": {
IPAddress: "172.17.0.2",
NetworkID: "1dd8d1a8bef59943345c7231d7ce8268333ff5a8c5b3c94881e6b4742b447634",
},
},
},
},
want: []map[string]string{
{
"__address__": "172.17.0.2:8080",
"__meta_docker_container_id": "90bc3b31aa13da5c0b11af2e228d54b38428a84e25d4e249ae9e9c95e51a0700",
"__meta_docker_container_label_com_docker_compose_config_hash": "c9f0bd5bb31921f94cff367d819a30a0cc08d4399080897a6c5cd74b983156ec",
"__meta_docker_container_label_com_docker_compose_container_number": "1",
"__meta_docker_container_label_com_docker_compose_oneoff": "False",
"__meta_docker_container_label_com_docker_compose_project": "crowserver",
"__meta_docker_container_label_com_docker_compose_service": "crow-server",
"__meta_docker_container_label_com_docker_compose_version": "1.11.2",
"__meta_docker_container_name": "/crow-server",
"__meta_docker_container_network_mode": "bridge",
"__meta_docker_network_id": "1dd8d1a8bef59943345c7231d7ce8268333ff5a8c5b3c94881e6b4742b447634",
"__meta_docker_network_ingress": "false",
"__meta_docker_network_internal": "false",
"__meta_docker_network_ip": "172.17.0.2",
"__meta_docker_network_name": "bridge",
"__meta_docker_network_scope": "local",
"__meta_docker_port_private": "8080",
"__meta_docker_port_public": "18081",
"__meta_docker_port_public_ip": "0.0.0.0",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
labelsMap := addContainersLabels([]container{tt.c}, networkLabels, 80)
if !reflect.DeepEqual(labelsMap, tt.want) {
t.Errorf("addContainersLabels() \ngot %v, \nwant %v", labelsMap, tt.want)
}
})
}
}

View file

@ -0,0 +1,49 @@
package docker
import (
"flag"
"fmt"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
)
// SDCheckInterval defines interval for docker targets refresh.
var SDCheckInterval = flag.Duration("promscrape.dockerSDCheckInterval", 30*time.Second, "Interval for checking for changes in docker. "+
"This works only if docker_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#docker_sd_config for details")
// SDConfig defines the `docker_sd` section for Docker based discovery
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#docker_sd_config
type SDConfig struct {
Host string `yaml:"host"`
Port int `yaml:"port,omitempty"`
Filters []Filter `yaml:"filters,omitempty"`
HTTPClientConfig promauth.HTTPClientConfig `yaml:",inline"`
ProxyURL proxy.URL `yaml:"proxy_url,omitempty"`
ProxyClientConfig promauth.ProxyClientConfig `yaml:",inline"`
// refresh_interval is obtained from `-promscrape.dockerSDCheckInterval` command-line option
}
// Filter is a filter, which can be passed to SDConfig.
type Filter struct {
Name string `yaml:"name"`
Values []string `yaml:"values"`
}
// GetLabels returns docker labels according to sdc.
func (sdc *SDConfig) GetLabels(baseDir string) ([]map[string]string, error) {
cfg, err := getAPIConfig(sdc, baseDir)
if err != nil {
return nil, fmt.Errorf("cannot get API config: %w", err)
}
return getContainersLabels(cfg)
}
// MustStop stops further usage for sdc.
func (sdc *SDConfig) MustStop() {
configMap.Delete(sdc)
}
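// Illustrative usage in -promscrape.config (example values only; host must point
// at a reachable Docker daemon API endpoint):
//
//  scrape_configs:
//  - job_name: docker-example
//    docker_sd_configs:
//    - host: "http://localhost:2375"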

View file

@ -0,0 +1,61 @@
package docker
import (
"encoding/json"
"fmt"
"strconv"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
)
// See https://docs.docker.com/engine/api/v1.40/#tag/Network
type network struct {
ID string
Name string
Scope string
Internal bool
Ingress bool
Labels map[string]string
}
func getNetworksLabelsByNetworkID(cfg *apiConfig) (map[string]map[string]string, error) {
networks, err := getNetworks(cfg)
if err != nil {
return nil, err
}
return getNetworkLabelsByNetworkID(networks), nil
}
func getNetworks(cfg *apiConfig) ([]network, error) {
resp, err := cfg.getAPIResponse("/networks")
if err != nil {
return nil, fmt.Errorf("cannot query dockerswarm api for networks: %w", err)
}
return parseNetworks(resp)
}
func parseNetworks(data []byte) ([]network, error) {
var networks []network
if err := json.Unmarshal(data, &networks); err != nil {
return nil, fmt.Errorf("cannot parse networks: %w", err)
}
return networks, nil
}
func getNetworkLabelsByNetworkID(networks []network) map[string]map[string]string {
ms := make(map[string]map[string]string)
for _, network := range networks {
m := map[string]string{
"__meta_docker_network_id": network.ID,
"__meta_docker_network_name": network.Name,
"__meta_docker_network_internal": strconv.FormatBool(network.Internal),
"__meta_docker_network_ingress": strconv.FormatBool(network.Ingress),
"__meta_docker_network_scope": network.Scope,
}
for k, v := range network.Labels {
m["__meta_docker_network_label_"+discoveryutils.SanitizeLabelName(k)] = v
}
ms[network.ID] = m
}
return ms
}
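// Example (cf. Test_addNetworkLabels below): an ingress overlay network named
// "ingress" with label key1=value1 yields __meta_docker_network_name="ingress",
// __meta_docker_network_ingress="true" and __meta_docker_network_label_key1="value1".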

View file

@ -0,0 +1,173 @@
package docker
import (
"reflect"
"sort"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
)
func Test_addNetworkLabels(t *testing.T) {
type args struct {
networks []network
}
tests := []struct {
name string
args args
want [][]prompbmarshal.Label
}{
{
name: "ingress network",
args: args{
networks: []network{
{
ID: "qs0hog6ldlei9ct11pr3c77v1",
Ingress: true,
Scope: "swarm",
Name: "ingress",
Labels: map[string]string{
"key1": "value1",
},
},
},
},
want: [][]prompbmarshal.Label{
discoveryutils.GetSortedLabels(map[string]string{
"__meta_docker_network_id": "qs0hog6ldlei9ct11pr3c77v1",
"__meta_docker_network_ingress": "true",
"__meta_docker_network_internal": "false",
"__meta_docker_network_label_key1": "value1",
"__meta_docker_network_name": "ingress",
"__meta_docker_network_scope": "swarm",
})},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := getNetworkLabelsByNetworkID(tt.args.networks)
var networkIDs []string
for networkID := range got {
networkIDs = append(networkIDs, networkID)
}
sort.Strings(networkIDs)
var sortedLabelss [][]prompbmarshal.Label
for _, networkID := range networkIDs {
labels := got[networkID]
sortedLabelss = append(sortedLabelss, discoveryutils.GetSortedLabels(labels))
}
if !reflect.DeepEqual(sortedLabelss, tt.want) {
t.Errorf("addNetworkLabels() \ngot %v, \nwant %v", sortedLabelss, tt.want)
}
})
}
}
func Test_parseNetworks(t *testing.T) {
type args struct {
data []byte
}
tests := []struct {
name string
args args
want []network
wantErr bool
}{
{
name: "parse two networks",
args: args{
data: []byte(`[
{
"Name": "ingress",
"Id": "qs0hog6ldlei9ct11pr3c77v1",
"Created": "2020-10-06T08:39:58.957083331Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4096"
},
"Labels": {
"key1": "value1"
}
},
{
"Name": "host",
"Id": "317f0384d7e5f5c26304a0b04599f9f54bc08def4d0535059ece89955e9c4b7b",
"Created": "2020-10-06T08:39:52.843373136Z",
"Scope": "local",
"Driver": "host",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {
"key": "value"
}
}
]`),
},
want: []network{
{
ID: "qs0hog6ldlei9ct11pr3c77v1",
Ingress: true,
Scope: "swarm",
Name: "ingress",
Labels: map[string]string{
"key1": "value1",
},
},
{
ID: "317f0384d7e5f5c26304a0b04599f9f54bc08def4d0535059ece89955e9c4b7b",
Scope: "local",
Name: "host",
Labels: map[string]string{
"key": "value",
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := parseNetworks(tt.args.data)
if (err != nil) != tt.wantErr {
t.Errorf("parseNetworks() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("parseNetworks() \ngot %v, \nwant %v", got, tt.want)
}
})
}
}

View file

@ -33,6 +33,9 @@ func newAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
port: sdc.Port,
filtersQueryArg: getFiltersQueryArg(sdc.Filters),
}
if cfg.port == 0 {
cfg.port = 80
}
ac, err := sdc.HTTPClientConfig.NewConfig(baseDir)
if err != nil {
return nil, fmt.Errorf("cannot parse auth config: %w", err)

View file

@ -1,12 +1,19 @@
package dockerswarm
import (
"flag"
"fmt"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
)
// SDCheckInterval defines interval for dockerswarm targets refresh.
var SDCheckInterval = flag.Duration("promscrape.dockerswarmSDCheckInterval", 30*time.Second, "Interval for checking for changes in dockerswarm. "+
"This works only if dockerswarm_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config for details")
// SDConfig represents docker swarm service discovery configuration
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config

View file

@ -7,7 +7,6 @@ import (
"strconv"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
)

View file

@ -1,9 +1,16 @@
package ec2
import (
"flag"
"fmt"
"time"
)
// SDCheckInterval defines interval for targets refresh.
var SDCheckInterval = flag.Duration("promscrape.ec2SDCheckInterval", time.Minute, "Interval for checking for changes in ec2. "+
"This works only if ec2_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config for details")
// SDConfig represents service discovery config for ec2.
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config

View file

@ -2,15 +2,20 @@ package eureka
import (
"encoding/xml"
"flag"
"fmt"
"strconv"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
)
const appsAPIPath = "/apps"
// SDCheckInterval defines interval for targets refresh.
var SDCheckInterval = flag.Duration("promscrape.eurekaSDCheckInterval", 30*time.Second, "Interval for checking for changes in eureka. "+
"This works only if eureka_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config for details")
// SDConfig represents service discovery config for eureka.
//
@ -82,7 +87,7 @@ func (sdc *SDConfig) GetLabels(baseDir string) ([]map[string]string, error) {
if err != nil {
return nil, fmt.Errorf("cannot get API config: %w", err)
}
data, err := getAPIResponse(cfg, "/apps")
if err != nil {
return nil, err
}

View file

@ -1,9 +1,16 @@
package gce
import (
"flag"
"fmt"
"time"
)
// SDCheckInterval defines interval for targets refresh.
var SDCheckInterval = flag.Duration("promscrape.gceSDCheckInterval", time.Minute, "Interval for checking for changes in gce. "+
"This works only if gce_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details")
// SDConfig represents service discovery config for gce.
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config

View file

@ -0,0 +1,78 @@
package http
import (
"encoding/json"
"fmt"
"net/url"
"strconv"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
"github.com/VictoriaMetrics/fasthttp"
)
var configMap = discoveryutils.NewConfigMap()
type apiConfig struct {
client *discoveryutils.Client
path string
}
// httpGroupTarget represents a Prometheus HTTP SD target group.
// https://prometheus.io/docs/prometheus/latest/http_sd/
type httpGroupTarget struct {
Targets []string `json:"targets"`
Labels map[string]string `json:"labels"`
}
func newAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
ac, err := sdc.HTTPClientConfig.NewConfig(baseDir)
if err != nil {
return nil, fmt.Errorf("cannot parse auth config: %w", err)
}
parsedURL, err := url.Parse(sdc.URL)
if err != nil {
return nil, fmt.Errorf("cannot parse http_sd URL: %w", err)
}
apiServer := fmt.Sprintf("%s://%s", parsedURL.Scheme, parsedURL.Host)
proxyAC, err := sdc.ProxyClientConfig.NewConfig(baseDir)
if err != nil {
return nil, fmt.Errorf("cannot parse proxy auth config: %w", err)
}
client, err := discoveryutils.NewClient(apiServer, ac, sdc.ProxyURL, proxyAC)
if err != nil {
return nil, fmt.Errorf("cannot create HTTP client for %q: %w", apiServer, err)
}
cfg := &apiConfig{
client: client,
path: parsedURL.RequestURI(),
}
return cfg, nil
}
func getAPIConfig(sdc *SDConfig, baseDir string) (*apiConfig, error) {
v, err := configMap.Get(sdc, func() (interface{}, error) { return newAPIConfig(sdc, baseDir) })
if err != nil {
return nil, err
}
return v.(*apiConfig), nil
}
func getHTTPTargets(cfg *apiConfig) ([]httpGroupTarget, error) {
data, err := cfg.client.GetAPIResponseWithReqParams(cfg.path, func(request *fasthttp.Request) {
request.Header.Set("X-Prometheus-Refresh-Interval-Seconds", strconv.FormatFloat(SDCheckInterval.Seconds(), 'f', 0, 64))
request.Header.Set("Accept", "application/json")
})
if err != nil {
return nil, fmt.Errorf("cannot read http_sd api response: %w", err)
}
return parseAPIResponse(data, cfg.path)
}
func parseAPIResponse(data []byte, path string) ([]httpGroupTarget, error) {
var r []httpGroupTarget
if err := json.Unmarshal(data, &r); err != nil {
return nil, fmt.Errorf("cannot parse http_sd api response path: %s, err: %w", path, err)
}
return r, nil
}
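// The endpoint is expected to return a JSON list of target groups in the
// Prometheus HTTP SD format (https://prometheus.io/docs/prometheus/latest/http_sd/),
// e.g. [{"targets":["10.0.0.1:9100"],"labels":{"env":"prod"}}] (example values).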

View file

@ -0,0 +1,49 @@
package http
import (
"reflect"
"testing"
)
func Test_parseAPIResponse(t *testing.T) {
type args struct {
data []byte
path string
}
tests := []struct {
name string
args args
want []httpGroupTarget
wantErr bool
}{
{
name: "parse ok",
args: args{
path: "/ok",
data: []byte(`[
{"targets": ["http://target-1:9100","http://target-2:9150"],
"labels": {"label-1":"value-1"} }
]`),
},
want: []httpGroupTarget{
{
Labels: map[string]string{"label-1": "value-1"},
Targets: []string{"http://target-1:9100", "http://target-2:9150"},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := parseAPIResponse(tt.args.data, tt.args.path)
if (err != nil) != tt.wantErr {
t.Errorf("parseAPIResponse() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("parseAPIResponse() got = %v, want %v", got, tt.want)
}
})
}
}

View file

@ -0,0 +1,60 @@
package http
import (
"flag"
"fmt"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
)
// SDCheckInterval defines interval for targets refresh.
var SDCheckInterval = flag.Duration("promscrape.httpSDCheckInterval", time.Minute, "Interval for checking for changes in http endpoint service discovery. "+
"This works only if http_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details")
// SDConfig represents service discovery config for http.
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config
type SDConfig struct {
URL string `yaml:"url"`
HTTPClientConfig promauth.HTTPClientConfig `yaml:",inline"`
ProxyURL proxy.URL `yaml:"proxy_url,omitempty"`
ProxyClientConfig promauth.ProxyClientConfig `yaml:",inline"`
}
// GetLabels returns http service discovery labels according to sdc.
func (sdc *SDConfig) GetLabels(baseDir string) ([]map[string]string, error) {
cfg, err := getAPIConfig(sdc, baseDir)
if err != nil {
return nil, fmt.Errorf("cannot get API config: %w", err)
}
hts, err := getHTTPTargets(cfg)
if err != nil {
return nil, err
}
return addHTTPTargetLabels(hts, sdc.URL), nil
}
func addHTTPTargetLabels(src []httpGroupTarget, sourceURL string) []map[string]string {
ms := make([]map[string]string, 0, len(src))
for _, targetGroup := range src {
labels := targetGroup.Labels
for _, target := range targetGroup.Targets {
m := make(map[string]string, len(labels))
for k, v := range labels {
m[k] = v
}
m["__address__"] = target
m["__meta_url"] = sourceURL
ms = append(ms, m)
}
}
return ms
}
// MustStop stops further usage for sdc.
func (sdc *SDConfig) MustStop() {
configMap.Delete(sdc)
}
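// Illustrative usage in -promscrape.config (example values only):
//
//  scrape_configs:
//  - job_name: http-sd-example
//    http_sd_configs:
//    - url: "http://sd.example.com/targets"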

View file

@ -0,0 +1,58 @@
package http
import (
"reflect"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discoveryutils"
)
func Test_addHTTPTargetLabels(t *testing.T) {
type args struct {
src []httpGroupTarget
}
tests := []struct {
name string
args args
want [][]prompbmarshal.Label
}{
{
name: "add ok",
args: args{
src: []httpGroupTarget{
{
Targets: []string{"127.0.0.1:9100", "127.0.0.2:91001"},
Labels: map[string]string{"__meta_kubernetes_pod": "pod-1", "__meta_consul_dc": "dc-2"},
},
},
},
want: [][]prompbmarshal.Label{
discoveryutils.GetSortedLabels(map[string]string{
"__address__": "127.0.0.1:9100",
"__meta_kubernetes_pod": "pod-1",
"__meta_consul_dc": "dc-2",
"__meta_url": "http://foo.bar/baz?aaa=bb",
}),
discoveryutils.GetSortedLabels(map[string]string{
"__address__": "127.0.0.2:91001",
"__meta_kubernetes_pod": "pod-1",
"__meta_consul_dc": "dc-2",
"__meta_url": "http://foo.bar/baz?aaa=bb",
}),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := addHTTPTargetLabels(tt.args.src, "http://foo.bar/baz?aaa=bb")
var sortedLabelss [][]prompbmarshal.Label
for _, labels := range got {
sortedLabelss = append(sortedLabelss, discoveryutils.GetSortedLabels(labels))
}
if !reflect.DeepEqual(sortedLabelss, tt.want) {
t.Errorf("addHTTPTargetLabels() \ngot \n%v\n, \nwant \n%v\n", sortedLabelss, tt.want)
}
})
}
}

View file

@ -1,12 +1,19 @@
package kubernetes
import (
"flag"
"fmt"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
)
// SDCheckInterval defines interval for targets refresh.
var SDCheckInterval = flag.Duration("promscrape.kubernetesSDCheckInterval", 30*time.Second, "Interval for checking for changes in Kubernetes API server. "+
"This works only if kubernetes_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config for details")
// SDConfig represents kubernetes-based service discovery config.
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config

View file

@ -1,11 +1,18 @@
package openstack
import (
"flag"
"fmt"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
)
// SDCheckInterval defines interval for targets refresh.
var SDCheckInterval = flag.Duration("promscrape.openstackSDCheckInterval", 30*time.Second, "Interval for checking for changes in openstack API server. "+
"This works only if openstack_sd_configs is configured in '-promscrape.config' file. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config for details")
// SDConfig is the configuration for OpenStack based service discovery.
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config

View file

@ -154,8 +154,19 @@ func (c *Client) Addr() string {
return c.hc.Addr
}
// GetAPIResponseWithReqParams returns response for given absolute path with optional callback for request.
// modifyRequestParams should never reference data from request.
func (c *Client) GetAPIResponseWithReqParams(path string, modifyRequestParams func(request *fasthttp.Request)) ([]byte, error) {
return c.getAPIResponse(path, modifyRequestParams)
}
// GetAPIResponse returns response for the given absolute path.
func (c *Client) GetAPIResponse(path string) ([]byte, error) {
return c.getAPIResponse(path, nil)
}
// getAPIResponse returns response for the given absolute path with optional callback for request.
func (c *Client) getAPIResponse(path string, modifyRequest func(request *fasthttp.Request)) ([]byte, error) {
// Limit the number of concurrent API requests.
concurrencyLimitChOnce.Do(concurrencyLimitChInit)
t := timerpool.Get(*maxWaitTime)
@ -168,17 +179,17 @@ func (c *Client) GetAPIResponse(path string) ([]byte, error) {
c.apiServer, *maxWaitTime, *maxConcurrency)
}
defer func() { <-concurrencyLimitCh }()
return c.getAPIResponseWithParamsAndClient(c.hc, path, modifyRequest, nil)
}
// GetBlockingAPIResponse returns response for given absolute path with blocking client and optional callback for api response,
// inspectResponse - should never reference data from response.
func (c *Client) GetBlockingAPIResponse(path string, inspectResponse func(resp *fasthttp.Response)) ([]byte, error) {
return c.getAPIResponseWithParamsAndClient(c.blockingClient, path, nil, inspectResponse)
}
// getAPIResponseWithParamsAndClient returns response for the given absolute path with optional callback for request and for response.
func (c *Client) getAPIResponseWithParamsAndClient(client *fasthttp.HostClient, path string, modifyRequest func(req *fasthttp.Request), inspectResponse func(resp *fasthttp.Response)) ([]byte, error) {
requestURL := c.apiServer + path
var u fasthttp.URI
u.Update(requestURL)
@ -196,6 +207,9 @@ func (c *Client) getAPIResponseWithParamsAndClient(client *fasthttp.HostClient,
if ah := c.getProxyAuthHeader(); ah != "" {
req.Header.Set("Proxy-Authorization", ah)
}
if modifyRequest != nil {
modifyRequest(&req)
}
var resp fasthttp.Response
deadline := time.Now().Add(client.ReadTimeout)

View file

@ -12,42 +12,29 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/consul"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/digitalocean"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/dns"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/docker"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/dockerswarm"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/ec2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/eureka"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/gce"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/http"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/kubernetes"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/openstack"
"github.com/VictoriaMetrics/metrics"
)
var (
configCheckInterval = flag.Duration("promscrape.configCheckInterval", 0, "Interval for checking for changes in '-promscrape.config' file. "+
"By default the checking is disabled. Send SIGHUP signal in order to force config check for changes")
suppressDuplicateScrapeTargetErrors = flag.Bool("promscrape.suppressDuplicateScrapeTargetErrors", false, "Whether to suppress 'duplicate scrape target' errors; "+
"see https://docs.victoriametrics.com/vmagent.html#troubleshooting for details")
promscrapeConfigFile = flag.String("promscrape.config", "", "Optional path to Prometheus config file with 'scrape_configs' section containing targets to scrape. "+
"See https://docs.victoriametrics.com/#how-to-scrape-prometheus-exporters-such-as-node-exporter for details")
fileSDCheckInterval = flag.Duration("promscrape.fileSDCheckInterval", 30*time.Second, "Interval for checking for changes in 'file_sd_config'. "+
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details")
)
// CheckConfig checks -promscrape.config for errors and unsupported options.
@ -104,17 +91,19 @@ func runScraper(configFile string, pushData func(wr *prompbmarshal.WriteRequest)
cfg.mustStart()
scs := newScrapeConfigs(pushData)
scs.add("static_configs", 0, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getStaticScrapeWork() })
scs.add("file_sd_configs", *fileSDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getFileSDScrapeWork(swsPrev) })
scs.add("kubernetes_sd_configs", *kubernetesSDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getKubernetesSDScrapeWork(swsPrev) })
scs.add("openstack_sd_configs", *openstackSDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getOpenStackSDScrapeWork(swsPrev) })
scs.add("consul_sd_configs", *consul.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getConsulSDScrapeWork(swsPrev) })
scs.add("eureka_sd_configs", *eurekaSDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getEurekaSDScrapeWork(swsPrev) })
scs.add("dns_sd_configs", *dnsSDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getDNSSDScrapeWork(swsPrev) })
scs.add("ec2_sd_configs", *ec2SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getEC2SDScrapeWork(swsPrev) })
scs.add("gce_sd_configs", *gceSDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getGCESDScrapeWork(swsPrev) })
scs.add("dockerswarm_sd_configs", *dockerswarmSDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getDockerSwarmSDScrapeWork(swsPrev) })
scs.add("digitalocean_sd_configs", *digitaloceanSDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getDigitalOceanDScrapeWork(swsPrev) })
scs.add("digitalocean_sd_configs", *digitalocean.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getDigitalOceanDScrapeWork(swsPrev) })
scs.add("dns_sd_configs", *dns.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getDNSSDScrapeWork(swsPrev) })
scs.add("docker_sd_configs", *docker.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getDockerSDScrapeWork(swsPrev) })
scs.add("dockerswarm_sd_configs", *dockerswarm.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getDockerSwarmSDScrapeWork(swsPrev) })
scs.add("ec2_sd_configs", *ec2.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getEC2SDScrapeWork(swsPrev) })
scs.add("eureka_sd_configs", *eureka.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getEurekaSDScrapeWork(swsPrev) })
scs.add("file_sd_configs", *fileSDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getFileSDScrapeWork(swsPrev) })
scs.add("gce_sd_configs", *gce.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getGCESDScrapeWork(swsPrev) })
scs.add("http_sd_configs", *http.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getHTTPDScrapeWork(swsPrev) })
scs.add("kubernetes_sd_configs", *kubernetes.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getKubernetesSDScrapeWork(swsPrev) })
scs.add("openstack_sd_configs", *openstack.SDCheckInterval, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getOpenStackSDScrapeWork(swsPrev) })
scs.add("static_configs", 0, func(cfg *Config, swsPrev []*ScrapeWork) []*ScrapeWork { return cfg.getStaticScrapeWork() })
var tickerCh <-chan time.Time
if *configCheckInterval > 0 {
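
The merged list registers every supported `*_sd_config` type with its own refresh interval, including the newly added `http_sd_configs`. For that mechanism the scraper periodically polls an HTTP endpoint that must answer with target groups as JSON. A minimal sketch of such an endpoint follows; the /sd path, port and target addresses are illustrative:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// targetGroup mirrors the JSON shape an http_sd_configs-compatible
// scraper expects: a list of targets plus optional labels applied to
// all of them.
type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels,omitempty"`
}

func main() {
	http.HandleFunc("/sd", func(w http.ResponseWriter, r *http.Request) {
		groups := []targetGroup{
			{
				Targets: []string{"10.0.0.1:9100", "10.0.0.2:9100"},
				Labels:  map[string]string{"env": "prod"},
			},
		}
		w.Header().Set("Content-Type", "application/json")
		if err := json.NewEncoder(w).Encode(groups); err != nil {
			log.Printf("encode: %v", err)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}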

View file

@ -2787,8 +2787,6 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
for i := range tfs.tfs {
tf := &tfs.tfs[i]
loopsCount, filterLoopsCount, timestamp := is.getLoopsCountAndTimestampForDateFilter(date, tf)
origLoopsCount := loopsCount
origFilterLoopsCount := filterLoopsCount
if currentTime > timestamp+3600 {
// Update stats once per hour for relatively fast tag filters.
// There is no need in spending CPU resources on updating stats for heavy tag filters.
@ -2799,17 +2797,6 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
filterLoopsCount = 0
}
}
if loopsCount == 0 {
// Prevent from possible thundering herd issue when potentially heavy tf is executed from multiple concurrent queries
// by temporary persisting its position in the tag filters list.
if origLoopsCount == 0 {
origLoopsCount = 9e6
}
if origFilterLoopsCount == 0 {
origFilterLoopsCount = 9e6
}
is.storeLoopsCountForDateFilter(date, tf, origLoopsCount, origFilterLoopsCount)
}
tfws[i] = tagFilterWithWeight{
tf: tf,
loopsCount: loopsCount,
@ -2837,13 +2824,6 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
is.storeLoopsCountForDateFilter(date, tfw.tf, tfw.loopsCount, tfw.filterLoopsCount)
}
}
storeZeroLoopsCounts := func(tfws []tagFilterWithWeight) {
for _, tfw := range tfws {
if tfw.loopsCount == 0 || tfw.filterLoopsCount == 0 {
is.storeLoopsCountForDateFilter(date, tfw.tf, tfw.loopsCount, tfw.filterLoopsCount)
}
}
}
// Populate metricIDs for the first non-negative filter with the cost smaller than maxLoopsCount.
var metricIDs *uint64set.Set
@ -2869,8 +2849,6 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
}
// Move failing filter to the end of filter list.
storeLoopsCount(&tfw, int64Max)
storeZeroLoopsCounts(tfws[i+1:])
storeZeroLoopsCounts(tfwsRemaining)
return nil, err
}
if m.Len() >= maxDateMetrics {
@ -2892,7 +2870,6 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
// so later they can be filtered out with negative filters.
m, err := is.getMetricIDsForDate(date, maxDateMetrics)
if err != nil {
storeZeroLoopsCounts(tfws)
if err == errMissingMetricIDsForDate {
// Zero time series were written on the given date.
return nil, nil
@ -2901,7 +2878,6 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
}
if m.Len() >= maxDateMetrics {
// Too many time series found for the given (date). Fall back to global search.
storeZeroLoopsCounts(tfws)
return nil, errFallbackToGlobalSearch
}
metricIDs = m
@ -2940,7 +2916,6 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
metricIDsLen := metricIDs.Len()
if metricIDsLen == 0 {
// There is no need in applying the remaining filters to an empty set.
storeZeroLoopsCounts(tfws[i:])
break
}
if tfw.filterLoopsCount > int64(metricIDsLen)*loopsCountPerMetricNameMatch {
@ -2949,7 +2924,6 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
for _, tfw := range tfws[i:] {
tfsPostponed = append(tfsPostponed, tfw.tf)
}
storeZeroLoopsCounts(tfws[i:])
break
}
maxLoopsCount := getFirstPositiveFilterLoopsCount(tfws[i+1:])
@ -2966,7 +2940,6 @@ func (is *indexSearch) getMetricIDsForDateAndFilters(date uint64, tfs *TagFilter
}
// Move failing tf to the end of filter list
storeFilterLoopsCount(&tfw, int64Max)
storeZeroLoopsCounts(tfws[i:])
return nil, err
}
storeFilterLoopsCount(&tfw, filterLoopsCount)
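
With the zero-loops-count bookkeeping removed, the hunk's surviving logic still orders tag filters by the loops count measured on earlier executions, so cheap filters shrink the candidate set before expensive ones run. A standalone sketch of that ordering idea; the type and field names are illustrative, not the storage package's actual API:

package main

import (
	"fmt"
	"sort"
)

// tagFilterWithWeight pairs a filter with the loop count observed on
// previous executions (illustrative names only).
type tagFilterWithWeight struct {
	name       string
	loopsCount int64
}

func main() {
	tfws := []tagFilterWithWeight{
		{name: `{job=~".*"}`, loopsCount: 9e6}, // heavy regexp filter
		{name: `{job="api"}`, loopsCount: 120}, // cheap exact match
		{name: `{env="prod"}`, loopsCount: 450},
	}
	// Execute the cheapest filters first so the candidate set shrinks
	// before the expensive ones run.
	sort.Slice(tfws, func(i, j int) bool {
		return tfws[i].loopsCount < tfws[j].loopsCount
	})
	for _, tfw := range tfws {
		fmt.Printf("%-14s loops=%d\n", tfw.name, tfw.loopsCount)
	}
}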

View file

@ -52,7 +52,7 @@ var transformFuncs = map[string]bool{
"label_match": true,
"label_mismatch": true,
"union": true,
"": true, // empty func is a synonym to union
"": true, // empty func is a synonim to union
"keep_last_value": true,
"keep_next_value": true,
"interpolate": true,

View file

@ -13,7 +13,6 @@ package ec2metadata
import (
"bytes"
"errors"
"io"
"net/http"
"net/url"
@ -234,7 +233,8 @@ func unmarshalError(r *request.Request) {
// Response body format is not consistent between metadata endpoints.
// Grab the error message as a string and include that as the source error
r.Error = awserr.NewRequestFailure(awserr.New("EC2MetadataError", "failed to make EC2Metadata request", errors.New(b.String())),
r.Error = awserr.NewRequestFailure(
awserr.New("EC2MetadataError", "failed to make EC2Metadata request\n"+b.String(), nil),
r.HTTPResponse.StatusCode, r.RequestID)
}

View file

@ -1615,6 +1615,7 @@ var awsPartition = partition{
Region: "us-west-2",
},
},
"me-south-1": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
@ -1660,6 +1661,7 @@ var awsPartition = partition{
Region: "us-west-2",
},
},
"me-south-1": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
@ -7171,6 +7173,12 @@ var awsPartition = partition{
Region: "ap-northeast-2",
},
},
"ap-northeast-3": endpoint{
Hostname: "waf-regional.ap-northeast-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-3",
},
},
"ap-south-1": endpoint{
Hostname: "waf-regional.ap-south-1.amazonaws.com",
CredentialScope: credentialScope{
@ -7255,6 +7263,12 @@ var awsPartition = partition{
Region: "ap-northeast-2",
},
},
"fips-ap-northeast-3": endpoint{
Hostname: "waf-regional-fips.ap-northeast-3.amazonaws.com",
CredentialScope: credentialScope{
Region: "ap-northeast-3",
},
},
"fips-ap-south-1": endpoint{
Hostname: "waf-regional-fips.ap-south-1.amazonaws.com",
CredentialScope: credentialScope{
@ -8391,6 +8405,42 @@ var awscnPartition = partition{
},
},
},
"transfer": service{
Endpoints: endpoints{
"cn-north-1": endpoint{},
"cn-northwest-1": endpoint{},
},
},
"waf-regional": service{
Endpoints: endpoints{
"cn-north-1": endpoint{
Hostname: "waf-regional.cn-north-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-north-1",
},
},
"cn-northwest-1": endpoint{
Hostname: "waf-regional.cn-northwest-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-northwest-1",
},
},
"fips-cn-north-1": endpoint{
Hostname: "waf-regional-fips.cn-north-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-north-1",
},
},
"fips-cn-northwest-1": endpoint{
Hostname: "waf-regional-fips.cn-northwest-1.amazonaws.com.cn",
CredentialScope: credentialScope{
Region: "cn-northwest-1",
},
},
},
},
"workspaces": service{
Endpoints: endpoints{
@ -9573,6 +9623,13 @@ var awsusgovPartition = partition{
"us-gov-west-1": endpoint{},
},
},
"mq": service{
Endpoints: endpoints{
"us-gov-east-1": endpoint{},
"us-gov-west-1": endpoint{},
},
},
"neptune": service{
Endpoints: endpoints{

View file

@ -34,23 +34,23 @@ func (m mapRule) IsValid(value string) bool {
return ok
}
// whitelist is a generic rule for whitelisting
type whitelist struct {
// allowList is a generic rule for allow listing
type allowList struct {
rule
}
// IsValid for whitelist checks if the value is within the whitelist
func (w whitelist) IsValid(value string) bool {
// IsValid for allow list checks if the value is within the allow list
func (w allowList) IsValid(value string) bool {
return w.rule.IsValid(value)
}
// blacklist is a generic rule for blacklisting
type blacklist struct {
// excludeList is a generic rule for exclude listing
type excludeList struct {
rule
}
// IsValid for whitelist checks if the value is within the whitelist
func (b blacklist) IsValid(value string) bool {
// IsValid for exclude list checks if the value is not within the exclude list
func (b excludeList) IsValid(value string) bool {
return !b.rule.IsValid(value)
}

View file

@ -90,7 +90,7 @@ const (
)
var ignoredHeaders = rules{
blacklist{
excludeList{
mapRule{
authorizationHeader: struct{}{},
"User-Agent": struct{}{},
@ -99,9 +99,9 @@ var ignoredHeaders = rules{
},
}
// requiredSignedHeaders is a whitelist for build canonical headers.
// requiredSignedHeaders is an allow list for building canonical headers.
var requiredSignedHeaders = rules{
whitelist{
allowList{
mapRule{
"Cache-Control": struct{}{},
"Content-Disposition": struct{}{},
@ -145,12 +145,13 @@ var requiredSignedHeaders = rules{
},
},
patterns{"X-Amz-Meta-"},
patterns{"X-Amz-Object-Lock-"},
}
// allowedHoisting is a whitelist for build query headers. The boolean value
// allowedQueryHoisting is an allow list for building query headers. The boolean value
// represents whether or not it is a pattern.
var allowedQueryHoisting = inclusiveRules{
blacklist{requiredSignedHeaders},
excludeList{requiredSignedHeaders},
patterns{"X-Amz-"},
}

View file

@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"
// SDKVersion is the version of this SDK
const SDKVersion = "1.38.57"
const SDKVersion = "1.38.66"

View file

@ -98,7 +98,7 @@ func buildLocationElements(r *request.Request, v reflect.Value, buildGETQuery bo
// Support the ability to customize values to be marshaled as a
// blob even though they were modeled as a string. Required for S3
// API operations like SSECustomerKey is modeled as stirng but
// API operations like SSECustomerKey is modeled as string but
// required to be base64 encoded in request.
if field.Tag.Get("marshal-as") == "blob" {
m = m.Convert(byteSliceType)

View file

@ -1,6 +1,8 @@
package protocol
import (
"bytes"
"fmt"
"math"
"strconv"
"time"
@ -20,6 +22,8 @@ const (
const (
// RFC 7231#section-7.1.1.1 timestamp format. e.g. Tue, 29 Apr 2014 18:30:38 GMT
RFC822TimeFormat = "Mon, 2 Jan 2006 15:04:05 GMT"
rfc822TimeFormatSingleDigitDay = "Mon, _2 Jan 2006 15:04:05 GMT"
rfc822TimeFormatSingleDigitDayTwoDigitYear = "Mon, _2 Jan 06 15:04:05 GMT"
// This format is used for output time without seconds precision
RFC822OutputTimeFormat = "Mon, 02 Jan 2006 15:04:05 GMT"
@ -67,10 +71,20 @@ func FormatTime(name string, t time.Time) string {
// the time if it was able to be parsed, and fails otherwise.
func ParseTime(formatName, value string) (time.Time, error) {
switch formatName {
case RFC822TimeFormatName:
return time.Parse(RFC822TimeFormat, value)
case ISO8601TimeFormatName:
return time.Parse(ISO8601TimeFormat, value)
case RFC822TimeFormatName: // Smithy HTTPDate format
return tryParse(value,
RFC822TimeFormat,
rfc822TimeFormatSingleDigitDay,
rfc822TimeFormatSingleDigitDayTwoDigitYear,
time.RFC850,
time.ANSIC,
)
case ISO8601TimeFormatName: // Smithy DateTime format
return tryParse(value,
ISO8601TimeFormat,
time.RFC3339Nano,
time.RFC3339,
)
case UnixTimeFormatName:
v, err := strconv.ParseFloat(value, 64)
_, dec := math.Modf(v)
@ -83,3 +97,36 @@ func ParseTime(formatName, value string) (time.Time, error) {
panic("unknown timestamp format name, " + formatName)
}
}
func tryParse(v string, formats ...string) (time.Time, error) {
var errs parseErrors
for _, f := range formats {
t, err := time.Parse(f, v)
if err != nil {
errs = append(errs, parseError{
Format: f,
Err: err,
})
continue
}
return t, nil
}
return time.Time{}, fmt.Errorf("unable to parse time string, %v", errs)
}
type parseErrors []parseError
func (es parseErrors) Error() string {
var s bytes.Buffer
for _, e := range es {
fmt.Fprintf(&s, "\n * %q: %v", e.Format, e.Err)
}
return "parse errors:" + s.String()
}
type parseError struct {
Format string
Err error
}
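
tryParse walks the candidate layouts in order, returns on the first success, and otherwise reports every per-layout failure. A self-contained sketch of the same fallback pattern, fed an RFC 850 date that the strict RFC 822 layout rejects:

package main

import (
	"fmt"
	"strings"
	"time"
)

// tryParse mirrors the fallback behavior added above: attempt each
// layout in order and return the first match (sketch only).
func tryParse(v string, layouts ...string) (time.Time, error) {
	var errs []string
	for _, l := range layouts {
		t, err := time.Parse(l, v)
		if err == nil {
			return t, nil
		}
		errs = append(errs, fmt.Sprintf("%q: %v", l, err))
	}
	return time.Time{}, fmt.Errorf("no layout matched %q:\n%s", v, strings.Join(errs, "\n"))
}

func main() {
	// An RFC 850 value: the first, stricter layout fails and the
	// parser falls through to time.RFC850.
	t, err := tryParse("Tuesday, 01-Jun-21 08:30:00 GMT",
		"Mon, 2 Jan 2006 15:04:05 GMT",
		time.RFC850,
	)
	fmt.Println(t, err)
}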

View file

@ -6,6 +6,10 @@ package http2
import "strings"
// The HTTP protocols are defined in terms of ASCII, not Unicode. This file
// contains helper functions which may use Unicode-aware functions which would
// otherwise be unsafe and could introduce vulnerabilities if used improperly.
// asciiEqualFold is strings.EqualFold, ASCII only. It reports whether s and t
// are equal, ASCII-case-insensitively.
func asciiEqualFold(s, t string) bool {
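
The hunk is cut off before the function body. One way such an ASCII-only fold can be written, as a sketch rather than the package's exact code:

package main

import "fmt"

// asciiEqualFold reports whether s and t are equal under ASCII case
// folding only. Unicode-aware folding (e.g. strings.EqualFold) is
// deliberately avoided for protocol elements defined in terms of ASCII.
func asciiEqualFold(s, t string) bool {
	if len(s) != len(t) {
		return false
	}
	for i := 0; i < len(s); i++ {
		if lowerASCII(s[i]) != lowerASCII(t[i]) {
			return false
		}
	}
	return true
}

func lowerASCII(b byte) byte {
	if 'A' <= b && b <= 'Z' {
		return b + ('a' - 'A')
	}
	return b
}

func main() {
	fmt.Println(asciiEqualFold("Content-Length", "content-length")) // true
	fmt.Println(asciiEqualFold("K", "\u212a"))                      // false: Kelvin sign folds only under Unicode rules
}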

View file

@ -259,16 +259,12 @@ func ConfigureServer(s *http.Server, conf *Server) error {
s.TLSConfig.PreferServerCipherSuites = true
haveNPN := false
for _, p := range s.TLSConfig.NextProtos {
if p == NextProtoTLS {
haveNPN = true
break
}
}
if !haveNPN {
if !strSliceContains(s.TLSConfig.NextProtos, NextProtoTLS) {
s.TLSConfig.NextProtos = append(s.TLSConfig.NextProtos, NextProtoTLS)
}
if !strSliceContains(s.TLSConfig.NextProtos, "http/1.1") {
s.TLSConfig.NextProtos = append(s.TLSConfig.NextProtos, "http/1.1")
}
if s.TLSNextProto == nil {
s.TLSNextProto = map[string]func(*http.Server, *tls.Conn, http.Handler){}

View file

@ -266,7 +266,6 @@ type ClientConn struct {
hbuf bytes.Buffer // HPACK encoder writes into this
henc *hpack.Encoder
freeBuf [][]byte
wmu sync.Mutex // held while writing; acquire AFTER mu if holding both
werr error // first write error that has occurred
@ -913,46 +912,6 @@ func (cc *ClientConn) closeForLostPing() error {
return cc.closeForError(err)
}
const maxAllocFrameSize = 512 << 10
// frameBuffer returns a scratch buffer suitable for writing DATA frames.
// They're capped at the min of the peer's max frame size or 512KB
// (kinda arbitrarily), but definitely capped so we don't allocate 4GB
// buffers.
func (cc *ClientConn) frameScratchBuffer() []byte {
cc.mu.Lock()
size := cc.maxFrameSize
if size > maxAllocFrameSize {
size = maxAllocFrameSize
}
for i, buf := range cc.freeBuf {
if len(buf) >= int(size) {
cc.freeBuf[i] = nil
cc.mu.Unlock()
return buf[:size]
}
}
cc.mu.Unlock()
return make([]byte, size)
}
func (cc *ClientConn) putFrameScratchBuffer(buf []byte) {
cc.mu.Lock()
defer cc.mu.Unlock()
const maxBufs = 4 // arbitrary; 4 concurrent requests per conn? investigate.
if len(cc.freeBuf) < maxBufs {
cc.freeBuf = append(cc.freeBuf, buf)
return
}
for i, old := range cc.freeBuf {
if old == nil {
cc.freeBuf[i] = buf
return
}
}
// forget about it.
}
// errRequestCanceled is a copy of net/http's errRequestCanceled because it's not
// exported. At least they'll be DeepEqual for h1-vs-h2 comparisons tests.
var errRequestCanceled = errors.New("net/http: request canceled")
@ -1295,11 +1254,35 @@ var (
errReqBodyTooLong = errors.New("http2: request body larger than specified content length")
)
// frameScratchBufferLen returns the length of a buffer to use for
// outgoing request bodies to read/write to/from.
//
// It returns max(1, min(peer's advertised max frame size,
// Request.ContentLength+1, 512KB)).
func (cs *clientStream) frameScratchBufferLen(maxFrameSize int) int {
const max = 512 << 10
n := int64(maxFrameSize)
if n > max {
n = max
}
if cl := actualContentLength(cs.req); cl != -1 && cl+1 < n {
// Add an extra byte past the declared content-length to
// give the caller's Request.Body io.Reader a chance to
// give us more bytes than they declared, so we can catch it
// early.
n = cl + 1
}
if n < 1 {
return 1
}
return int(n) // doesn't truncate; max is 512K
}
var bufPool sync.Pool // of *[]byte
func (cs *clientStream) writeRequestBody(body io.Reader, bodyCloser io.Closer) (err error) {
cc := cs.cc
sentEnd := false // whether we sent the final DATA frame w/ END_STREAM
buf := cc.frameScratchBuffer()
defer cc.putFrameScratchBuffer(buf)
defer func() {
traceWroteRequest(cs.trace, err)
@ -1318,9 +1301,24 @@ func (cs *clientStream) writeRequestBody(body io.Reader, bodyCloser io.Closer) (
remainLen := actualContentLength(req)
hasContentLen := remainLen != -1
cc.mu.Lock()
maxFrameSize := int(cc.maxFrameSize)
cc.mu.Unlock()
// Scratch buffer for reading into & writing from.
scratchLen := cs.frameScratchBufferLen(maxFrameSize)
var buf []byte
if bp, ok := bufPool.Get().(*[]byte); ok && len(*bp) >= scratchLen {
defer bufPool.Put(bp)
buf = *bp
} else {
buf = make([]byte, scratchLen)
defer bufPool.Put(&buf)
}
var sawEOF bool
for !sawEOF {
n, err := body.Read(buf[:len(buf)-1])
n, err := body.Read(buf[:len(buf)])
if hasContentLen {
remainLen -= int64(n)
if remainLen == 0 && err == nil {
@ -1331,8 +1329,9 @@ func (cs *clientStream) writeRequestBody(body io.Reader, bodyCloser io.Closer) (
// to send the END_STREAM bit early, double-check that we're actually
// at EOF. Subsequent reads should return (0, EOF) at this point.
// If either value is different, we return an error in one of two ways below.
var scratch [1]byte
var n1 int
n1, err = body.Read(buf[n:])
n1, err = body.Read(scratch[:])
remainLen -= int64(n1)
}
if remainLen < 0 {
@ -1402,10 +1401,6 @@ func (cs *clientStream) writeRequestBody(body io.Reader, bodyCloser io.Closer) (
}
}
cc.mu.Lock()
maxFrameSize := int(cc.maxFrameSize)
cc.mu.Unlock()
cc.wmu.Lock()
defer cc.wmu.Unlock()
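
The replacement sizes the scratch buffer as max(1, min(peer max frame size, ContentLength+1, 512KB)) and recycles it through the package-level bufPool instead of the old per-connection freeBuf list. A compact sketch of the same sizing rule and get-or-allocate pattern; the constants and names are illustrative:

package main

import (
	"fmt"
	"sync"
)

var bufPool sync.Pool // of *[]byte, as in the hunk above

// scratchLen mirrors the sizing rule the new code applies:
// max(1, min(maxFrameSize, contentLength+1, 512KB)). The extra byte
// past the declared content length lets the writer detect a body that
// is longer than advertised.
func scratchLen(maxFrameSize, contentLength int64) int {
	const max = 512 << 10
	n := maxFrameSize
	if n > max {
		n = max
	}
	if contentLength != -1 && contentLength+1 < n {
		n = contentLength + 1
	}
	if n < 1 {
		n = 1
	}
	return int(n)
}

func main() {
	n := scratchLen(16384, 100) // small declared body: 101-byte buffer
	var buf []byte
	if bp, ok := bufPool.Get().(*[]byte); ok && len(*bp) >= n {
		buf = (*bp)[:n]
		defer bufPool.Put(bp)
	} else {
		buf = make([]byte, n)
		defer bufPool.Put(&buf)
	}
	fmt.Println("scratch buffer len:", len(buf))
}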

View file

@ -4,9 +4,9 @@
// Package google provides support for making OAuth2 authorized and authenticated
// HTTP requests to Google APIs. It supports the Web server flow, client-side
// credentials, service accounts, Google Compute Engine service accounts, Google
// App Engine service accounts and workload identity federation from non-Google
// cloud platforms.
// credentials, service accounts, Google Compute Engine service accounts,
// Google App Engine service accounts and workload identity federation
// from non-Google cloud platforms.
//
// A brief overview of the package follows. For more information, please read
// https://developers.google.com/accounts/docs/OAuth2

View file

@ -13,7 +13,6 @@ import (
"encoding/json"
"errors"
"fmt"
"golang.org/x/oauth2"
"io"
"io/ioutil"
"net/http"
@ -23,6 +22,8 @@ import (
"sort"
"strings"
"time"
"golang.org/x/oauth2"
)
type awsSecurityCredentials struct {
@ -343,6 +344,9 @@ func (cs *awsCredentialSource) getRegion() (string, error) {
if envAwsRegion := getenv("AWS_REGION"); envAwsRegion != "" {
return envAwsRegion, nil
}
if envAwsRegion := getenv("AWS_DEFAULT_REGION"); envAwsRegion != "" {
return envAwsRegion, nil
}
if cs.RegionURL == "" {
return "", errors.New("oauth2/google: unable to determine AWS region")

View file

@ -20,15 +20,34 @@ var now = func() time.Time {
// Config stores the configuration for fetching tokens with external credentials.
type Config struct {
// Audience is the Secure Token Service (STS) audience which contains the resource name for the workload
// identity pool or the workforce pool and the provider identifier in that pool.
Audience string
// SubjectTokenType is the STS token type based on the Oauth2.0 token exchange spec
// e.g. `urn:ietf:params:oauth:token-type:jwt`.
SubjectTokenType string
// TokenURL is the STS token exchange endpoint.
TokenURL string
// TokenInfoURL is the token_info endpoint used to retrieve the account related information
// (user attributes like account identifier, e.g. email, username, uid, etc.). This is
// needed for gCloud session account identification.
TokenInfoURL string
// ServiceAccountImpersonationURL is the URL for the service account impersonation request. This is only
// required for workload identity pools when APIs to be accessed have not integrated with UberMint.
ServiceAccountImpersonationURL string
// ClientSecret is currently only required if token_info endpoint also
// needs to be called with the generated GCP access token. When provided, STS will be
// called with additional basic authentication using client_id as username and client_secret as password.
ClientSecret string
// ClientID is only required in conjunction with ClientSecret, as described above.
ClientID string
// CredentialSource contains the necessary information to retrieve the token itself, as well
// as some environmental information.
CredentialSource CredentialSource
// QuotaProjectID is injected by gCloud. If the value is non-empty, the Auth libraries
// will set the x-goog-user-project which overrides the project associated with the credentials.
QuotaProjectID string
// Scopes contains the desired scopes for the returned access token.
Scopes []string
}
@ -66,6 +85,8 @@ type format struct {
}
// CredentialSource stores the information necessary to retrieve the credentials for the STS exchange.
// Either the File or the URL field should be filled, depending on the kind of credential in question.
// The EnvironmentID should start with AWS if being used for an AWS credential.
type CredentialSource struct {
File string `json:"file"`
@ -107,7 +128,7 @@ type baseCredentialSource interface {
subjectToken() (string, error)
}
// tokenSource is the source that handles external credentials.
// tokenSource is the source that handles external credentials. It is used to retrieve Tokens.
type tokenSource struct {
ctx context.Context
conf *Config

View file

@ -19,6 +19,9 @@ type clientAuthentication struct {
ClientSecret string
}
// InjectAuthentication is used to add authentication to a Secure Token Service exchange
// request. It modifies either the passed url.Values or http.Header depending on the desired
// authentication format.
func (c *clientAuthentication) InjectAuthentication(values url.Values, headers http.Header) {
if c.ClientID == "" || c.ClientSecret == "" || values == nil || headers == nil {
return

View file

@ -36,7 +36,7 @@ type impersonateTokenSource struct {
scopes []string
}
// Token performs the exchange to get a temporary service account
// Token performs the exchange to get a temporary service account token to allow access to GCP.
func (its impersonateTokenSource) Token() (*oauth2.Token, error) {
reqBody := generateAccessTokenReq{
Lifetime: "3600s",

View file

@ -7,6 +7,7 @@ package google
import (
"crypto/rsa"
"fmt"
"strings"
"time"
"golang.org/x/oauth2"
@ -24,6 +25,28 @@ import (
// optimization supported by a few Google services.
// Unless you know otherwise, you should use JWTConfigFromJSON instead.
func JWTAccessTokenSourceFromJSON(jsonKey []byte, audience string) (oauth2.TokenSource, error) {
return newJWTSource(jsonKey, audience, nil)
}
// JWTAccessTokenSourceWithScope uses a Google Developers service account JSON
// key file to read the credentials that authorize and authenticate the
// requests, and returns a TokenSource that does not use any OAuth2 flow but
// instead creates a JWT and sends that as the access token.
// The scope is typically a list of URLs that specifies the scope of the
// credentials.
//
// Note that this is not a standard OAuth flow, but rather an
// optimization supported by a few Google services.
// Unless you know otherwise, you should use JWTConfigFromJSON instead.
func JWTAccessTokenSourceWithScope(jsonKey []byte, scope ...string) (oauth2.TokenSource, error) {
return newJWTSource(jsonKey, "", scope)
}
func newJWTSource(jsonKey []byte, audience string, scopes []string) (oauth2.TokenSource, error) {
if len(scopes) == 0 && audience == "" {
return nil, fmt.Errorf("google: missing scope/audience for JWT access token")
}
cfg, err := JWTConfigFromJSON(jsonKey)
if err != nil {
return nil, fmt.Errorf("google: could not parse JSON key: %v", err)
@ -35,6 +58,7 @@ func JWTAccessTokenSourceFromJSON(jsonKey []byte, audience string) (oauth2.Token
ts := &jwtAccessTokenSource{
email: cfg.Email,
audience: audience,
scopes: scopes,
pk: pk,
pkID: cfg.PrivateKeyID,
}
@ -47,6 +71,7 @@ func JWTAccessTokenSourceFromJSON(jsonKey []byte, audience string) (oauth2.Token
type jwtAccessTokenSource struct {
email, audience string
scopes []string
pk *rsa.PrivateKey
pkID string
}
@ -54,10 +79,12 @@ type jwtAccessTokenSource struct {
func (ts *jwtAccessTokenSource) Token() (*oauth2.Token, error) {
iat := time.Now()
exp := iat.Add(time.Hour)
scope := strings.Join(ts.scopes, " ")
cs := &jws.ClaimSet{
Iss: ts.email,
Sub: ts.email,
Aud: ts.audience,
Scope: scope,
Iat: iat.Unix(),
Exp: exp.Unix(),
}
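
JWTAccessTokenSourceWithScope is the scope-based counterpart to the audience-based JWTAccessTokenSourceFromJSON. A usage sketch, assuming a real service-account key exists at the illustrative path (Token() fails otherwise):

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"golang.org/x/oauth2/google"
)

func main() {
	// A service-account key file is assumed to exist at this path.
	jsonKey, err := ioutil.ReadFile("/path/to/service-account.json")
	if err != nil {
		log.Fatal(err)
	}
	// Self-signed JWT bound to scopes instead of a single audience;
	// no OAuth2 token-exchange round trip is performed.
	ts, err := google.JWTAccessTokenSourceWithScope(jsonKey,
		"https://www.googleapis.com/auth/devstorage.read_only")
	if err != nil {
		log.Fatal(err)
	}
	tok, err := ts.Token()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("token type:", tok.TokenType)
}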

View file

@ -563,6 +563,7 @@ ccflags="$@"
$2 ~ /^KEYCTL_/ ||
$2 ~ /^PERF_/ ||
$2 ~ /^SECCOMP_MODE_/ ||
$2 ~ /^SEEK_/ ||
$2 ~ /^SPLICE_/ ||
$2 ~ /^SYNC_FILE_RANGE_/ ||
$2 !~ /^AUDIT_RECORD_MAGIC/ &&

View file

@ -13,6 +13,7 @@
package unix
import (
"fmt"
"runtime"
"syscall"
"unsafe"
@ -398,6 +399,38 @@ func GetsockoptXucred(fd, level, opt int) (*Xucred, error) {
return x, err
}
func SysctlKinfoProcSlice(name string) ([]KinfoProc, error) {
mib, err := sysctlmib(name)
if err != nil {
return nil, err
}
// Find size.
n := uintptr(0)
if err := sysctl(mib, nil, &n, nil, 0); err != nil {
return nil, err
}
if n == 0 {
return nil, nil
}
if n%SizeofKinfoProc != 0 {
return nil, fmt.Errorf("sysctl() returned a size of %d, which is not a multiple of %d", n, SizeofKinfoProc)
}
// Read into buffer of that size.
buf := make([]KinfoProc, n/SizeofKinfoProc)
if err := sysctl(mib, (*byte)(unsafe.Pointer(&buf[0])), &n, nil, 0); err != nil {
return nil, err
}
if n%SizeofKinfoProc != 0 {
return nil, fmt.Errorf("sysctl() returned a size of %d, which is not a multiple of %d", n, SizeofKinfoProc)
}
// The actual call may return less than the original reported required
// size so ensure we deal with that.
return buf[:n/SizeofKinfoProc], nil
}
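
SysctlKinfoProcSlice wraps the usual two-step sysctl dance: probe the required size with a nil buffer, allocate, read, and re-check that the kernel returned whole KinfoProc records. A darwin-only usage sketch, assuming the kern.proc.all name resolves on the host:

//go:build darwin
// +build darwin

package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// kern.proc.all returns one KinfoProc per running process; the
	// helper added above handles the size probe, the read, and the
	// trailing partial-record check.
	procs, err := unix.SysctlKinfoProcSlice("kern.proc.all")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("running processes:", len(procs))
	for _, p := range procs[:min(3, len(procs))] {
		fmt.Println("pid:", p.Proc.P_pid)
	}
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}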
//sys sendfile(infd int, outfd int, offset int64, len *int64, hdtr unsafe.Pointer, flags int) (err error)
/*

View file

@ -1262,6 +1262,11 @@ const (
SCM_RIGHTS = 0x1
SCM_TIMESTAMP = 0x2
SCM_TIMESTAMP_MONOTONIC = 0x4
SEEK_CUR = 0x1
SEEK_DATA = 0x4
SEEK_END = 0x2
SEEK_HOLE = 0x3
SEEK_SET = 0x0
SHUT_RD = 0x0
SHUT_RDWR = 0x2
SHUT_WR = 0x1

View file

@ -1262,6 +1262,11 @@ const (
SCM_RIGHTS = 0x1
SCM_TIMESTAMP = 0x2
SCM_TIMESTAMP_MONOTONIC = 0x4
SEEK_CUR = 0x1
SEEK_DATA = 0x4
SEEK_END = 0x2
SEEK_HOLE = 0x3
SEEK_SET = 0x0
SHUT_RD = 0x0
SHUT_RDWR = 0x2
SHUT_WR = 0x1

View file

@ -1297,6 +1297,11 @@ const (
SCM_RIGHTS = 0x1
SCM_TIMESTAMP = 0x2
SCM_TIME_INFO = 0x7
SEEK_CUR = 0x1
SEEK_DATA = 0x3
SEEK_END = 0x2
SEEK_HOLE = 0x4
SEEK_SET = 0x0
SHUT_RD = 0x0
SHUT_RDWR = 0x2
SHUT_WR = 0x1

View file

@ -1298,6 +1298,11 @@ const (
SCM_RIGHTS = 0x1
SCM_TIMESTAMP = 0x2
SCM_TIME_INFO = 0x7
SEEK_CUR = 0x1
SEEK_DATA = 0x3
SEEK_END = 0x2
SEEK_HOLE = 0x4
SEEK_SET = 0x0
SHUT_RD = 0x0
SHUT_RDWR = 0x2
SHUT_WR = 0x1

View file

@ -1276,6 +1276,11 @@ const (
SCM_CREDS = 0x3
SCM_RIGHTS = 0x1
SCM_TIMESTAMP = 0x2
SEEK_CUR = 0x1
SEEK_DATA = 0x3
SEEK_END = 0x2
SEEK_HOLE = 0x4
SEEK_SET = 0x0
SHUT_RD = 0x0
SHUT_RDWR = 0x2
SHUT_WR = 0x1

View file

@ -1298,6 +1298,11 @@ const (
SCM_RIGHTS = 0x1
SCM_TIMESTAMP = 0x2
SCM_TIME_INFO = 0x7
SEEK_CUR = 0x1
SEEK_DATA = 0x3
SEEK_END = 0x2
SEEK_HOLE = 0x4
SEEK_SET = 0x0
SHUT_RD = 0x0
SHUT_RDWR = 0x2
SHUT_WR = 0x1

View file

@ -2284,6 +2284,12 @@ const (
SECCOMP_MODE_FILTER = 0x2
SECCOMP_MODE_STRICT = 0x1
SECURITYFS_MAGIC = 0x73636673
SEEK_CUR = 0x1
SEEK_DATA = 0x3
SEEK_END = 0x2
SEEK_HOLE = 0x4
SEEK_MAX = 0x4
SEEK_SET = 0x0
SELINUX_MAGIC = 0xf97cff8c
SHUT_RD = 0x0
SHUT_RDWR = 0x2
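
The new SEEK_DATA and SEEK_HOLE whence values allow skipping over holes in sparse files via lseek(2). A linux-only sketch using the constants added above; the file path is illustrative:

//go:build linux
// +build linux

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// SEEK_DATA finds the next region that actually contains data,
	// letting a reader skip holes in sparse files.
	f, err := os.Open("/tmp/sparse.dat")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Find the first data region at or after offset 0.
	off, err := unix.Seek(int(f.Fd()), 0, unix.SEEK_DATA)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("first data byte at offset:", off)
}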

View file

@ -535,3 +535,107 @@ type CtlInfo struct {
Id uint32
Name [96]byte
}
const SizeofKinfoProc = 0x288
type Eproc struct {
Paddr uintptr
Sess uintptr
Pcred Pcred
Ucred Ucred
Vm Vmspace
Ppid int32
Pgid int32
Jobc int16
Tdev int32
Tpgid int32
Tsess uintptr
Wmesg [8]int8
Xsize int32
Xrssize int16
Xccount int16
Xswrss int16
Flag int32
Login [12]int8
Spare [4]int32
_ [4]byte
}
type ExternProc struct {
P_starttime Timeval
P_vmspace *Vmspace
P_sigacts uintptr
P_flag int32
P_stat int8
P_pid int32
P_oppid int32
P_dupfd int32
User_stack *int8
Exit_thread *byte
P_debugger int32
Sigwait int32
P_estcpu uint32
P_cpticks int32
P_pctcpu uint32
P_wchan *byte
P_wmesg *int8
P_swtime uint32
P_slptime uint32
P_realtimer Itimerval
P_rtime Timeval
P_uticks uint64
P_sticks uint64
P_iticks uint64
P_traceflag int32
P_tracep uintptr
P_siglist int32
P_textvp uintptr
P_holdcnt int32
P_sigmask uint32
P_sigignore uint32
P_sigcatch uint32
P_priority uint8
P_usrpri uint8
P_nice int8
P_comm [17]int8
P_pgrp uintptr
P_addr uintptr
P_xstat uint16
P_acflag uint16
P_ru *Rusage
}
type Itimerval struct {
Interval Timeval
Value Timeval
}
type KinfoProc struct {
Proc ExternProc
Eproc Eproc
}
type Vmspace struct {
Dummy int32
Dummy2 *int8
Dummy3 [5]int32
Dummy4 [3]*int8
}
type Pcred struct {
Pc_lock [72]int8
Pc_ucred uintptr
P_ruid uint32
P_svuid uint32
P_rgid uint32
P_svgid uint32
P_refcnt int32
_ [4]byte
}
type Ucred struct {
Ref int32
Uid uint32
Ngroups int16
Groups [16]uint32
}

View file

@ -535,3 +535,107 @@ type CtlInfo struct {
Id uint32
Name [96]byte
}
const SizeofKinfoProc = 0x288
type Eproc struct {
Paddr uintptr
Sess uintptr
Pcred Pcred
Ucred Ucred
Vm Vmspace
Ppid int32
Pgid int32
Jobc int16
Tdev int32
Tpgid int32
Tsess uintptr
Wmesg [8]int8
Xsize int32
Xrssize int16
Xccount int16
Xswrss int16
Flag int32
Login [12]int8
Spare [4]int32
_ [4]byte
}
type ExternProc struct {
P_starttime Timeval
P_vmspace *Vmspace
P_sigacts uintptr
P_flag int32
P_stat int8
P_pid int32
P_oppid int32
P_dupfd int32
User_stack *int8
Exit_thread *byte
P_debugger int32
Sigwait int32
P_estcpu uint32
P_cpticks int32
P_pctcpu uint32
P_wchan *byte
P_wmesg *int8
P_swtime uint32
P_slptime uint32
P_realtimer Itimerval
P_rtime Timeval
P_uticks uint64
P_sticks uint64
P_iticks uint64
P_traceflag int32
P_tracep uintptr
P_siglist int32
P_textvp uintptr
P_holdcnt int32
P_sigmask uint32
P_sigignore uint32
P_sigcatch uint32
P_priority uint8
P_usrpri uint8
P_nice int8
P_comm [17]int8
P_pgrp uintptr
P_addr uintptr
P_xstat uint16
P_acflag uint16
P_ru *Rusage
}
type Itimerval struct {
Interval Timeval
Value Timeval
}
type KinfoProc struct {
Proc ExternProc
Eproc Eproc
}
type Vmspace struct {
Dummy int32
Dummy2 *int8
Dummy3 [5]int32
Dummy4 [3]*int8
}
type Pcred struct {
Pc_lock [72]int8
Pc_ucred uintptr
P_ruid uint32
P_svuid uint32
P_rgid uint32
P_svgid uint32
P_refcnt int32
_ [4]byte
}
type Ucred struct {
Ref int32
Uid uint32
Ngroups int16
Groups [16]uint32
}

View file

@ -431,6 +431,9 @@ type Winsize struct {
const (
AT_FDCWD = 0xfffafdcd
AT_SYMLINK_NOFOLLOW = 0x1
AT_REMOVEDIR = 0x2
AT_EACCESS = 0x4
AT_SYMLINK_FOLLOW = 0x8
)
type PollFd struct {

View file

@ -672,9 +672,10 @@ type Winsize struct {
const (
AT_FDCWD = -0x64
AT_REMOVEDIR = 0x800
AT_SYMLINK_FOLLOW = 0x400
AT_EACCESS = 0x100
AT_SYMLINK_NOFOLLOW = 0x200
AT_SYMLINK_FOLLOW = 0x400
AT_REMOVEDIR = 0x800
)
type PollFd struct {

View file

@ -675,9 +675,10 @@ type Winsize struct {
const (
AT_FDCWD = -0x64
AT_REMOVEDIR = 0x800
AT_SYMLINK_FOLLOW = 0x400
AT_EACCESS = 0x100
AT_SYMLINK_NOFOLLOW = 0x200
AT_SYMLINK_FOLLOW = 0x400
AT_REMOVEDIR = 0x800
)
type PollFd struct {

View file

@ -656,9 +656,10 @@ type Winsize struct {
const (
AT_FDCWD = -0x64
AT_REMOVEDIR = 0x800
AT_SYMLINK_FOLLOW = 0x400
AT_EACCESS = 0x100
AT_SYMLINK_NOFOLLOW = 0x200
AT_SYMLINK_FOLLOW = 0x400
AT_REMOVEDIR = 0x800
)
type PollFd struct {

View file

@ -653,9 +653,10 @@ type Winsize struct {
const (
AT_FDCWD = -0x64
AT_REMOVEDIR = 0x800
AT_SYMLINK_FOLLOW = 0x400
AT_EACCESS = 0x100
AT_SYMLINK_NOFOLLOW = 0x200
AT_SYMLINK_FOLLOW = 0x400
AT_REMOVEDIR = 0x800
)
type PollFd struct {

View file

@ -445,8 +445,10 @@ type Ptmget struct {
const (
AT_FDCWD = -0x64
AT_SYMLINK_FOLLOW = 0x400
AT_EACCESS = 0x100
AT_SYMLINK_NOFOLLOW = 0x200
AT_SYMLINK_FOLLOW = 0x400
AT_REMOVEDIR = 0x800
)
type PollFd struct {

View file

@ -453,8 +453,10 @@ type Ptmget struct {
const (
AT_FDCWD = -0x64
AT_SYMLINK_FOLLOW = 0x400
AT_EACCESS = 0x100
AT_SYMLINK_NOFOLLOW = 0x200
AT_SYMLINK_FOLLOW = 0x400
AT_REMOVEDIR = 0x800
)
type PollFd struct {

View file

@ -450,8 +450,10 @@ type Ptmget struct {
const (
AT_FDCWD = -0x64
AT_SYMLINK_FOLLOW = 0x400
AT_EACCESS = 0x100
AT_SYMLINK_NOFOLLOW = 0x200
AT_SYMLINK_FOLLOW = 0x400
AT_REMOVEDIR = 0x800
)
type PollFd struct {

View file

@ -453,8 +453,10 @@ type Ptmget struct {
const (
AT_FDCWD = -0x64
AT_SYMLINK_FOLLOW = 0x400
AT_EACCESS = 0x100
AT_SYMLINK_NOFOLLOW = 0x200
AT_SYMLINK_FOLLOW = 0x400
AT_REMOVEDIR = 0x800
)
type PollFd struct {

View file

@ -9,6 +9,8 @@ import (
"go/ast"
"reflect"
"sort"
"golang.org/x/tools/internal/typeparams"
)
// An ApplyFunc is invoked by Apply for each node n, even if n is nil,
@ -437,8 +439,12 @@ func (a *application) apply(parent ast.Node, name string, iter *iterator, n ast.
}
default:
if typeparams.IsListExpr(n) {
a.applyList(n, "ElemList")
} else {
panic(fmt.Sprintf("Apply: unexpected node type %T", n))
}
}
if a.post != nil && !a.post(&a.cursor) {
panic(abort)

View file

@ -14,7 +14,7 @@ import (
// It returns the X in Go 1.X.
func GoVersion(ctx context.Context, inv Invocation, r *Runner) (int, error) {
inv.Verb = "list"
inv.Args = []string{"-e", "-f", `{{context.ReleaseTags}}`}
inv.Args = []string{"-e", "-f", `{{context.ReleaseTags}}`, `--`, `unsafe`}
inv.Env = append(append([]string{}, inv.Env...), "GO111MODULE=off")
// Unset any unneeded flags, and remove them from BuildFlags, if they're
// present.

vendor/golang.org/x/tools/internal/typeparams/doc.go generated vendored Normal file
View file

@ -0,0 +1,11 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package typeparams provides functions to work indirectly with type parameter
// data stored in go/ast and go/types objects, while these APIs are guarded by a
// build constraint.
//
// This package exists to make it easier for tools to work with generic code,
// while also compiling against older Go versions.
package typeparams

View file

@ -0,0 +1,90 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !typeparams || !go1.17
// +build !typeparams !go1.17
package typeparams
import (
"go/ast"
"go/types"
)
// NOTE: doc comments must be kept in sync with typeparams.go.
// Enabled reports whether type parameters are enabled in the current build
// environment.
const Enabled = false
// UnpackIndex extracts all index expressions from e. For non-generic code this
// is always one expression: e.Index, but may be more than one expression for
// generic type instantiation.
func UnpackIndex(e *ast.IndexExpr) []ast.Expr {
return []ast.Expr{e.Index}
}
// IsListExpr reports whether n is an *ast.ListExpr, which is a new node type
// introduced to hold type arguments for generic type instantiation.
func IsListExpr(n ast.Node) bool {
return false
}
// ForTypeDecl extracts the (possibly nil) type parameter node list from n.
func ForTypeDecl(*ast.TypeSpec) *ast.FieldList {
return nil
}
// ForFuncDecl extracts the (possibly nil) type parameter node list from n.
func ForFuncDecl(*ast.FuncDecl) *ast.FieldList {
return nil
}
// ForSignature extracts the (possibly empty) type parameter object list from
// sig.
func ForSignature(*types.Signature) []*types.TypeName {
return nil
}
// HasTypeSet reports if iface has a type set.
func HasTypeSet(*types.Interface) bool {
return false
}
// IsComparable reports if iface is the comparable interface.
func IsComparable(*types.Interface) bool {
return false
}
// IsConstraint reports whether iface may only be used as a type parameter
// constraint (i.e. has a type set or is the comparable interface).
func IsConstraint(*types.Interface) bool {
return false
}
// ForNamed extracts the (possibly empty) type parameter object list from
// named.
func ForNamed(*types.Named) []*types.TypeName {
return nil
}
// NamedTArgs extracts the (possibly empty) type argument list from named.
func NamedTArgs(*types.Named) []types.Type {
return nil
}
// InitInferred initializes info to record inferred type information.
func InitInferred(*types.Info) {
}
// GetInferred extracts inferred type information from info for e.
//
// The expression e may have an inferred type if it is an *ast.IndexExpr
// representing partial instantiation of a generic function type for which type
// arguments have been inferred using constraint type inference, or if it is an
// *ast.CallExpr for which type arguments have been inferred using both
// constraint type inference and function argument inference.
func GetInferred(*types.Info, ast.Expr) ([]types.Type, *types.Signature) {
return nil, nil
}

View file

@ -0,0 +1,105 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build typeparams && go1.17
// +build typeparams,go1.17
package typeparams
import (
"go/ast"
"go/types"
)
// NOTE: doc comments must be kept in sync with notypeparams.go.
// Enabled reports whether type parameters are enabled in the current build
// environment.
const Enabled = true
// UnpackIndex extracts all index expressions from e. For non-generic code this
// is always one expression: e.Index, but may be more than one expression for
// generic type instantiation.
func UnpackIndex(e *ast.IndexExpr) []ast.Expr {
if x, _ := e.Index.(*ast.ListExpr); x != nil {
return x.ElemList
}
if e.Index != nil {
return []ast.Expr{e.Index}
}
return nil
}
// IsListExpr reports whether n is an *ast.ListExpr, which is a new node type
// introduced to hold type arguments for generic type instantiation.
func IsListExpr(n ast.Node) bool {
_, ok := n.(*ast.ListExpr)
return ok
}
// ForTypeDecl extracts the (possibly nil) type parameter node list from n.
func ForTypeDecl(n *ast.TypeSpec) *ast.FieldList {
return n.TParams
}
// ForFuncDecl extracts the (possibly nil) type parameter node list from n.
func ForFuncDecl(n *ast.FuncDecl) *ast.FieldList {
if n.Type != nil {
return n.Type.TParams
}
return nil
}
// ForSignature extracts the (possibly empty) type parameter object list from
// sig.
func ForSignature(sig *types.Signature) []*types.TypeName {
return sig.TParams()
}
// HasTypeSet reports if iface has a type set.
func HasTypeSet(iface *types.Interface) bool {
return iface.HasTypeList()
}
// IsComparable reports if iface is the comparable interface.
func IsComparable(iface *types.Interface) bool {
return iface.IsComparable()
}
// IsConstraint reports whether iface may only be used as a type parameter
// constraint (i.e. has a type set or is the comparable interface).
func IsConstraint(iface *types.Interface) bool {
return iface.IsConstraint()
}
// ForNamed extracts the (possibly empty) type parameter object list from
// named.
func ForNamed(named *types.Named) []*types.TypeName {
return named.TParams()
}
// NamedTArgs extracts the (possibly empty) type argument list from named.
func NamedTArgs(named *types.Named) []types.Type {
return named.TArgs()
}
// InitInferred initializes info to record inferred type information.
func InitInferred(info *types.Info) {
info.Inferred = make(map[ast.Expr]types.Inferred)
}
// GetInferred extracts inferred type information from info for e.
//
// The expression e may have an inferred type if it is an *ast.IndexExpr
// representing partial instantiation of a generic function type for which type
// arguments have been inferred using constraint type inference, or if it is an
// *ast.CallExpr for which type arguments have been inferred using both
// constraint type inference and function argument inference.
func GetInferred(info *types.Info, e ast.Expr) ([]types.Type, *types.Signature) {
if info.Inferred == nil {
return nil, nil
}
inf := info.Inferred[e]
return inf.TArgs, inf.Sig
}
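
notypeparams.go and typeparams.go declare the same exported API under mutually exclusive build constraints, so callers compile unchanged whether or not type parameters are available. A minimal two-file sketch of the same pattern; the package and identifier names are illustrative:

// ---- shim_off.go ----
//go:build !go1.17
// +build !go1.17

package shim

// Enabled reports whether the optional feature is compiled in.
const Enabled = false

// Describe returns a stub value in builds without the feature.
func Describe() string { return "feature disabled" }

// ---- shim_on.go ----
//go:build go1.17
// +build go1.17

package shim

// Enabled reports whether the optional feature is compiled in.
const Enabled = true

// Describe returns the real value in builds with the feature.
func Describe() string { return "feature enabled" }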

View file

@ -7,6 +7,7 @@ package internal
import (
"context"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
@ -62,56 +63,68 @@ const (
serviceAccountKey = "service_account"
)
// credentialsFromJSON returns a google.Credentials based on the input.
// credentialsFromJSON returns a google.Credentials from the JSON data
//
// - A self-signed JWT auth flow will be executed if: the data file is a service
account, no user scopes are provided, an audience is provided, a user
// specified endpoint is not provided, and credentials will not be
// impersonated.
// - A self-signed JWT flow will be executed if the following conditions are
// met:
// (1) At least one of the following is true:
// (a) No scope is provided
// (b) Scope for self-signed JWT flow is enabled
// (c) Audiences are explicitly provided by users
(2) No service account impersonation
//
// - Otherwise, executes a stanard OAuth 2.0 flow.
// - Otherwise, executes standard OAuth 2.0 flow
// More details: google.aip.dev/auth/4111
func credentialsFromJSON(ctx context.Context, data []byte, ds *DialSettings) (*google.Credentials, error) {
// By default, a standard OAuth 2.0 token source is created
cred, err := google.CredentialsFromJSON(ctx, data, ds.GetScopes()...)
if err != nil {
return nil, err
}
// Standard OAuth 2.0 Flow
if len(data) == 0 ||
len(ds.Scopes) > 0 ||
(ds.DefaultAudience == "" && len(ds.Audiences) == 0) ||
ds.ImpersonationConfig != nil ||
ds.Endpoint != "" {
return cred, nil
}
// Check if JSON is a service account and if so create a self-signed JWT.
var f struct {
Type string `json:"type"`
// The rest JSON fields are omitted because they are not used.
}
if err := json.Unmarshal(cred.JSON, &f); err != nil {
// Override the token source to use self-signed JWT if conditions are met
isJWTFlow, err := isSelfSignedJWTFlow(data, ds)
if err != nil {
return nil, err
}
if f.Type == serviceAccountKey {
ts, err := selfSignedJWTTokenSource(data, ds.DefaultAudience, ds.Audiences)
if isJWTFlow {
ts, err := selfSignedJWTTokenSource(data, ds)
if err != nil {
return nil, err
}
cred.TokenSource = ts
}
return cred, err
}
func selfSignedJWTTokenSource(data []byte, defaultAudience string, audiences []string) (oauth2.TokenSource, error) {
audience := defaultAudience
if len(audiences) > 0 {
// TODO(shinfan): Update golang oauth to support multiple audiences.
if len(audiences) > 1 {
return nil, fmt.Errorf("multiple audiences support is not implemented")
func isSelfSignedJWTFlow(data []byte, ds *DialSettings) (bool, error) {
if (ds.EnableJwtWithScope || ds.HasCustomAudience() || len(ds.GetScopes()) == 0) &&
ds.ImpersonationConfig == nil {
// Check if JSON is a service account and if so create a self-signed JWT.
var f struct {
Type string `json:"type"`
// The rest JSON fields are omitted because they are not used.
}
audience = audiences[0]
if err := json.Unmarshal(data, &f); err != nil {
return false, err
}
return f.Type == serviceAccountKey, nil
}
return false, nil
}
func selfSignedJWTTokenSource(data []byte, ds *DialSettings) (oauth2.TokenSource, error) {
if len(ds.GetScopes()) > 0 && !ds.HasCustomAudience() {
// Scopes are preferred in self-signed JWT unless the scope is not available
// or a custom audience is used.
return google.JWTAccessTokenSourceWithScope(data, ds.GetScopes()...)
} else if ds.GetAudience() != "" {
// Fallback to audience if scope is not provided
return google.JWTAccessTokenSourceFromJSON(data, ds.GetAudience())
} else {
return nil, errors.New("neither scopes or audience are available for the self-signed JWT")
}
return google.JWTAccessTokenSourceFromJSON(data, audience)
}
// QuotaProjectFromCreds returns the quota project from the JSON blob in the provided credentials.

View file

@ -24,6 +24,7 @@ type DialSettings struct {
DefaultMTLSEndpoint string
Scopes []string
DefaultScopes []string
EnableJwtWithScope bool
TokenSource oauth2.TokenSource
Credentials *google.Credentials
CredentialsFile string // if set, Token Source is ignored.
@ -60,6 +61,19 @@ func (ds *DialSettings) GetScopes() []string {
return ds.DefaultScopes
}
// GetAudience returns the user-provided audience, if set, or else falls back to the default audience.
func (ds *DialSettings) GetAudience() string {
if ds.HasCustomAudience() {
return ds.Audiences[0]
}
return ds.DefaultAudience
}
// HasCustomAudience returns true if a custom audience is provided by users.
func (ds *DialSettings) HasCustomAudience() bool {
return len(ds.Audiences) > 0
}
// Validate reports an error if ds is invalid.
func (ds *DialSettings) Validate() error {
if ds.SkipValidation {

View file

@ -94,3 +94,15 @@ func (w withDefaultScopes) Apply(o *internal.DialSettings) {
o.DefaultScopes = make([]string, len(w))
copy(o.DefaultScopes, w)
}
// EnableJwtWithScope returns a ClientOption that specifies if scope can be used
// with self-signed JWT.
func EnableJwtWithScope() option.ClientOption {
return enableJwtWithScope(true)
}
type enableJwtWithScope bool
func (w enableJwtWithScope) Apply(o *internal.DialSettings) {
o.EnableJwtWithScope = bool(w)
}
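
EnableJwtWithScope lives in google.golang.org/api/option/internaloption and is meant to be passed by the generated client libraries rather than set directly by end users. A sketch of how a client constructor might combine it with scopes; the key path is illustrative:

package main

import (
	"context"
	"log"

	"google.golang.org/api/option"
	"google.golang.org/api/option/internaloption"
	storage "google.golang.org/api/storage/v1"
)

func main() {
	ctx := context.Background()
	// With a service-account key, scopes, and EnableJwtWithScope set,
	// credentialsFromJSON above picks the self-signed JWT flow instead
	// of the standard OAuth 2.0 token exchange.
	svc, err := storage.NewService(ctx,
		option.WithCredentialsFile("/path/to/service-account.json"),
		option.WithScopes(storage.DevstorageReadOnlyScope),
		internaloption.EnableJwtWithScope(),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = svc // ready for API calls
}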

View file

@ -2454,7 +2454,7 @@ func (c *BucketAccessControlsDeleteCall) Header() http.Header {
func (c *BucketAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -2607,7 +2607,7 @@ func (c *BucketAccessControlsGetCall) Header() http.Header {
func (c *BucketAccessControlsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -2776,7 +2776,7 @@ func (c *BucketAccessControlsInsertCall) Header() http.Header {
func (c *BucketAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -2951,7 +2951,7 @@ func (c *BucketAccessControlsListCall) Header() http.Header {
func (c *BucketAccessControlsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -3117,7 +3117,7 @@ func (c *BucketAccessControlsPatchCall) Header() http.Header {
func (c *BucketAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -3296,7 +3296,7 @@ func (c *BucketAccessControlsUpdateCall) Header() http.Header {
func (c *BucketAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -3484,7 +3484,7 @@ func (c *BucketsDeleteCall) Header() http.Header {
func (c *BucketsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -3665,7 +3665,7 @@ func (c *BucketsGetCall) Header() http.Header {
func (c *BucketsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -3873,7 +3873,7 @@ func (c *BucketsGetIamPolicyCall) Header() http.Header {
func (c *BucketsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -4092,7 +4092,7 @@ func (c *BucketsInsertCall) Header() http.Header {
func (c *BucketsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -4351,7 +4351,7 @@ func (c *BucketsListCall) Header() http.Header {
func (c *BucketsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -4565,7 +4565,7 @@ func (c *BucketsLockRetentionPolicyCall) Header() http.Header {
func (c *BucketsLockRetentionPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -4802,7 +4802,7 @@ func (c *BucketsPatchCall) Header() http.Header {
func (c *BucketsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -5033,7 +5033,7 @@ func (c *BucketsSetIamPolicyCall) Header() http.Header {
func (c *BucketsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -5211,7 +5211,7 @@ func (c *BucketsTestIamPermissionsCall) Header() http.Header {
func (c *BucketsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -5453,7 +5453,7 @@ func (c *BucketsUpdateCall) Header() http.Header {
func (c *BucketsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -5665,7 +5665,7 @@ func (c *ChannelsStopCall) Header() http.Header {
func (c *ChannelsStopCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -5787,7 +5787,7 @@ func (c *DefaultObjectAccessControlsDeleteCall) Header() http.Header {
func (c *DefaultObjectAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -5940,7 +5940,7 @@ func (c *DefaultObjectAccessControlsGetCall) Header() http.Header {
func (c *DefaultObjectAccessControlsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -6110,7 +6110,7 @@ func (c *DefaultObjectAccessControlsInsertCall) Header() http.Header {
func (c *DefaultObjectAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -6302,7 +6302,7 @@ func (c *DefaultObjectAccessControlsListCall) Header() http.Header {
func (c *DefaultObjectAccessControlsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -6480,7 +6480,7 @@ func (c *DefaultObjectAccessControlsPatchCall) Header() http.Header {
func (c *DefaultObjectAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -6659,7 +6659,7 @@ func (c *DefaultObjectAccessControlsUpdateCall) Header() http.Header {
func (c *DefaultObjectAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -6834,7 +6834,7 @@ func (c *NotificationsDeleteCall) Header() http.Header {
func (c *NotificationsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -6985,7 +6985,7 @@ func (c *NotificationsGetCall) Header() http.Header {
func (c *NotificationsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -7157,7 +7157,7 @@ func (c *NotificationsInsertCall) Header() http.Header {
func (c *NotificationsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -7334,7 +7334,7 @@ func (c *NotificationsListCall) Header() http.Header {
func (c *NotificationsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -7514,7 +7514,7 @@ func (c *ObjectAccessControlsDeleteCall) Header() http.Header {
func (c *ObjectAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -7693,7 +7693,7 @@ func (c *ObjectAccessControlsGetCall) Header() http.Header {
func (c *ObjectAccessControlsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -7888,7 +7888,7 @@ func (c *ObjectAccessControlsInsertCall) Header() http.Header {
func (c *ObjectAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -8089,7 +8089,7 @@ func (c *ObjectAccessControlsListCall) Header() http.Header {
func (c *ObjectAccessControlsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -8281,7 +8281,7 @@ func (c *ObjectAccessControlsPatchCall) Header() http.Header {
func (c *ObjectAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -8486,7 +8486,7 @@ func (c *ObjectAccessControlsUpdateCall) Header() http.Header {
func (c *ObjectAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -8729,7 +8729,7 @@ func (c *ObjectsComposeCall) Header() http.Header {
func (c *ObjectsComposeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -9085,7 +9085,7 @@ func (c *ObjectsCopyCall) Header() http.Header {
func (c *ObjectsCopyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -9417,7 +9417,7 @@ func (c *ObjectsDeleteCall) Header() http.Header {
func (c *ObjectsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -9654,7 +9654,7 @@ func (c *ObjectsGetCall) Header() http.Header {
func (c *ObjectsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -9908,7 +9908,7 @@ func (c *ObjectsGetIamPolicyCall) Header() http.Header {
func (c *ObjectsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -10228,7 +10228,7 @@ func (c *ObjectsInsertCall) Header() http.Header {
func (c *ObjectsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -10603,7 +10603,7 @@ func (c *ObjectsListCall) Header() http.Header {
func (c *ObjectsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -10924,7 +10924,7 @@ func (c *ObjectsPatchCall) Header() http.Header {
func (c *ObjectsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -11329,7 +11329,7 @@ func (c *ObjectsRewriteCall) Header() http.Header {
func (c *ObjectsRewriteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -11636,7 +11636,7 @@ func (c *ObjectsSetIamPolicyCall) Header() http.Header {
func (c *ObjectsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -11841,7 +11841,7 @@ func (c *ObjectsTestIamPermissionsCall) Header() http.Header {
func (c *ObjectsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -12106,7 +12106,7 @@ func (c *ObjectsUpdateCall) Header() http.Header {
func (c *ObjectsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -12426,7 +12426,7 @@ func (c *ObjectsWatchAllCall) Header() http.Header {
func (c *ObjectsWatchAllCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -12645,7 +12645,7 @@ func (c *ProjectsHmacKeysCreateCall) Header() http.Header {
func (c *ProjectsHmacKeysCreateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -12798,7 +12798,7 @@ func (c *ProjectsHmacKeysDeleteCall) Header() http.Header {
func (c *ProjectsHmacKeysDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -12937,7 +12937,7 @@ func (c *ProjectsHmacKeysGetCall) Header() http.Header {
func (c *ProjectsHmacKeysGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -13139,7 +13139,7 @@ func (c *ProjectsHmacKeysListCall) Header() http.Header {
func (c *ProjectsHmacKeysListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -13338,7 +13338,7 @@ func (c *ProjectsHmacKeysUpdateCall) Header() http.Header {
func (c *ProjectsHmacKeysUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@ -13517,7 +13517,7 @@ func (c *ProjectsServiceAccountGetCall) Header() http.Header {
func (c *ProjectsServiceAccountGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210606")
reqHeaders.Set("x-goog-api-client", "gl-go/"+gensupport.GoVersion()+" gdcl/20210622")
for k, v := range c.header_ {
reqHeaders[k] = v
}

vendor/modules.txt vendored

@ -1,5 +1,4 @@
# cloud.google.com/go v0.84.0
## explicit
cloud.google.com/go
cloud.google.com/go/compute/metadata
cloud.google.com/go/iam
@ -28,7 +27,7 @@ github.com/VictoriaMetrics/metricsql/binaryop
# github.com/VividCortex/ewma v1.2.0
## explicit
github.com/VividCortex/ewma
# github.com/aws/aws-sdk-go v1.38.57
# github.com/aws/aws-sdk-go v1.38.66
## explicit
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/arn
@ -158,7 +157,7 @@ github.com/prometheus/client_golang/prometheus
github.com/prometheus/client_golang/prometheus/internal
# github.com/prometheus/client_model v0.2.0
github.com/prometheus/client_model/go
# github.com/prometheus/common v0.28.0
# github.com/prometheus/common v0.29.0
## explicit
github.com/prometheus/common/expfmt
github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg
@ -239,7 +238,7 @@ golang.org/x/lint/golint
# golang.org/x/mod v0.4.2
golang.org/x/mod/module
golang.org/x/mod/semver
# golang.org/x/net v0.0.0-20210525063256-abc453219eb5
# golang.org/x/net v0.0.0-20210614182718-04defd469f4e
## explicit
golang.org/x/net/context
golang.org/x/net/context/ctxhttp
@ -251,7 +250,7 @@ golang.org/x/net/internal/socks
golang.org/x/net/internal/timeseries
golang.org/x/net/proxy
golang.org/x/net/trace
# golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c
# golang.org/x/oauth2 v0.0.0-20210622215436-a8dc77f794b6
## explicit
golang.org/x/oauth2
golang.org/x/oauth2/authhandler
@ -263,7 +262,7 @@ golang.org/x/oauth2/jws
golang.org/x/oauth2/jwt
# golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/sync/errgroup
# golang.org/x/sys v0.0.0-20210608053332-aa57babbf139
# golang.org/x/sys v0.0.0-20210616094352-59db8d763f22
## explicit
golang.org/x/sys/execabs
golang.org/x/sys/internal/unsafeheader
@ -274,7 +273,8 @@ golang.org/x/text/secure/bidirule
golang.org/x/text/transform
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
# golang.org/x/tools v0.1.2
# golang.org/x/tools v0.1.4
## explicit
golang.org/x/tools/cmd/goimports
golang.org/x/tools/go/ast/astutil
golang.org/x/tools/go/gcexportdata
@ -287,10 +287,11 @@ golang.org/x/tools/internal/fastwalk
golang.org/x/tools/internal/gocommand
golang.org/x/tools/internal/gopathwalk
golang.org/x/tools/internal/imports
golang.org/x/tools/internal/typeparams
# golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
golang.org/x/xerrors
golang.org/x/xerrors/internal
# google.golang.org/api v0.48.0
# google.golang.org/api v0.49.0
## explicit
google.golang.org/api/googleapi
google.golang.org/api/googleapi/transport
@ -317,7 +318,7 @@ google.golang.org/appengine/internal/modules
google.golang.org/appengine/internal/remote_api
google.golang.org/appengine/internal/urlfetch
google.golang.org/appengine/urlfetch
# google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d
# google.golang.org/genproto v0.0.0-20210617175327-b9e0b3197ced
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/genproto/googleapis/iam/v1
google.golang.org/genproto/googleapis/rpc/code