mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git
synced 2025-03-01 15:33:35 +00:00

Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

commit 22e48e6517
171 changed files with 5002 additions and 2634 deletions
.github/ISSUE_TEMPLATE/bug_report.md (vendored): 2 changes

@@ -11,7 +11,7 @@ A clear and concise description of what the bug is.
 It would be great to [upgrade](https://docs.victoriametrics.com/#how-to-upgrade)
 to [the latest available release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases)
 and verify whether the bug is reproducible there.
-It's also recommended to read the [troubleshooting docs](https://docs.victoriametrics.com/#troubleshooting).
+It's also recommended to read the [troubleshooting docs](https://docs.victoriametrics.com/Troubleshooting.html).
 
 **To Reproduce**
 Steps to reproduce the behavior.
.github/workflows/check-licenses.yml (vendored): 2 changes

@@ -17,7 +17,7 @@ jobs:
     - name: Setup Go
       uses: actions/setup-go@main
       with:
-        go-version: 1.19.3
+        go-version: 1.19.4
       id: go
     - name: Code checkout
       uses: actions/checkout@master
.github/workflows/main.yml (vendored): 2 changes

@@ -19,7 +19,7 @@ jobs:
     - name: Setup Go
       uses: actions/setup-go@main
       with:
-        go-version: 1.19.3
+        go-version: 1.19.4
       id: go
     - name: Code checkout
       uses: actions/checkout@master
README.md: 21 changes

@@ -787,7 +787,7 @@ to your needs or when testing bugfixes.
 
 ### Development build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make victoria-metrics` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `victoria-metrics` binary and puts it into the `bin` folder.
 

@@ -803,7 +803,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
 
 ### Development ARM build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make victoria-metrics-linux-arm` or `make victoria-metrics-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `victoria-metrics-linux-arm` or `victoria-metrics-linux-arm64` binary respectively and puts it into the `bin` folder.
 

@@ -817,7 +817,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
 
 `Pure Go` mode builds only Go code without [cgo](https://golang.org/cmd/cgo/) dependencies.
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make victoria-metrics-pure` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `victoria-metrics-pure` binary and puts it into the `bin` folder.
@@ -1245,7 +1245,11 @@ Example contents for `-relabelConfig` file:
   regex: true
 ```
 
-VictoriaMetrics provides additional relabeling features such as Graphite-style relabeling. See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) for more details.
+VictoriaMetrics provides additional relabeling features such as Graphite-style relabeling.
+See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) for more details.
+
+The relabeling can be debugged at `http://victoriametrics:8428/metric-relabel-debug` page.
+See [these docs](https://docs.victoriametrics.com/vmagent.html#relabel-debug) for more details.
 
 ## Federation
@@ -1351,7 +1355,12 @@ with the enabled de-duplication. See [this section](#deduplication) for details.
 
 ## Deduplication
 
-VictoriaMetrics leaves a single raw sample with the biggest timestamp per each `-dedup.minScrapeInterval` discrete interval if `-dedup.minScrapeInterval` is set to positive duration. For example, `-dedup.minScrapeInterval=60s` would leave a single raw sample with the biggest timestamp per each discrete 60s interval. If multiple raw samples have the same biggest timestamp on the given `-dedup.minScrapeInterval` discrete interval, then an arbitrary sample out of these samples is left. This aligns with the [staleness rules in Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness).
+VictoriaMetrics leaves a single raw sample with the biggest timestamp per each `-dedup.minScrapeInterval` discrete interval
+if `-dedup.minScrapeInterval` is set to positive duration. For example, `-dedup.minScrapeInterval=60s` would leave a single
+raw sample with the biggest timestamp per each discrete 60s interval.
+This aligns with the [staleness rules in Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness).
+
+If multiple raw samples have the same biggest timestamp on the given `-dedup.minScrapeInterval` discrete interval, then the sample with the biggest value is left.
 
 The `-dedup.minScrapeInterval=D` is equivalent to `-downsampling.period=0s:D` if [downsampling](#downsampling) is enabled. So it is safe to use deduplication and downsampling simultaneously.
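The deduplication rule documented in the hunk above (keep the sample with the biggest timestamp per interval; on equal timestamps keep the biggest value) can be sketched in Go as follows. This is an illustrative standalone example with assumed types, not VictoriaMetrics' actual implementation:

```go
package main

import "fmt"

type sample struct {
	ts  int64 // unix timestamp in milliseconds
	val float64
}

// deduplicate keeps one sample per discrete interval: the one with the biggest
// timestamp, preferring the biggest value when timestamps are equal.
func deduplicate(samples []sample, intervalMs int64) map[int64]sample {
	best := map[int64]sample{} // interval index -> winning sample
	for _, s := range samples {
		idx := s.ts / intervalMs
		b, seen := best[idx]
		if !seen || s.ts > b.ts || (s.ts == b.ts && s.val > b.val) {
			best[idx] = s
		}
	}
	return best
}

func main() {
	in := []sample{{1000, 1}, {59000, 2}, {59000, 5}, {61000, 3}}
	// With a 60s interval the first interval keeps {59000 5}: it shares the
	// biggest timestamp with {59000 2}, but the bigger value wins the tie.
	fmt.Println(deduplicate(in, 60000))
}
```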
@@ -2298,8 +2307,6 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
     Supports an array of values separated by comma or specified via multiple flags.
   -relabelConfig string
     Optional path to a file with relabeling rules, which are applied to all the ingested metrics. The path can point either to local file or to http url. See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal
-  -relabelDebug
-    Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs
   -retentionFilter array
     Retention filter in the format 'filter:retention'. For example, '{env="dev"}:3d' configures the retention for time series with env="dev" label to 3 days. See https://docs.victoriametrics.com/#retention-filters for details. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html
     Supports an array of values separated by comma or specified via multiple flags.
@@ -112,6 +112,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
 		{"vmui", "Web UI"},
 		{"targets", "status for discovered active targets"},
 		{"service-discovery", "labels before and after relabeling for discovered targets"},
+		{"metric-relabel-debug", "debug metric relabeling"},
 		{"api/v1/targets", "advanced information about discovered targets in JSON format"},
 		{"config", "-promscrape.config contents"},
 		{"metrics", "available service metrics"},
@@ -245,8 +245,6 @@ scrape_configs:
 * `scrape_align_interval: duration` for aligning scrapes to the given interval instead of using random offset
   in the range `[0 ... scrape_interval]` for scraping each target. The random offset helps spreading scrapes evenly in time.
 * `scrape_offset: duration` for specifying the exact offset for scraping instead of using random offset in the range `[0 ... scrape_interval]`.
-* `relabel_debug: true` for enabling debug logging during relabeling of the discovered targets. See [these docs](#relabeling).
-* `metric_relabel_debug: true` for enabling debug logging during relabeling of the scraped metrics. See [these docs](#relabeling).
 
 See [scrape_configs docs](https://docs.victoriametrics.com/sd_configs.html#scrape_configs) for more details on all the supported options.
@@ -419,26 +417,25 @@ with [additional enhancements](#relabeling-enhancements). The relabeling can be
   This relabeling is used for modifying labels in discovered targets and for dropping unneded targets.
   See [relabeling cookbook](https://docs.victoriametrics.com/relabeling.html) for details.
 
-  This relabeling can be debugged by passing `relabel_debug: true` option to the corresponding `scrape_config` section.
-  In this case `vmagent` logs target labels before and after the relabeling and then drops the logged target.
+  This relabeling can be debugged by clicking the `debug` link at the corresponding target on the `http://vmagent:8429/targets` page
+  or on the `http://vmagent:8429/service-discovery` page. See [these docs](#relabel-debug) for details.
 
 * At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file.
   This relabeling is used for modifying labels in scraped metrics and for dropping unneeded metrics.
   See [relabeling cookbook](https://docs.victoriametrics.com/relabeling.html) for details.
 
-  This relabeling can be debugged by passing `metric_relabel_debug: true` option to the corresponding `scrape_config` section.
-  In this case `vmagent` logs metrics before and after the relabeling and then drops the logged metrics.
+  This relabeling can be debugged via `http://vmagent:8429/metric-relabel-debug` page. See [these docs](#relabel-debug) for details.
 
 * At the `-remoteWrite.relabelConfig` file. This relabeling is used for modifying labels for all the collected metrics
-  (inluding [metrics obtained via push-based protocols](#how-to-push-data-to-vmagent)) and for dropping unneeded metrics
+  (including [metrics obtained via push-based protocols](#how-to-push-data-to-vmagent)) and for dropping unneeded metrics
   before sending them to all the configured `-remoteWrite.url` addresses.
-  This relabeling can be debugged by passing `-remoteWrite.relabelDebug` command-line option to `vmagent`.
-  In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to remote storage.
+
+  This relabeling can be debugged via `http://vmagent:8429/metric-relabel-debug` page. See [these docs](#relabel-debug) for details.
 
 * At the `-remoteWrite.urlRelabelConfig` files. This relabeling is used for modifying labels for metrics
   and for dropping unneeded metrics before sending them to a particular `-remoteWrite.url`.
-  This relabeling can be debugged by passing `-remoteWrite.urlRelabelDebug` command-line options to `vmagent`.
-  In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to the corresponding `-remoteWrite.url`.
+
+  This relabeling can be debugged via `http://vmagent:8429/metric-relabel-debug` page. See [these docs](#relabel-debug) for details.
 
 All the files with relabeling configs can contain special placeholders in the form `%{ENV_VAR}`,
 which are replaced by the corresponding environment variable values.
@@ -453,9 +450,6 @@ The following articles contain useful information about Prometheus relabeling:
 * [Extracting labels from legacy metric names](https://www.robustperception.io/extracting-labels-from-legacy-metric-names)
 * [relabel_configs vs metric_relabel_configs](https://www.robustperception.io/relabel_configs-vs-metric_relabel_configs)
 
-[This relabeler playground](https://relabeler.promlabs.com/) can help debugging issues related to relabeling.
-
-
 ## Relabeling enhancements
 
 `vmagent` provides the following enhancements on top of Prometheus-compatible relabeling:
@@ -597,6 +591,28 @@ Important notes about `action: graphite` relabeling rules:
 The `action: graphite` relabeling rules are easier to write and maintain than `action: replace` for labels extraction from Graphite-style metric names.
 Additionally, the `action: graphite` relabeling rules usually work much faster than the equivalent `action: replace` rules.
 
+## Relabel debug
+
+`vmagent` and [single-node VictoriaMetrics](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html)
+provide the following tools for debugging target-level and metric-level relabeling:
+
+- Target-level relabeling (e.g. `relabel_configs` section at [scrape_configs](https://docs.victoriametrics.com/sd_configs.html#scrape_configs))
+  can be performed by navigating to `http://vmagent:8429/targets` page (`http://victoriametrics:8428/targets` page for single-node VictoriaMetrics)
+  and clicking the `debug` link at the target, which must be debugged.
+  The opened page will show step-by-step results for the actual relabeling rules applied to the target labels.
+
+  The `http://vmagent:8429/targets` page shows only active targets. If you need to understand why some target
+  is dropped during the relabeling, then navigate to `http://vmagent:8428/service-discovery` page
+  (`http://victoriametrics:8428/service-discovery` for single-node VictoriaMetrics), find the dropped target
+  and click the `debug` link there. The opened page will show step-by-step results for the actual relabeling rules,
+  which result to target drop.
+
+- Metric-level relabeling (e.g. `metric_relabel_configs` section at [scrape_configs](https://docs.victoriametrics.com/sd_configs.html#scrape_configs)
+  and all the relabeling, which can be set up via `-relabelConfig`, `-remoteWrite.relabelConfig` and `-remoteWrite.urlRelabelConfig`
+  command-line flags) can be performed by navigating to `http://vmagent:8429/metric-relabel-debug` page
+  (`http://victoriametrics:8428/metric-relabel-debug` page for single-node VictoriaMetrics)
+  and submitting there relabeling rules together with the metric to be relabeled.
+  The page will show step-by-step results for the entered relabeling rules executed against the entered metric.
+
 ## Prometheus staleness markers
@@ -654,8 +670,9 @@ scrape_configs:
     'match[]': ['{__name__!=""}']
 ```
 
-Note that `sample_limit` and `series_limit` [scrape_config options](https://docs.victoriametrics.com/sd_configs.html#scrape_configs)
-cannot be used in stream parsing mode because the parsed data is pushed to remote storage as soon as it is parsed.
+Note that `vmagent` in stream parsing mode stores up to `sample_limit` samples to the configured `-remoteStorage.url`
+instead of droping all the samples read from the target, because the parsed data is sent to the remote storage
+as soon as it is parsed in stream parsing mode.
 
 ## Scraping big number of targets
@@ -744,8 +761,8 @@ By default `vmagent` doesn't limit the number of time series each scrape target
 
 * Via `-promscrape.seriesLimitPerTarget` command-line option. This limit is applied individually
   to all the scrape targets defined in the file pointed by `-promscrape.config`.
-* Via `series_limit` config option at `scrape_config` section. This limit is applied individually
-  to all the scrape targets defined in the given `scrape_config`.
+* Via `series_limit` config option at [scrape_config](https://docs.victoriametrics.com/sd_configs.html#scrape_configs) section.
+  This limit is applied individually to all the scrape targets defined in the given `scrape_config`.
 * Via `__series_limit__` label, which can be set with [relabeling](#relabeling) at `relabel_configs` section.
   This limit is applied to the corresponding scrape targets. Typical use case: to set the limit
   via [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) for targets,
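The `series_limit` options in the hunk above boil down to tracking how many distinct series a target has exposed and rejecting new ones past the limit. A minimal standalone Go sketch of such a limiter — illustrative only, not vmagent's actual implementation:

```go
package main

import "fmt"

type seriesLimiter struct {
	limit int
	seen  map[string]struct{} // keys of series already accepted
}

func newSeriesLimiter(limit int) *seriesLimiter {
	return &seriesLimiter{limit: limit, seen: map[string]struct{}{}}
}

// allow reports whether a sample for the given series key may be ingested.
func (sl *seriesLimiter) allow(seriesKey string) bool {
	if _, ok := sl.seen[seriesKey]; ok {
		return true // already-tracked series are always allowed
	}
	if len(sl.seen) >= sl.limit {
		return false // a new series over the limit is dropped
	}
	sl.seen[seriesKey] = struct{}{}
	return true
}

func main() {
	sl := newSeriesLimiter(2)
	for _, k := range []string{"a", "b", "a", "c"} {
		fmt.Println(k, sl.allow(k)) // "c" is rejected as the third distinct series
	}
}
```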
@@ -1031,7 +1048,7 @@ It may be needed to build `vmagent` from source code when developing or testing
 
 ### Development build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make vmagent` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds the `vmagent` binary and puts it into the `bin` folder.
@@ -1060,7 +1077,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
 
 ### Development ARM build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make vmagent-linux-arm` or `make vmagent-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics)
    It builds `vmagent-linux-arm` or `vmagent-linux-arm64` binary respectively and puts it into the `bin` folder.
@@ -1427,8 +1444,6 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
     Supports array of values separated by comma or specified via multiple flags.
   -remoteWrite.relabelConfig string
     Optional path to file with relabel_config entries. The path can point either to local file or to http url. These entries are applied to all the metrics before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details
-  -remoteWrite.relabelDebug
-    Whether to log metrics before and after relabeling with -remoteWrite.relabelConfig. If the -remoteWrite.relabelDebug is enabled, then the metrics aren't sent to remote storage. This is useful for debugging the relabeling configs
   -remoteWrite.roundDigits array
     Round metric values to this number of decimal digits after the point before writing them to remote storage. Examples: -remoteWrite.roundDigits=2 would round 1.236 to 1.24, while -remoteWrite.roundDigits=-1 would round 126.78 to 130. By default digits rounding is disabled. Set it to 100 for disabling it for a particular remote storage. This option may be used for improving data compression for the stored metrics
     Supports array of values separated by comma or specified via multiple flags.
@@ -1463,9 +1478,6 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
   -remoteWrite.urlRelabelConfig array
     Optional path to relabel config for the corresponding -remoteWrite.url. The path can point either to local file or to http url
     Supports an array of values separated by comma or specified via multiple flags.
-  -remoteWrite.urlRelabelDebug array
-    Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig. If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. This is useful for debugging the relabeling configs
-    Supports array of values separated by comma or specified via multiple flags.
   -sortLabels
     Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}Enabled sorting for labels can slow down ingestion performance a bit
   -tls
@@ -52,10 +52,12 @@ func insertRows(at *auth.Token, series []parser.Series, extraLabels []prompbmars
 			Name:  "__name__",
 			Value: ss.Metric,
 		})
-		labels = append(labels, prompbmarshal.Label{
-			Name:  "host",
-			Value: ss.Host,
-		})
+		if ss.Host != "" {
+			labels = append(labels, prompbmarshal.Label{
+				Name:  "host",
+				Value: ss.Host,
+			})
+		}
 		if ss.Device != "" {
 			labels = append(labels, prompbmarshal.Label{
 				Name:  "device",
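The change above stops attaching a `host` label with an empty value: in the Prometheus data model a label with an empty value is equivalent to an absent label, so skipping empty values keeps the emitted series canonical. A standalone illustration of the rule, using a hypothetical helper that is not from the repository:

```go
package main

import "fmt"

type label struct{ name, value string }

// addNonEmptyLabel appends a label only when its value is non-empty, since an
// empty-valued label is the same as no label at all in the Prometheus model.
func addNonEmptyLabel(labels []label, name, value string) []label {
	if value == "" {
		return labels
	}
	return append(labels, label{name, value})
}

func main() {
	labels := addNonEmptyLabel(nil, "host", "") // no-op for an empty host
	labels = addNonEmptyLabel(labels, "device", "eth0")
	fmt.Println(labels) // [{device eth0}]
}
```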
@@ -207,6 +207,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
 	httpserver.WriteAPIHelp(w, [][2]string{
 		{"targets", "status for discovered active targets"},
 		{"service-discovery", "labels before and after relabeling for discovered targets"},
+		{"metric-relabel-debug", "debug metric relabeling"},
 		{"api/v1/targets", "advanced information about discovered targets in JSON format"},
 		{"config", "-promscrape.config contents"},
 		{"metrics", "available service metrics"},
@@ -325,6 +326,14 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
 		promscrapeServiceDiscoveryRequests.Inc()
 		promscrape.WriteServiceDiscovery(w, r)
 		return true
+	case "/prometheus/metric-relabel-debug", "/metric-relabel-debug":
+		promscrapeMetricRelabelDebugRequests.Inc()
+		promscrape.WriteMetricRelabelDebug(w, r)
+		return true
+	case "/prometheus/target-relabel-debug", "/target-relabel-debug":
+		promscrapeTargetRelabelDebugRequests.Inc()
+		promscrape.WriteTargetRelabelDebug(w, r)
+		return true
 	case "/prometheus/api/v1/targets", "/api/v1/targets":
 		promscrapeAPIV1TargetsRequests.Inc()
 		w.Header().Set("Content-Type", "application/json")
@@ -546,7 +555,11 @@ var (
 
 	promscrapeTargetsRequests          = metrics.NewCounter(`vmagent_http_requests_total{path="/targets"}`)
 	promscrapeServiceDiscoveryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/service-discovery"}`)
-	promscrapeAPIV1TargetsRequests     = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/targets"}`)
+
+	promscrapeMetricRelabelDebugRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/metric-relabel-debug"}`)
+	promscrapeTargetRelabelDebugRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/target-relabel-debug"}`)
+
+	promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/targets"}`)
 
 	promscrapeTargetResponseRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/target_response"}`)
 	promscrapeTargetResponseErrors   = metrics.NewCounter(`vmagent_http_request_errors_total{path="/target_response"}`)
@@ -18,13 +18,8 @@ var (
 	relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabel_config entries. "+
 		"The path can point either to local file or to http url. These entries are applied to all the metrics "+
 		"before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details")
-	relabelDebugGlobal = flag.Bool("remoteWrite.relabelDebug", false, "Whether to log metrics before and after relabeling with -remoteWrite.relabelConfig. "+
-		"If the -remoteWrite.relabelDebug is enabled, then the metrics aren't sent to remote storage. This is useful for debugging the relabeling configs")
 	relabelConfigPaths = flagutil.NewArrayString("remoteWrite.urlRelabelConfig", "Optional path to relabel config for the corresponding -remoteWrite.url. "+
 		"The path can point either to local file or to http url")
-	relabelDebug = flagutil.NewArrayBool("remoteWrite.urlRelabelDebug", "Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig. "+
-		"If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. "+
-		"This is useful for debugging the relabeling configs")
 
 	usePromCompatibleNaming = flag.Bool("usePromCompatibleNaming", false, "Whether to replace characters unsupported by Prometheus with underscores "+
 		"in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. "+

@@ -42,7 +37,7 @@ func CheckRelabelConfigs() error {
 func loadRelabelConfigs() (*relabelConfigs, error) {
 	var rcs relabelConfigs
 	if *relabelConfigPathGlobal != "" {
-		global, err := promrelabel.LoadRelabelConfigs(*relabelConfigPathGlobal, *relabelDebugGlobal)
+		global, err := promrelabel.LoadRelabelConfigs(*relabelConfigPathGlobal)
 		if err != nil {
 			return nil, fmt.Errorf("cannot load -remoteWrite.relabelConfig=%q: %w", *relabelConfigPathGlobal, err)
 		}

@@ -58,7 +53,7 @@ func loadRelabelConfigs() (*relabelConfigs, error) {
 			// Skip empty relabel config.
 			continue
 		}
-		prc, err := promrelabel.LoadRelabelConfigs(path, relabelDebug.GetOptionalArg(i))
+		prc, err := promrelabel.LoadRelabelConfigs(path)
 		if err != nil {
 			return nil, fmt.Errorf("cannot load relabel configs from -remoteWrite.urlRelabelConfig=%q: %w", path, err)
 		}
@@ -1317,7 +1317,7 @@ spec:
 
 ### Development build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make vmalert` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `vmalert` binary and puts it into the `bin` folder.
 

@@ -1333,7 +1333,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
 
 ### Development ARM build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make vmalert-linux-arm` or `make vmalert-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `vmalert-linux-arm` or `vmalert-linux-arm64` binary respectively and puts it into the `bin` folder.
@@ -284,7 +284,7 @@ func (ar *AlertingRule) Exec(ctx context.Context, ts time.Time, limit int) ([]pr
 		duration: time.Since(start),
 		samples:  len(qMetrics),
 		err:      err,
-		req:      req,
+		curl:     requestToCurl(req),
 	}
 
 	defer func() {
@@ -421,20 +421,26 @@ func (e *executor) exec(ctx context.Context, rule Rule, ts time.Time, resolveDur
 		return fmt.Errorf("rule %q: failed to execute: %w", rule, err)
 	}
 
-	errGr := new(utils.ErrGroup)
 	if e.rw != nil {
-		pushToRW := func(tss []prompbmarshal.TimeSeries) {
+		pushToRW := func(tss []prompbmarshal.TimeSeries) error {
+			var lastErr error
 			for _, ts := range tss {
 				remoteWriteTotal.Inc()
 				if err := e.rw.Push(ts); err != nil {
 					remoteWriteErrors.Inc()
-					errGr.Add(fmt.Errorf("rule %q: remote write failure: %w", rule, err))
+					lastErr = fmt.Errorf("rule %q: remote write failure: %w", rule, err)
 				}
 			}
+			return lastErr
 		}
-		pushToRW(tss)
+		if err := pushToRW(tss); err != nil {
+			return err
+		}
+
 		staleSeries := e.getStaleSeries(rule, tss, ts)
-		pushToRW(staleSeries)
+		if err := pushToRW(staleSeries); err != nil {
+			return err
+		}
 	}
 
 	ar, ok := rule.(*AlertingRule)

@@ -448,6 +454,7 @@ func (e *executor) exec(ctx context.Context, rule Rule, ts time.Time, resolveDur
 	}
 
 	wg := sync.WaitGroup{}
+	errGr := new(utils.ErrGroup)
 	for _, nt := range e.notifiers() {
 		wg.Add(1)
 		go func(nt notifier.Notifier) {
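The rewrite above changes `pushToRW` from fire-and-forget (errors were only accumulated in `errGr`) to returning the last push error, so `exec` now fails fast and the caller can retry the whole evaluation. The pattern in isolation — a minimal standalone sketch, not vmalert's code:

```go
package main

import (
	"errors"
	"fmt"
)

// pushAll attempts every push, remembers the last failure, and reports it to
// the caller so the whole batch can be retried.
func pushAll(items []int, push func(int) error) error {
	var lastErr error
	for _, it := range items {
		if err := push(it); err != nil {
			lastErr = fmt.Errorf("push %d: %w", it, err)
		}
	}
	return lastErr
}

func main() {
	err := pushAll([]int{1, 2, 3}, func(i int) error {
		if i == 2 {
			return errors.New("boom")
		}
		return nil
	})
	fmt.Println(err) // push 2: boom — surfaced instead of silently logged
}
```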
@@ -3,8 +3,6 @@ package main
 import (
 	"context"
 	"fmt"
-	"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
-	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
 	"reflect"
 	"sort"
 	"testing"

@@ -12,6 +10,9 @@ import (
 
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config"
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
+	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/remotewrite"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
 )
@@ -452,3 +453,24 @@ func TestFaultyNotifier(t *testing.T) {
 	}
 	t.Fatalf("alive notifier didn't receive notification by %v", deadline)
 }
+
+func TestFaultyRW(t *testing.T) {
+	fq := &fakeQuerier{}
+	fq.add(metricWithValueAndLabels(t, 1, "__name__", "foo", "job", "bar"))
+
+	r := &RecordingRule{
+		Name:  "test",
+		state: newRuleState(),
+		q:     fq,
+	}
+
+	e := &executor{
+		rw:                       &remotewrite.Client{},
+		previouslySentSeriesToRW: make(map[uint64]map[string][]prompbmarshal.Label),
+	}
+
+	err := e.exec(context.Background(), r, time.Now(), 0, 10)
+	if err == nil {
+		t.Fatalf("expected to get an error from faulty RW client, got nil instead")
+	}
+}
@@ -239,7 +239,7 @@ func TestAlert_toPromLabels(t *testing.T) {
       replacement: "aaa"
     - action: labeldrop
       regex: "env.*"
-`), false)
+`))
 	if err != nil {
 		t.Fatalf("unexpected error: %s", err)
 	}
@@ -83,12 +83,12 @@ func (cfg *Config) UnmarshalYAML(unmarshal func(interface{}) error) error {
 	if cfg.Timeout.Duration() == 0 {
 		cfg.Timeout = promutils.NewDuration(time.Second * 10)
 	}
-	rCfg, err := promrelabel.ParseRelabelConfigs(cfg.RelabelConfigs, false)
+	rCfg, err := promrelabel.ParseRelabelConfigs(cfg.RelabelConfigs)
 	if err != nil {
 		return fmt.Errorf("failed to parse relabeling config: %w", err)
 	}
 	cfg.parsedRelabelConfigs = rCfg
-	arCfg, err := promrelabel.ParseRelabelConfigs(cfg.AlertRelabelConfigs, false)
+	arCfg, err := promrelabel.ParseRelabelConfigs(cfg.AlertRelabelConfigs)
 	if err != nil {
 		return fmt.Errorf("failed to parse alert relabeling config: %w", err)
 	}
@@ -121,7 +121,7 @@ func (rr *RecordingRule) Exec(ctx context.Context, ts time.Time, limit int) ([]p
 		at:       ts,
 		duration: time.Since(start),
 		samples:  len(qMetrics),
-		req:      req,
+		curl:     requestToCurl(req),
 	}
 
 	defer func() {
@@ -3,7 +3,6 @@ package main
 import (
 	"context"
 	"errors"
-	"net/http"
 	"sync"
 	"time"
 

@@ -54,8 +53,8 @@ type ruleStateEntry struct {
 	// stores the number of samples returned during
 	// the last evaluation
 	samples int
-	// stores the HTTP request used by datasource during rule.Exec
-	req *http.Request
+	// stores the curl command reflecting the HTTP request used during rule.Exec
+	curl string
 }
 
 const defaultStateEntriesLimit = 20
@@ -72,7 +72,7 @@ func (cw *curlWriter) add(str string) {
 }
 
 func requestToCurl(req *http.Request) string {
-	if req.URL == nil {
+	if req == nil || req.URL == nil {
 		return ""
 	}
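The guard above protects `requestToCurl` against a nil request, not just a nil URL — needed now that `curl` strings are rendered eagerly for every rule execution. A self-contained sketch of the guarded function with a simplified body (the real function renders a complete curl command):

```go
package main

import (
	"fmt"
	"net/http"
)

func requestToCurl(req *http.Request) string {
	if req == nil || req.URL == nil {
		return "" // a nil request or URL yields an empty string instead of a panic
	}
	return "curl -X " + req.Method + " '" + req.URL.String() + "'"
}

func main() {
	fmt.Println(requestToCurl(nil) == "") // true: no panic on nil
	req, _ := http.NewRequest("GET", "http://localhost:8880/api/v1/query", nil)
	fmt.Println(requestToCurl(req))
}
```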
@@ -8,7 +8,6 @@ import (
 	"sort"
 	"strconv"
 	"strings"
-	"sync"
 
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/tpl"

@@ -18,20 +17,14 @@ import (
 )
 
 var (
-	once     = sync.Once{}
-	apiLinks [][2]string
-	navItems []tpl.NavItem
-)
-
-func initLinks() {
 	apiLinks = [][2]string{
 		// api links are relative since they can be used by external clients,
 		// such as Grafana, and proxied via vmselect.
 		{"api/v1/rules", "list all loaded groups and rules"},
 		{"api/v1/alerts", "list all active alerts"},
 		{fmt.Sprintf("api/v1/alert?%s=<int>&%s=<int>", paramGroupID, paramAlertID), "get alert status by group and alert ID"},
 
 		// system links
 	}
 	systemLinks = [][2]string{
 		{"/flags", "command-line flags"},
 		{"/metrics", "list of application metrics"},
 		{"/-/reload", "reload configuration"},

@@ -43,7 +36,7 @@ func initLinks() {
 		{Name: "Notifiers", Url: "notifiers"},
 		{Name: "Docs", Url: "https://docs.victoriametrics.com/vmalert.html"},
 	}
-}
+)
 
 type requestHandler struct {
 	m *manager

@@ -57,10 +50,6 @@ var (
 )
 
 func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
-	once.Do(func() {
-		initLinks()
-	})
-
 	if strings.HasPrefix(r.URL.Path, "/vmalert/static") {
 		staticServer.ServeHTTP(w, r)
 		return true
@@ -16,11 +16,16 @@
 	<p>
 		API:<br>
 		{% for _, p := range apiLinks %}
-			{%code
-				p, doc := p[0], p[1]
-			%}
-			<a href="{%s p %}">{%s p %}</a> - {%s doc %}<br/>
+			{%code p, doc := p[0], p[1] %}
+			<a href="{%s p %}">{%s p %}</a> - {%s doc %}<br/>
 		{% endfor %}
+		{% if r.Header.Get("X-Forwarded-For") == "" %}
+			System:<br>
+			{% for _, p := range systemLinks %}
+				{%code p, doc := p[0], p[1] %}
+				<a href="{%s p %}">{%s p %}</a> - {%s doc %}<br/>
+			{% endfor %}
+		{% endif %}
 	</p>
 	{%= tpl.Footer(r) %}
 {% endfunc %}

@@ -457,7 +462,7 @@
 	<td class="text-center">{%f.3 u.duration.Seconds() %}s</td>
 	<td class="text-center">{%s u.at.Format(time.RFC3339) %}</td>
 	<td>
-		<textarea class="curl-area" rows="1" onclick="this.focus();this.select()">{%s requestToCurl(u.req) %}</textarea>
+		<textarea class="curl-area" rows="1" onclick="this.focus();this.select()">{%s u.curl %}</textarea>
 	</td>
 	</tr>
 	</li>
File diff suppressed because it is too large
@@ -167,7 +167,7 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic
 
 ### Development build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make vmauth` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `vmauth` binary and puts it into the `bin` folder.
 

@@ -286,7 +286,7 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic
 
 ### Development build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make vmbackup` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `vmbackup` binary and puts it into the `bin` folder.
 

@@ -1017,7 +1017,7 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic
 
 ### Development build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make vmctl` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `vmctl` binary and puts it into the `bin` folder.

@@ -1046,7 +1046,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
 
 #### Development ARM build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make vmctl-linux-arm` or `make vmctl-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `vmctl-linux-arm` or `vmctl-linux-arm64` binary respectively and puts it into the `bin` folder.
@@ -171,6 +171,41 @@ curl 'http://localhost:8431/api/v1/labels' -H 'Authorization: Bearer eyJhbGciOiJ
 # check rate limit
 ```
 
+## JWT signature verification
+
+`vmgateway` supports JWT signature verification.
+
+Supported algorithms are `RS256`, `RS384`, `RS512`, `ES256`, `ES384`, `ES512`, `PS256`, `PS384`, `PS512`.
+Tokens with unsupported algorithms will be rejected.
+
+In order to enable JWT signature verification, you need to specify keys for signature verification.
+The following flags are used to specify keys:
+- `-auth.publicKeyFiles` - allows to pass file path to file with public key.
+- `-auth.publicKeys` - allows to pass public key directly.
+
+Note that both flags support passing multiple keys and also can be used together.
+
+Example usage:
+```console
+./bin/vmgateway -eula \
+  -enable.auth \
+  -write.url=http://localhost:8480 \
+  -read.url=http://localhost:8481 \
+  -auth.publicKeyFiles=public_key.pem \
+  -auth.publicKeyFiles=public_key2.pem \
+  -auth.publicKeys=`-----BEGIN PUBLIC KEY-----
+MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAu1SU1LfVLPHCozMxH2Mo
+4lgOEePzNm0tRgeLezV6ffAt0gunVTLw7onLRnrq0/IzW7yWR7QkrmBL7jTKEn5u
++qKhbwKfBstIs+bMY2Zkp18gnTxKLxoS2tFczGkPLPgizskuemMghRniWaoLcyeh
+kd3qqGElvW/VDL5AaWTg0nLVkjRo9z+40RQzuVaE8AkAFmxZzow3x+VJYKdjykkJ
+0iT9wCS0DRTXu269V264Vf/3jvredZiKRkgwlL9xNAwxXFg0x/XFw005UWVRIkdg
+cKWTjpBP2dPwVZ4WWC+9aGVd+Gyn1o0CLelf4rEjGoXbAAEgAqeGUxrcIlbjXfbc
+mwIDAQAB
+-----END PUBLIC KEY-----
+`
+```
+This command will result in 3 keys loaded: 2 keys from files and 1 from command line.
+
 ## Configuration
 
 The shortlist of configuration flags include the following:
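For readers who want to see what such verification looks like in code, here is a minimal sketch of RS256 verification against a PEM public key. It uses the third-party `github.com/golang-jwt/jwt/v4` package purely as an assumption for illustration — this is not necessarily how `vmgateway` implements it, and the RSA type check below covers only the `RS*` family:

```go
package main

import (
	"fmt"
	"os"

	"github.com/golang-jwt/jwt/v4"
)

// verify parses tokenString and checks its signature against publicKeyPEM.
func verify(tokenString string, publicKeyPEM []byte) error {
	pub, err := jwt.ParseRSAPublicKeyFromPEM(publicKeyPEM)
	if err != nil {
		return fmt.Errorf("cannot parse public key: %w", err)
	}
	_, err = jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		// Reject tokens signed with unexpected algorithms, e.g. HMAC.
		if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
			return nil, fmt.Errorf("unexpected signing method %q", t.Method.Alg())
		}
		return pub, nil
	})
	return err
}

func main() {
	key, err := os.ReadFile("public_key.pem") // token passed as first argument
	if err != nil {
		panic(err)
	}
	if err := verify(os.Args[1], key); err != nil {
		fmt.Println("token rejected:", err)
		return
	}
	fmt.Println("token verified")
}
```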
@@ -178,6 +213,12 @@ The shortlist of configuration flags include the following:
 ```console
   -auth.httpHeader string
     HTTP header name to look for JWT authorization token (default "Authorization")
+  -auth.publicKeyFiles array
+    Path file with public key to verify JWT token signature
+    Supports an array of values separated by comma or specified via multiple flags.
+  -auth.publicKeys array
+    Public keys to verify JWT token signature
+    Supports an array of values separated by comma or specified via multiple flags.
   -clusterMode
     enable this for the cluster version
   -datasource.appendTypePrefix
@@ -336,7 +377,7 @@ The shortlist of configuration flags include the following:
 ## Limitations
 
 * Access Control:
-  * `jwt` token must be validated by external system, currently `vmgateway` can't validate the signature.
+  * `jwt` token signature verification for `HMAC` algorithms is not supported.
 * RateLimiting:
   * limits applied based on queries to `datasource.url`
   * only cluster version can be rate-limited.
@@ -54,7 +54,9 @@ func insertRows(series []parser.Series, extraLabels []prompbmarshal.Label) error
 		rowsTotal += len(ss.Points)
 		ctx.Labels = ctx.Labels[:0]
 		ctx.AddLabel("", ss.Metric)
-		ctx.AddLabel("host", ss.Host)
+		if ss.Host != "" {
+			ctx.AddLabel("host", ss.Host)
+		}
 		if ss.Device != "" {
 			ctx.AddLabel("device", ss.Device)
 		}
@@ -333,7 +333,8 @@ var (
 
 	promscrapeTargetsRequests          = metrics.NewCounter(`vm_http_requests_total{path="/targets"}`)
 	promscrapeServiceDiscoveryRequests = metrics.NewCounter(`vm_http_requests_total{path="/service-discovery"}`)
-	promscrapeAPIV1TargetsRequests     = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/targets"}`)
+
+	promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/targets"}`)
 
 	promscrapeTargetResponseRequests = metrics.NewCounter(`vm_http_requests_total{path="/target_response"}`)
 	promscrapeTargetResponseErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/target_response"}`)
@@ -19,8 +19,6 @@ var (
 	relabelConfig = flag.String("relabelConfig", "", "Optional path to a file with relabeling rules, which are applied to all the ingested metrics. "+
 		"The path can point either to local file or to http url. "+
 		"See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal")
-	relabelDebug = flag.Bool("relabelDebug", false, "Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, "+
-		"then the metrics aren't sent to storage. This is useful for debugging the relabeling configs")
 
 	usePromCompatibleNaming = flag.Bool("usePromCompatibleNaming", false, "Whether to replace characters unsupported by Prometheus with underscores "+
 		"in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. "+

@@ -77,7 +75,7 @@ func loadRelabelConfig() (*promrelabel.ParsedConfigs, error) {
 	if len(*relabelConfig) == 0 {
 		return nil, nil
 	}
-	pcs, err := promrelabel.LoadRelabelConfigs(*relabelConfig, *relabelDebug)
+	pcs, err := promrelabel.LoadRelabelConfigs(*relabelConfig)
 	if err != nil {
 		return nil, fmt.Errorf("error when reading -relabelConfig=%q: %w", *relabelConfig, err)
 	}
@@ -186,7 +186,7 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic
 
 ### Development build
 
-1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
+1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
 2. Run `make vmrestore` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
    It builds `vmrestore` binary and puts it into the `bin` folder.
@@ -21,6 +21,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
 	"github.com/VictoriaMetrics/metrics"

@@ -215,8 +216,8 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 	}
 
 	if path == "/vmalert" {
-		// vmalert access via incomplete url without `/` in the end. Redirecto to complete url.
-		// Use relative redirect, since, since the hostname and path prefix may be incorrect if VictoriaMetrics
+		// vmalert access via incomplete url without `/` in the end. Redirect to complete url.
+		// Use relative redirect, since the hostname and path prefix may be incorrect if VictoriaMetrics
 		// is hidden behind vmauth or similar proxy.
 		httpserver.Redirect(w, "vmalert/")
 		return true

@@ -423,6 +424,14 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
 			return true
 		}
 		return true
+	case "/metric-relabel-debug":
+		promscrapeMetricRelabelDebugRequests.Inc()
+		promscrape.WriteMetricRelabelDebug(w, r)
+		return true
+	case "/target-relabel-debug":
+		promscrapeTargetRelabelDebugRequests.Inc()
+		promscrape.WriteTargetRelabelDebug(w, r)
+		return true
 	case "/api/v1/rules", "/rules":
 		rulesRequests.Inc()
 		if len(*vmalertProxyURL) > 0 {

@@ -587,6 +596,9 @@ var (
 	graphiteTagsDelSeriesRequests = metrics.NewCounter(`vm_http_requests_total{path="/tags/delSeries"}`)
 	graphiteTagsDelSeriesErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/tags/delSeries"}`)
 
+	promscrapeMetricRelabelDebugRequests = metrics.NewCounter(`vm_http_requests_total{path="/metric-relabel-debug"}`)
+	promscrapeTargetRelabelDebugRequests = metrics.NewCounter(`vm_http_requests_total{path="/target-relabel-debug"}`)
+
 	graphiteFunctionsRequests = metrics.NewCounter(`vm_http_requests_total{path="/functions"}`)
 
 	vmalertRequests = metrics.NewCounter(`vm_http_requests_total{path="/vmalert"}`)
@@ -144,7 +144,7 @@ func TestMergeSortBlocks(t *testing.T) {
 		},
 	}, 1, &Result{
 		Timestamps: []int64{1, 2, 4, 5, 10, 11, 12},
-		Values:     []float64{21, 22, 23, 7, 24, 5, 26},
+		Values:     []float64{21, 22, 23, 7, 24, 25, 26},
 	})
 
 	// Multiple blocks with identical timestamp ranges, no deduplication.
@@ -748,7 +748,7 @@ func getIntK(k float64, kMax int) int {
 	if math.IsNaN(k) {
 		return 0
 	}
-	kn := int(k)
+	kn := floatToIntBounded(k)
 	if kn < 0 {
 		return 0
 	}

@@ -999,14 +999,10 @@ func aggrFuncLimitK(afa *aggrFuncArg) ([]*timeseries, error) {
 	if err := expectTransformArgsNum(args, 2); err != nil {
 		return nil, err
 	}
-	limits, err := getScalar(args[0], 0)
+	limit, err := getIntNumber(args[0], 0)
 	if err != nil {
 		return nil, fmt.Errorf("cannot obtain limit arg: %w", err)
 	}
-	limit := 0
-	if len(limits) > 0 {
-		limit = int(limits[0])
-	}
 	if limit < 0 {
 		limit = 0
 	}

@@ -1155,3 +1151,13 @@ func lessWithNaNs(a, b float64) bool {
 	}
 	return a < b
 }
+
+func floatToIntBounded(f float64) int {
+	if f > math.MaxInt {
+		return math.MaxInt
+	}
+	if f < math.MinInt {
+		return math.MinInt
+	}
+	return int(f)
+}
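The new `floatToIntBounded` helper above exists because a plain Go conversion of an out-of-range `float64` (such as `+Inf` from `limitk(inf, ...)`) to `int` yields an implementation-dependent value. A standalone demonstration of the clamping:

```go
package main

import (
	"fmt"
	"math"
)

// floatToIntBounded mirrors the helper added in the diff above: it clamps the
// float to the representable int range before converting, because in Go the
// result of converting an out-of-range float64 to int is not defined to be
// any particular value.
func floatToIntBounded(f float64) int {
	if f > math.MaxInt {
		return math.MaxInt
	}
	if f < math.MinInt {
		return math.MinInt
	}
	return int(f)
}

func main() {
	fmt.Println(floatToIntBounded(math.Inf(1)))  // math.MaxInt
	fmt.Println(floatToIntBounded(math.Inf(-1))) // math.MinInt
	fmt.Println(floatToIntBounded(42.9))         // 42 (fractional part truncated)
}
```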
@@ -5549,6 +5549,30 @@ func TestExecSuccess(t *testing.T) {
 		resultExpected := []netstorage.Result{r1, r2}
 		f(q, resultExpected)
 	})
+	t.Run(`limitk(inf)`, func(t *testing.T) {
+		t.Parallel()
+		q := `sort(limitk(inf, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss")))`
+		r1 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{10, 10, 10, 10, 10, 10},
+			Timestamps: timestampsExpected,
+		}
+		r1.MetricName.Tags = []storage.Tag{{
+			Key:   []byte("foo"),
+			Value: []byte("bar"),
+		}}
+		r2 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10.666666666666666, 12, 13.333333333333334},
+			Timestamps: timestampsExpected,
+		}
+		r2.MetricName.Tags = []storage.Tag{{
+			Key:   []byte("baz"),
+			Value: []byte("sss"),
+		}}
+		resultExpected := []netstorage.Result{r1, r2}
+		f(q, resultExpected)
+	})
 	t.Run(`any()`, func(t *testing.T) {
 		t.Parallel()
 		q := `any(label_set(10, "__name__", "x", "foo", "bar") or label_set(time()/150, "__name__", "y", "baz", "sss"))`
@@ -2127,7 +2127,7 @@ func getIntNumber(arg interface{}, argNum int) (int, error) {
 	}
 	n := 0
 	if len(v) > 0 {
-		n = int(v[0])
+		n = floatToIntBounded(v[0])
 	}
 	return n, nil
 }
@@ -371,14 +371,10 @@ func transformBucketsLimit(tfa *transformFuncArg) ([]*timeseries, error) {
 	if err := expectTransformArgsNum(args, 2); err != nil {
 		return nil, err
 	}
-	limits, err := getScalar(args[0], 1)
+	limit, err := getIntNumber(args[0], 0)
 	if err != nil {
 		return nil, err
 	}
-	limit := 0
-	if len(limits) > 0 {
-		limit = int(limits[0])
-	}
 	if limit <= 0 {
 		return nil, nil
 	}

@@ -390,6 +386,7 @@ func transformBucketsLimit(tfa *transformFuncArg) ([]*timeseries, error) {
 	if len(tss) == 0 {
 		return nil, nil
 	}
+	pointsCount := len(tss[0].Values)
 
 	// Group timeseries by all MetricGroup+tags excluding `le` tag.
 	type x struct {

@@ -437,7 +434,7 @@ func transformBucketsLimit(tfa *transformFuncArg) ([]*timeseries, error) {
 	sort.Slice(leGroup, func(i, j int) bool {
 		return leGroup[i].le < leGroup[j].le
 	})
-	for n := range limits {
+	for n := 0; n < pointsCount; n++ {
 		prevValue := float64(0)
 		for i := range leGroup {
 			xx := &leGroup[i]
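The loop fix above replaces `for n := range limits` — which iterates once per point of the *limit* argument — with an explicit loop over `pointsCount`, decoupling the iteration count from the limit arg now that the limit is a single `int`. The bug class in isolation, as a standalone sketch:

```go
package main

import "fmt"

func main() {
	limits := []float64{3, 3} // scalar arg evaluated at 2 query points
	points := []float64{10, 20, 30, 40}

	visited := 0
	for range limits { // wrong bound: runs len(limits) == 2 times
		visited++
	}
	fmt.Println(visited, "of", len(points), "points visited") // 2 of 4

	visited = 0
	for n := 0; n < len(points); n++ { // correct bound: one pass per data point
		visited++
	}
	fmt.Println(visited, "of", len(points), "points visited") // 4 of 4
}
```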
@@ -209,7 +209,7 @@ func (d *Deadline) String() string {
 	startTime := time.Unix(int64(d.deadline), 0).Add(-d.timeout)
 	elapsed := time.Since(startTime)
 	msg := fmt.Sprintf("%.3f seconds (elapsed %.3f seconds)", d.timeout.Seconds(), elapsed.Seconds())
-	if d.flagHint != "" {
+	if float64(elapsed)/float64(d.timeout) > 0.9 && d.flagHint != "" {
 		msg += fmt.Sprintf("; the timeout can be adjusted with `%s` command-line flag", d.flagHint)
 	}
 	return msg
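With the change above, the flag hint is appended only when the query actually consumed more than 90% of its deadline, so fast failures no longer suggest bumping the timeout. The logic in isolation, as a standalone sketch mirroring the diff:

```go
package main

import (
	"fmt"
	"time"
)

// deadlineMsg formats a timeout report and appends the flag hint only when
// the elapsed time exceeded 90% of the configured timeout.
func deadlineMsg(timeout, elapsed time.Duration, flagHint string) string {
	msg := fmt.Sprintf("%.3f seconds (elapsed %.3f seconds)", timeout.Seconds(), elapsed.Seconds())
	if float64(elapsed)/float64(timeout) > 0.9 && flagHint != "" {
		msg += fmt.Sprintf("; the timeout can be adjusted with `%s` command-line flag", flagHint)
	}
	return msg
}

func main() {
	fmt.Println(deadlineMsg(30*time.Second, 5*time.Second, "-search.maxQueryDuration"))  // no hint
	fmt.Println(deadlineMsg(30*time.Second, 29*time.Second, "-search.maxQueryDuration")) // hint shown
}
```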
@@ -1,4 +1,4 @@
-FROM golang:1.19.3 as build-web-stage
+FROM golang:1.19.4 as build-web-stage
 COPY build /build
 
 WORKDIR /build
@@ -1296,7 +1296,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -1398,7 +1399,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -1506,20 +1508,15 @@
           "mode": "off"
         }
       },
-      "links": [
-        {
-          "targetBlank": true,
-          "title": "Drilldown",
-          "url": "/d/oS7Bi_0Wz?viewPanel=189&var-job=${__field.labels.job}&var-ds=$ds&var-instance=$instance&${__url_time_range}"
-        }
-      ],
+      "links": [],
       "mappings": [],
       "min": 0,
       "thresholds": {
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -1616,20 +1613,15 @@
           "mode": "off"
         }
       },
-      "links": [
-        {
-          "targetBlank": true,
-          "title": "Drilldown",
-          "url": "/d/oS7Bi_0Wz?viewPanel=192&var-job=${__field.labels.job}&var-ds=$ds&var-instance=$instance&${__url_time_range}"
-        }
-      ],
+      "links": [],
       "mappings": [],
       "min": 0,
       "thresholds": {
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -1736,7 +1728,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -1891,7 +1884,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -2026,7 +2020,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -2148,7 +2143,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -2279,7 +2275,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -2382,7 +2379,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -2486,7 +2484,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -2589,7 +2588,8 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green"
+            "color": "green",
+            "value": null
           },
           {
             "color": "red",

@@ -5306,4 +5306,4 @@
   "uid": "wNf0q_kZk",
   "version": 1,
   "weekStart": ""
-}
+}
@@ -1538,8 +1538,7 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green",
-            "value": null
+            "color": "green"
           },
           {
             "color": "red",

@@ -1555,7 +1554,7 @@
       "h": 8,
       "w": 12,
       "x": 0,
-      "y": 3
+      "y": 11
     },
     "id": 109,
     "links": [],

@@ -1652,8 +1651,7 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green",
-            "value": null
+            "color": "green"
           },
           {
             "color": "red",

@@ -1669,7 +1667,7 @@
       "h": 8,
       "w": 12,
       "x": 12,
-      "y": 3
+      "y": 11
     },
     "id": 111,
     "links": [],

@@ -1763,8 +1761,7 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green",
-            "value": null
+            "color": "green"
           },
           {
             "color": "red",

@@ -1793,7 +1790,7 @@
       "h": 8,
       "w": 12,
       "x": 0,
-      "y": 11
+      "y": 19
     },
     "id": 81,
     "links": [],

@@ -1898,8 +1895,7 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green",
-            "value": null
+            "color": "green"
           },
           {
             "color": "red",

@@ -1928,7 +1924,7 @@
       "h": 8,
       "w": 12,
       "x": 12,
-      "y": 11
+      "y": 19
     },
     "id": 7,
     "options": {

@@ -2028,8 +2024,7 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green",
-            "value": null
+            "color": "green"
           },
           {
             "color": "red",

@@ -2045,7 +2040,7 @@
       "h": 8,
       "w": 12,
       "x": 0,
-      "y": 19
+      "y": 27
     },
     "id": 83,
     "links": [],

@@ -2133,8 +2128,7 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green",
-            "value": null
+            "color": "green"
           },
           {
             "color": "red",

@@ -2150,7 +2144,7 @@
       "h": 8,
       "w": 12,
       "x": 12,
-      "y": 19
+      "y": 27
     },
     "id": 39,
     "links": [],

@@ -2238,8 +2232,7 @@
         "mode": "absolute",
         "steps": [
           {
-            "color": "green",
-            "value": null
+            "color": "green"
           },
           {
             "color": "red",

@@ -2255,7 +2248,7 @@
       "h": 8,
       "w": 12,
       "x": 12,
-      "y": 27
+      "y": 35
     },
     "id": 41,
     "links": [],
@@ -3162,6 +3155,130 @@
       ],
       "title": "Invalid datapoints rate ($instance)",
       "type": "timeseries"
     },
+    {
+      "datasource": {
+        "type": "prometheus",
+        "uid": "$ds"
+      },
+      "fieldConfig": {
+        "defaults": {
+          "color": {
+            "mode": "thresholds"
+          },
+          "custom": {
+            "align": "auto",
+            "displayMode": "auto",
+            "inspect": false
+          },
+          "mappings": [],
+          "thresholds": {
+            "mode": "absolute",
+            "steps": [
+              {
+                "color": "green",
+                "value": null
+              },
+              {
+                "color": "red",
+                "value": 80
+              }
+            ]
+          }
+        },
+        "overrides": [
+          {
+            "matcher": {
+              "id": "byName",
+              "options": "Value"
+            },
+            "properties": [
+              {
+                "id": "custom.hidden",
+                "value": true
+              }
+            ]
+          },
+          {
+            "matcher": {
+              "id": "byName",
+              "options": "Time"
+            },
+            "properties": [
+              {
+                "id": "custom.hidden",
+                "value": true
+              }
+            ]
+          }
+        ]
+      },
+      "gridPos": {
+        "h": 7,
+        "w": 24,
+        "x": 0,
+        "y": 36
+      },
+      "id": 129,
+      "options": {
+        "footer": {
+          "fields": "",
+          "reducer": [
+            "sum"
+          ],
+          "show": false
+        },
+        "showHeader": true,
+        "sortBy": [
+          {
+            "desc": true,
+            "displayName": "job"
+          }
+        ]
+      },
+      "pluginVersion": "9.2.6",
+      "targets": [
+        {
+          "datasource": {
+            "type": "prometheus",
+            "uid": "$ds"
+          },
+          "editorMode": "code",
+          "exemplar": false,
+          "expr": "sum(flag{is_set=\"true\", job=~\"$job\", instance=~\"$instance\"}) by(job, instance, name, value)",
+          "format": "table",
+          "instant": true,
+          "legendFormat": "__auto",
+          "range": false,
+          "refId": "A"
+        }
+      ],
+      "title": "Non-default flags",
+      "transformations": [
+        {
+          "id": "groupBy",
+          "options": {
+            "fields": {
+              "instance": {
+                "aggregations": []
+              },
+              "job": {
+                "aggregations": [],
+                "operation": "groupby"
+              },
+              "name": {
+                "aggregations": [],
+                "operation": "groupby"
+              },
+              "value": {
+                "aggregations": [],
+                "operation": "groupby"
+              }
+            }
+          }
+        }
+      ],
+      "type": "table"
+    }
   ],
   "targets": [
@ -3237,8 +3354,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -3254,7 +3370,7 @@
|
|||
"h": 7,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 37
|
||||
"y": 45
|
||||
},
|
||||
"id": 48,
|
||||
"options": {
|
||||
|
@ -3342,8 +3458,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -3359,7 +3474,7 @@
|
|||
"h": 7,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 37
|
||||
"y": 45
|
||||
},
|
||||
"id": 76,
|
||||
"options": {
|
||||
|
@ -3446,8 +3561,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -3463,7 +3577,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 44
|
||||
"y": 52
|
||||
},
|
||||
"id": 20,
|
||||
"options": {
|
||||
|
@ -3549,8 +3663,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -3566,7 +3679,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 44
|
||||
"y": 52
|
||||
},
|
||||
"id": 126,
|
||||
"options": {
|
||||
|
@ -3651,8 +3764,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -3668,7 +3780,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 52
|
||||
"y": 60
|
||||
},
|
||||
"id": 46,
|
||||
"options": {
|
||||
|
@ -3753,8 +3865,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -3770,7 +3881,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 52
|
||||
"y": 60
|
||||
},
|
||||
"id": 31,
|
||||
"options": {
|
||||
|
@ -3931,8 +4042,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -3948,7 +4058,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 38
|
||||
"y": 46
|
||||
},
|
||||
"id": 73,
|
||||
"links": [],
|
||||
|
@ -4048,8 +4158,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -4065,7 +4174,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 38
|
||||
"y": 46
|
||||
},
|
||||
"id": 77,
|
||||
"links": [],
|
||||
|
@ -4189,8 +4298,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -4206,7 +4314,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 39
|
||||
"y": 47
|
||||
},
|
||||
"id": 60,
|
||||
"options": {
|
||||
|
@ -4292,8 +4400,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -4309,7 +4416,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 39
|
||||
"y": 47
|
||||
},
|
||||
"id": 66,
|
||||
"options": {
|
||||
|
@ -4395,8 +4502,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -4412,7 +4518,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 47
|
||||
"y": 55
|
||||
},
|
||||
"id": 61,
|
||||
"options": {
|
||||
|
@ -4498,8 +4604,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -4515,7 +4620,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 47
|
||||
"y": 55
|
||||
},
|
||||
"id": 65,
|
||||
"options": {
|
||||
|
@ -4600,8 +4705,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "transparent",
|
||||
"value": null
|
||||
"color": "transparent"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -4617,7 +4721,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 55
|
||||
"y": 63
|
||||
},
|
||||
"id": 88,
|
||||
"options": {
|
||||
|
@ -4699,8 +4803,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "transparent",
|
||||
"value": null
|
||||
"color": "transparent"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -4716,7 +4819,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 55
|
||||
"y": 63
|
||||
},
|
||||
"id": 84,
|
||||
"options": {
|
||||
|
@ -4801,8 +4904,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "transparent",
|
||||
"value": null
|
||||
"color": "transparent"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -4818,7 +4920,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 63
|
||||
"y": 71
|
||||
},
|
||||
"id": 90,
|
||||
"options": {
|
||||
|
@ -4884,7 +4986,7 @@
|
|||
"h": 2,
|
||||
"w": 24,
|
||||
"x": 0,
|
||||
"y": 79
|
||||
"y": 87
|
||||
},
|
||||
"id": 115,
|
||||
"options": {
|
||||
|
@ -4948,8 +5050,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
}
|
||||
]
|
||||
},
|
||||
|
@ -4961,7 +5062,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 81
|
||||
"y": 89
|
||||
},
|
||||
"id": 119,
|
||||
"links": [],
|
||||
|
@ -5052,8 +5153,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
}
|
||||
]
|
||||
},
|
||||
|
@ -5065,7 +5165,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 81
|
||||
"y": 89
|
||||
},
|
||||
"id": 117,
|
||||
"links": [],
|
||||
|
@ -5154,8 +5254,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -5171,7 +5270,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 89
|
||||
"y": 97
|
||||
},
|
||||
"id": 125,
|
||||
"links": [
|
||||
|
@ -5262,8 +5361,7 @@
|
|||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
"color": "green"
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
|
@ -5292,7 +5390,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 89
|
||||
"y": 97
|
||||
},
|
||||
"id": 123,
|
||||
"options": {
|
||||
|
@ -5420,7 +5518,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 97
|
||||
"y": 105
|
||||
},
|
||||
"id": 121,
|
||||
"links": [],
|
||||
|
@ -5494,9 +5592,9 @@
|
|||
"list": [
|
||||
{
|
||||
"current": {
|
||||
"selected": false,
|
||||
"text": "VictoriaMetrics - cluster",
|
||||
"value": "VictoriaMetrics - cluster"
|
||||
"selected": true,
|
||||
"text": "VictoriaMetrics",
|
||||
"value": "VictoriaMetrics"
|
||||
},
|
||||
"hide": 0,
|
||||
"includeAll": false,
|
||||
|
|
@ -4,7 +4,7 @@ DOCKER_NAMESPACE := victoriametrics

ROOT_IMAGE ?= alpine:3.17.0
CERTS_IMAGE := alpine:3.17.0
GO_BUILDER_IMAGE := golang:1.19.3-alpine
GO_BUILDER_IMAGE := golang:1.19.4-alpine
BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr :/ __)-1
BASE_IMAGE := local/base:1.1.3-$(shell echo $(ROOT_IMAGE) | tr :/ __)-$(shell echo $(CERTS_IMAGE) | tr :/ __)
@ -54,7 +54,6 @@ groups:

        for: 15m
        labels:
          severity: warning
          show_at: dashboard
        annotations:
          dashboard: "http://localhost:3000/d/wNf0q_kZk?viewPanel=35&var-instance={{ $labels.instance }}"
          summary: "Too many errors served for path {{ $labels.path }} (instance {{ $labels.instance }})"
@ -17,7 +17,7 @@ services:

  grafana:
    container_name: grafana
    image: grafana/grafana:9.2.6
    image: grafana/grafana:9.2.7
    depends_on:
      - "vmselect"
    ports:

@ -40,7 +40,7 @@ services:

    restart: always
  grafana:
    container_name: grafana
    image: grafana/grafana:9.2.6
    image: grafana/grafana:9.2.7
    depends_on:
      - "victoriametrics"
    ports:
@ -55,9 +55,10 @@ See also [case studies](https://docs.victoriametrics.com/CaseStudies.html).

* [Install and configure VictoriaMetrics on Debian](https://www.vultr.com/docs/install-and-configure-victoriametrics-on-debian)
* [Superset BI with Victoria Metrics](https://cer6erus.medium.com/superset-bi-with-victoria-metrics-a109d3e91bc6)
* [VictoriaMetrics Source Code Analysis - Bloom filter](https://www.sobyte.net/post/2022-05/victoriametrics-bloomfilter/)
* [How we tried using VictoriaMetrics and Thanos at the same time](https://habr.com/ru/company/sravni/blog/672908/)
* [How we tried using VictoriaMetrics and Thanos at the same time](https://medium.com/@uburro/how-we-tried-using-victoriametrics-and-thanos-at-the-same-time-48803d2a638b)
* [Prometheus, Grafana, and Kubernetes, Oh My!](https://www.groundcover.com/blog/prometheus-grafana-kubernetes)
* [Explaining modern server monitoring stacks for self-hosting](https://dataswamp.org/~solene/2022-09-11-exploring-monitoring-stacks.html)
* [How do We Keep Metrics for a Long Time in VictoriaMetrics](https://www.youtube.com/watch?v=SGZjY7xgDwE)

## Our articles

@ -110,7 +111,7 @@ See also [case studies](https://docs.victoriametrics.com/CaseStudies.html).

* [How ClickHouse inspired us to build a high performance time series database](https://www.youtube.com/watch?v=p9qjb_yoBro). See also [slides](https://docs.google.com/presentation/d/1SdFrwsyR-HMXfbzrY8xfDZH_Dg6E7E5NJ84tQozMn3w/edit?usp=sharing)
* [OSA Con 2022: Specifics of data analysis in Time Series Databases](https://www.youtube.com/watch?v=_zORxrgLtec)
* [OSMC 2022. VictoriaMetrics: scaling to 100 million metrics per second](https://www.slideshare.net/NETWAYS/osmc-2022-victoriametrics-scaling-to-100-million-metrics-per-second-by-aliaksandr-valialkin)
* [OSMC 2022. VictoriaMetrics: scaling to 100 million metrics per second](https://www.youtube.com/watch?v=xfed9_Q0_qU). See also [slides](https://www.slideshare.net/NETWAYS/osmc-2022-victoriametrics-scaling-to-100-million-metrics-per-second-by-aliaksandr-valialkin)
* [CNCF Paris Meetup 2022-09-15 - VictoriaMetrics - The cost of scale in Prometheus ecosystem](https://www.youtube.com/watch?v=gcZYHpri2Hw). See also [slides](https://docs.google.com/presentation/d/1jhZuKnAXi15M-mdBP5a4ZAiyrMeHhYmzO8xcZ6pMyLc/edit?usp=sharing)
* [Comparing Thanos to VictoriaMetrics cluster](https://faun.pub/comparing-thanos-to-victoriametrics-cluster-b193bea1683)
* [Evaluating performance and correctness: VictoriaMetrics response](https://valyala.medium.com/evaluating-performance-and-correctness-victoriametrics-response-e27315627e87)
@ -15,10 +15,19 @@ The following tip changes can be tested by building VictoriaMetrics components f

## tip


## [v1.85.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.85.0)

Released at 11-12-2022

**Update note 1:** this release drops support for direct upgrade from VictoriaMetrics versions prior to [v1.28.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.28.0). Please upgrade to `v1.84.0`, wait until the `finished round 2 of background conversion` line is emitted to the log by single-node VictoriaMetrics or by `vmstorage`, and then upgrade to newer releases.

**Update note 2:** this release splits `type="indexdb"` metrics into `type="indexdb/inmemory"` and `type="indexdb/file"` metrics. This may break old dashboards and alerting rules, which contain a [label filter](https://docs.victoriametrics.com/keyConcepts.html#filtering) on `{type="indexdb"}`. Such a label filter must be substituted with `{type=~"indexdb.*"}`, so it matches `indexdb` from the previous releases and `indexdb/inmemory` + `indexdb/file` from new releases. It is recommended to upgrade to the latest available dashboards and alerting rules mentioned in [these docs](https://docs.victoriametrics.com/#monitoring), since they already contain fixed label filters.
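
For example, a dashboard query over `vm_rows` (one of the metrics carrying the `type` label) can be migrated like this:

```
# before: matches only releases prior to v1.85.0
sum(vm_rows{type="indexdb"})

# after: matches the old indexdb value as well as the new indexdb/inmemory and indexdb/file values
sum(vm_rows{type=~"indexdb.*"})
```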

**Update note 3:** this release deprecates the `relabel_debug` and `metric_relabel_debug` config options in [scrape_configs](https://docs.victoriametrics.com/sd_configs.html#scrape_configs). The `-relabelDebug`, `-remoteWrite.relabelDebug` and `-remoteWrite.urlRelabelDebug` command-line options are also deprecated. Use the more powerful target-level and metric-level relabel debugging instead, as documented [here](https://docs.victoriametrics.com/vmagent.html#relabel-debug).

* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): provide enhanced target-level and metric-level relabel debugging. See [these docs](https://docs.victoriametrics.com/vmagent.html#relabel-debug) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3407).
* FEATURE: leave a sample with the biggest value for identical timestamps per each `-dedup.minScrapeInterval` discrete interval when the [deduplication](https://docs.victoriametrics.com/#deduplication) is enabled. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3333).
* FEATURE: add `-inmemoryDataFlushInterval` command-line flag, which can be used for controlling the frequency of in-memory data flushes to disk. The data flush frequency can be reduced when VictoriaMetrics stores data to a low-end flash device with a limited number of write cycles (for example, on Raspberry PI). See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3337).
* FEATURE: expose additional metrics for `indexdb` and `storage` parts stored in memory and for `indexdb` parts stored in files (see [storage docs](https://docs.victoriametrics.com/#storage) for technical details):
  * `vm_active_merges{type="storage/inmemory"}` - active merges for in-memory `storage` parts
@ -47,21 +56,32 @@ The following tip changes can be tested by building VictoriaMetrics components f

  * `vm_rows{type="indexdb/file"}` - the total number of file-based `indexdb` rows
* FEATURE: [DataDog parser](https://docs.victoriametrics.com/#how-to-send-data-from-datadog-agent): add `device` tag when the `device` field is present in the `series` object of the input request. Thanks to @PerGon for the provided [pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3431).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): improve [service discovery](https://docs.victoriametrics.com/sd_configs.html) performance when discovering a big number of targets (10K and more).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): allow using the `series_limit` option for [limiting the number of series a single scrape target generates](https://docs.victoriametrics.com/vmagent.html#cardinality-limiter) in [stream parsing mode](https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode). See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3458).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): allow using the `sample_limit` option for limiting the number of metrics a single scrape target can expose in every response sent over [stream parsing mode](https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add the `exported_` prefix to metric names exported by scrape targets if these metric names clash with [automatically generated metrics](https://docs.victoriametrics.com/vmagent.html#automatically-generated-metrics) such as `up`, `scrape_samples_scraped`, etc. This prevents the corruption of automatically generated metrics. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3406).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): make the `host` label optional in [DataDog data ingestion protocol](https://docs.victoriametrics.com/#how-to-send-data-from-datadog-agent). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3432).
* FEATURE: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): improve the error message when the requested path cannot be properly parsed, so users can identify the issue and properly fix the path. Now the error message links to [url format docs](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3402).
* FEATURE: [VictoriaMetrics enterprise cluster](https://docs.victoriametrics.com/enterprise.html): add `-storageNode.discoveryInterval` command-line flag to `vmselect` and `vminsert` to control the load on DNS servers when [automatic discovery of vmstorage nodes](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery) is enabled. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3417).
* FEATURE: [VictoriaMetrics enterprise cluster](https://docs.victoriametrics.com/enterprise.html): allow reading and updating the list of `vmstorage` nodes at `vmselect` and `vminsert` nodes via a file. See [automatic discovery of vmstorage](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery) for details.
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): reduce memory and CPU usage by up to 50% on setups with thousands of recording/alerting groups. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3464).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): add `-remoteWrite.sendTimeout` command-line flag, which allows configuring the timeout for sending data to `-remoteWrite.url`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3408).
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): add the ability to migrate data between VictoriaMetrics clusters with automatic tenants discovery. See [these docs](https://docs.victoriametrics.com/vmctl.html#cluster-to-cluster-migration-mode) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2930).
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): add the ability to copy data from sources via Prometheus `remote_read` protocol. See [these docs](https://docs.victoriametrics.com/vmctl.html#migrating-data-by-remote-read-protocol). The related issues: [one](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3132) and [two](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1101).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): allow changing timezones for the requested data. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3075).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): provide a fast path for hiding results for all the queries except the given one by clicking the `eye` icon with the `ctrl` key pressed. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3446).
* FEATURE: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): add `range_trim_spikes(phi, q)` function for trimming `phi` percent of the largest spikes per each time series returned by `q`. See [these docs](https://docs.victoriametrics.com/MetricsQL.html#range_trim_spikes).
* FEATURE: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): allow passing the `inf` arg into [limitk](https://docs.victoriametrics.com/MetricsQL.html#limitk), [topk](https://docs.victoriametrics.com/MetricsQL.html#topk), [bottomk](https://docs.victoriametrics.com/MetricsQL.html#bottomk) and other functions, which accept a numeric arg limiting the number of output time series. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3461).
* FEATURE: [vmgateway](https://docs.victoriametrics.com/vmgateway.html): add support for JWT token signature verification. See [these docs](https://docs.victoriametrics.com/vmgateway.html#jwt-signature-verification) for details.
* FEATURE: put the version of VictoriaMetrics in the first message of a [query trace](https://docs.victoriametrics.com/#query-tracing). This should simplify debugging.

* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): fix the `The request did not have a subscription or a valid tenant level resource provider` error when discovering Azure targets with [azure_sd_configs](https://docs.victoriametrics.com/sd_configs.html#azure_sd_configs). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3247).
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly pass HTTP headers during the alert state restore procedure. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3418).
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly specify the rule evaluation step during the [replay mode](https://docs.victoriametrics.com/vmalert.html#rules-backfilling). The `step` value was previously overridden by the `-datasource.queryStep` command-line flag.
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly return the error message from remote-write failures. Before, the error was ignored and only `vmalert_remotewrite_errors_total` was incremented.
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): fix sticky tooltip sizing, which could prevent closing the tooltip. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3427).
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): properly put multi-line queries in the url, so it can be copy-n-pasted and opened without issues in a new browser tab. Previously the url for a multi-line query couldn't be opened. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3444).
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): correctly handle `up` and `down` keypresses when editing multi-line queries. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3445).


## [v1.84.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.84.0)

Released at 25-11-2022
@ -316,6 +336,22 @@ Released at 08-08-2022

* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): properly show the date picker at the `Table` tab. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2874).
* BUGFIX: properly generate http redirects if the `-http.pathPrefix` command-line flag is set. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2918).

## [v1.79.6](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.79.6)

Released at 11-12-2022

**v1.79.x is a line of LTS releases (i.e. long-term support). It contains important up-to-date bugfixes.
The v1.79.x line will be supported for at least 12 months since the [v1.79.0](https://docs.victoriametrics.com/CHANGELOG.html#v1790) release**

* SECURITY: update Go builder from v1.19.3 to v1.19.4. See [the changelog](https://github.com/golang/go/issues?q=milestone%3AGo1.19.4+label%3ACherryPickApproved).
* SECURITY: update base Docker image for VictoriaMetrics components from Alpine 3.16.2 to Alpine v3.17.0. See [the changelog](https://alpinelinux.org/releases/).

* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): fix the `The request did not have a subscription or a valid tenant level resource provider` error when discovering Azure targets with [azure_sd_configs](https://docs.victoriametrics.com/sd_configs.html#azure_sd_configs). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3247).
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): properly discover GCE zones when the `filter` option is set at [gce_sd_configs](https://docs.victoriametrics.com/sd_configs.html#gce_sd_configs). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3202).
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly specify the rule evaluation step during the [replay mode](https://docs.victoriametrics.com/vmalert.html#rules-backfilling). The `step` value was previously overridden by the `-datasource.queryStep` command-line flag.
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly return the error message from remote-write failures. Before, the error was ignored and only `vmalert_remotewrite_errors_total` was incremented.
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly return an empty result from [limit_offset](https://docs.victoriametrics.com/MetricsQL.html#limit_offset) if the `offset` arg exceeds the number of inner time series. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3312).

## [v1.79.5](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.79.5)

Released at 10-11-2022
@ -208,15 +208,27 @@ Additionally, all the VictoriaMetrics components allow setting flag values via e

## Automatic vmstorage discovery

[Enterprise version of VictoriaMetrics](https://docs.victoriametrics.com/enterprise.html) supports [dns+srv](https://en.wikipedia.org/wiki/SRV_record) names
at `-storageNode` command-line flag passed to `vminsert` and `vmselect`. In this case the provided `dns+srv` names are resolved
into tcp addresses of `vmstorage` nodes to connect to. The list of discovered `vmstorage` nodes is automatically updated at `vminsert` and `vmselect`
when it changes behind the corresponding `dns+srv` names. The `dns+srv` names must be prefixed with `dns+srv:` prefix.
`vminsert` and `vmselect` components in [enterprise version of VictoriaMetrics](https://docs.victoriametrics.com/enterprise.html) support
the following approaches for automatic discovery of `vmstorage` nodes:

It is possible to pass multiple `dns+srv` names to the `-storageNode` command-line flag. In this case all these names are resolved to tcp addresses of `vmstorage` nodes to connect to.
For example, `-storageNode='dns+srv:vmstorage-hot' -storageNode='dns+srv:vmstorage-cold'` .
- file-based discovery - put the list of `vmstorage` nodes into a file - one node address per each line - and then pass `-storageNode=file:/path/to/file-with-vmstorage-list`
  to `vminsert` and `vmselect`. It is possible to read the list of vmstorage nodes from http or https urls.
  For example, `-storageNode=file:http://some-host/vmstorage-list` would read the list of storage nodes
  from `http://some-host/vmstorage-list`.
  The list of discovered `vmstorage` nodes is automatically updated when the file contents change.
  The update frequency can be controlled with the `-storageNode.discoveryInterval` command-line flag.

It is OK to pass regular static `vmstorage` addresses together with `dns+srv` addresses at `-storageNode`. For example,
- [dns+srv](https://en.wikipedia.org/wiki/SRV_record) - pass a `dns+srv:some-name` value to the `-storageNode` command-line flag.
  In this case the provided `dns+srv` names are resolved into tcp addresses of `vmstorage` nodes.
  The list of discovered `vmstorage` nodes is automatically updated at `vminsert` and `vmselect`
  when it changes behind the corresponding `dns+srv` names.
  The update frequency can be controlled with the `-storageNode.discoveryInterval` command-line flag.

It is possible to pass multiple `file` and `dns+srv` names to the `-storageNode` command-line flag. In this case all these names
are resolved to tcp addresses of `vmstorage` nodes to connect to.
For example, `-storageNode=file:/path/to/local-vmstorage-list -storageNode='dns+srv:vmstorage-hot' -storageNode='dns+srv:vmstorage-cold'`.

It is OK to pass regular static `vmstorage` addresses together with `file` and `dns+srv` addresses at `-storageNode`. For example,
`-storageNode=vmstorage1,vmstorage2 -storageNode='dns+srv:vmstorage-autodiscovery'`.

The discovered addresses can be filtered with the optional `-storageNode.filter` command-line flag, which can contain an arbitrary regular expression filter.
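
For illustration, here is a hypothetical `vmselect` invocation combining the approaches above (the file path, SRV name and filter are made up):

```
# /etc/victoriametrics/vmstorage-list contains one vmstorage address per line,
# e.g. vmstorage-1:8401 and vmstorage-2:8401
/path/to/vmselect \
    -storageNode=file:/etc/victoriametrics/vmstorage-list \
    -storageNode='dns+srv:vmstorage-autodiscovery' \
    -storageNode.discoveryInterval=10s \
    -storageNode.filter='^vmstorage-'
```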

@ -311,6 +323,7 @@ See [troubleshooting docs](https://docs.victoriametrics.com/Troubleshooting.html

- `api/v1/status/active_queries` - for currently executed active queries. Note that every `vmselect` maintains an independent list of active queries,
  which is returned in the response.
- `api/v1/status/top_queries` - for listing the most frequently executed queries and the queries taking the most duration.
- `metric-relabel-debug` - for debugging [relabeling rules](https://docs.victoriametrics.com/relabeling.html).

- URLs for [Graphite Metrics API](https://graphite-api.readthedocs.io/en/latest/api.html#the-metrics-api): `http://<vmselect>:8481/select/<accountID>/graphite/<suffix>`, where:
  - `<accountID>` is an arbitrary number identifying data namespace for query (aka tenant)
@ -858,8 +871,6 @@ Below is the output for `/path/to/vminsert -help`:

     Supports an array of values separated by comma or specified via multiple flags.
  -relabelConfig string
     Optional path to a file with relabeling rules, which are applied to all the ingested metrics. The path can point either to local file or to http url. See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal
  -relabelDebug
     Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs
  -replicationFactor int
     Replication factor for the ingested data, i.e. how many copies to make among distinct -storageNode instances. Note that vmselect must run with -dedup.minScrapeInterval=1ms for data de-duplication when replicationFactor is greater than 1. Higher values for -dedup.minScrapeInterval at vmselect is OK (default 1)
  -rpc.disableCompression

@ -869,6 +880,8 @@ Below is the output for `/path/to/vminsert -help`:

  -storageNode array
     Comma-separated addresses of vmstorage nodes; usage: -storageNode=vmstorage-host1,...,vmstorage-hostN . Enterprise version of VictoriaMetrics supports automatic discovery of vmstorage addresses via dns+srv records. For example, -storageNode=dns+srv:vmstorage.addrs . See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery
     Supports an array of values separated by comma or specified via multiple flags.
  -storageNode.discoveryInterval duration
     Interval for refreshing -storageNode list behind dns+srv records. The minimum supported interval is 1s. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery . This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 2s)
  -storageNode.filter string
     An optional regexp filter for discovered -storageNode addresses according to https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery. Discovered addresses matching the filter are retained, while other addresses are ignored. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html
  -tls

@ -1084,6 +1097,8 @@ Below is the output for `/path/to/vmselect -help`:

  -storageNode array
     Comma-separated addresses of vmstorage nodes; usage: -storageNode=vmstorage-host1,...,vmstorage-hostN . Enterprise version of VictoriaMetrics supports automatic discovery of vmstorage addresses via dns+srv records. For example, -storageNode=dns+srv:vmstorage.addrs . See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery
     Supports an array of values separated by comma or specified via multiple flags.
  -storageNode.discoveryInterval duration
     Interval for refreshing -storageNode list behind dns+srv records. The minimum supported interval is 1s. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery . This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html (default 2s)
  -storageNode.filter string
     An optional regexp filter for discovered -storageNode addresses according to https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#automatic-vmstorage-discovery. Discovered addresses matching the filter are retained, while other addresses are ignored. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html
  -tls
@ -788,7 +788,7 @@ to your needs or when testing bugfixes.

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make victoria-metrics` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds `victoria-metrics` binary and puts it into the `bin` folder.

@ -804,7 +804,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

### Development ARM build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make victoria-metrics-linux-arm` or `make victoria-metrics-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds `victoria-metrics-linux-arm` or `victoria-metrics-linux-arm64` binary respectively and puts it into the `bin` folder.

@ -818,7 +818,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

`Pure Go` mode builds only Go code without [cgo](https://golang.org/cmd/cgo/) dependencies.

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make victoria-metrics-pure` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds `victoria-metrics-pure` binary and puts it into the `bin` folder.

@ -1246,7 +1246,11 @@ Example contents for `-relabelConfig` file:

  regex: true
```

VictoriaMetrics provides additional relabeling features such as Graphite-style relabeling. See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) for more details.
VictoriaMetrics provides additional relabeling features such as Graphite-style relabeling.
See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) for more details.

The relabeling can be debugged at the `http://victoriametrics:8428/metric-relabel-debug` page.
See [these docs](https://docs.victoriametrics.com/vmagent.html#relabel-debug) for more details.


## Federation
@ -1352,7 +1356,12 @@ with the enabled de-duplication. See [this section](#deduplication) for details.

## Deduplication

VictoriaMetrics leaves a single raw sample with the biggest timestamp per each `-dedup.minScrapeInterval` discrete interval if `-dedup.minScrapeInterval` is set to positive duration. For example, `-dedup.minScrapeInterval=60s` would leave a single raw sample with the biggest timestamp per each discrete 60s interval. If multiple raw samples have the same biggest timestamp on the given `-dedup.minScrapeInterval` discrete interval, then an arbitrary sample out of these samples is left. This aligns with the [staleness rules in Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness).
VictoriaMetrics leaves a single raw sample with the biggest timestamp per each `-dedup.minScrapeInterval` discrete interval
if `-dedup.minScrapeInterval` is set to a positive duration. For example, `-dedup.minScrapeInterval=60s` would leave a single
raw sample with the biggest timestamp per each discrete 60s interval.
This aligns with the [staleness rules in Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness).

If multiple raw samples have the same biggest timestamp on the given `-dedup.minScrapeInterval` discrete interval, then the sample with the biggest value is left.

The `-dedup.minScrapeInterval=D` is equivalent to `-downsampling.period=0s:D` if [downsampling](#downsampling) is enabled. So it is safe to use deduplication and downsampling simultaneously.
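
A hypothetical illustration of both rules with `-dedup.minScrapeInterval=60s` (timestamps and values are made up):

```
# raw samples falling into the same discrete 60s interval:
#   12:00:02  value=3
#   12:00:31  value=9
#   12:00:58  value=5   <- kept: the biggest timestamp in the interval
#
# two raw samples sharing the biggest timestamp:
#   12:00:58  value=5
#   12:00:58  value=8   <- kept: the biggest value among identical timestamps
```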

@ -2299,8 +2308,6 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li

     Supports an array of values separated by comma or specified via multiple flags.
  -relabelConfig string
     Optional path to a file with relabeling rules, which are applied to all the ingested metrics. The path can point either to local file or to http url. See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal
  -relabelDebug
     Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs
  -retentionFilter array
     Retention filter in the format 'filter:retention'. For example, '{env="dev"}:3d' configures the retention for time series with env="dev" label to 3 days. See https://docs.victoriametrics.com/#retention-filters for details. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html
     Supports an array of values separated by comma or specified via multiple flags.
@ -791,7 +791,7 @@ to your needs or when testing bugfixes.

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make victoria-metrics` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds `victoria-metrics` binary and puts it into the `bin` folder.

@ -807,7 +807,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

### Development ARM build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make victoria-metrics-linux-arm` or `make victoria-metrics-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds `victoria-metrics-linux-arm` or `victoria-metrics-linux-arm64` binary respectively and puts it into the `bin` folder.

@ -821,7 +821,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

`Pure Go` mode builds only Go code without [cgo](https://golang.org/cmd/cgo/) dependencies.

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make victoria-metrics-pure` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds `victoria-metrics-pure` binary and puts it into the `bin` folder.

@ -1249,7 +1249,11 @@ Example contents for `-relabelConfig` file:

  regex: true
```

VictoriaMetrics provides additional relabeling features such as Graphite-style relabeling. See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) for more details.
VictoriaMetrics provides additional relabeling features such as Graphite-style relabeling.
See [these docs](https://docs.victoriametrics.com/vmagent.html#relabeling) for more details.

The relabeling can be debugged at the `http://victoriametrics:8428/metric-relabel-debug` page.
See [these docs](https://docs.victoriametrics.com/vmagent.html#relabel-debug) for more details.


## Federation

@ -1355,7 +1359,12 @@ with the enabled de-duplication. See [this section](#deduplication) for details.

## Deduplication

VictoriaMetrics leaves a single raw sample with the biggest timestamp per each `-dedup.minScrapeInterval` discrete interval if `-dedup.minScrapeInterval` is set to positive duration. For example, `-dedup.minScrapeInterval=60s` would leave a single raw sample with the biggest timestamp per each discrete 60s interval. If multiple raw samples have the same biggest timestamp on the given `-dedup.minScrapeInterval` discrete interval, then an arbitrary sample out of these samples is left. This aligns with the [staleness rules in Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness).
VictoriaMetrics leaves a single raw sample with the biggest timestamp per each `-dedup.minScrapeInterval` discrete interval
if `-dedup.minScrapeInterval` is set to a positive duration. For example, `-dedup.minScrapeInterval=60s` would leave a single
raw sample with the biggest timestamp per each discrete 60s interval.
This aligns with the [staleness rules in Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness).

If multiple raw samples have the same biggest timestamp on the given `-dedup.minScrapeInterval` discrete interval, then the sample with the biggest value is left.

The `-dedup.minScrapeInterval=D` is equivalent to `-downsampling.period=0s:D` if [downsampling](#downsampling) is enabled. So it is safe to use deduplication and downsampling simultaneously.

@ -2302,8 +2311,6 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li

     Supports an array of values separated by comma or specified via multiple flags.
  -relabelConfig string
     Optional path to a file with relabeling rules, which are applied to all the ingested metrics. The path can point either to local file or to http url. See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal
  -relabelDebug
     Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs
  -retentionFilter array
     Retention filter in the format 'filter:retention'. For example, '{env="dev"}:3d' configures the retention for time series with env="dev" label to 3 days. See https://docs.victoriametrics.com/#retention-filters for details. This flag is available only in VictoriaMetrics enterprise. See https://docs.victoriametrics.com/enterprise.html
     Supports an array of values separated by comma or specified via multiple flags.
@ -1371,8 +1371,6 @@ VMScrapeParams defines scrape target configuration that compatible only with Vic

| Field | Description | Scheme | Required |
| ----- | ----------- | ------ | -------- |
| relabel_debug | | *bool | false |
| metric_relabel_debug | | *bool | false |
| disable_compression | | *bool | false |
| disable_keep_alive | | *bool | false |
| no_stale_markers | | *bool | false |
@ -446,6 +446,8 @@ See also [useful tips for target relabeling](#useful-tips-for-target-relabeling)

## Useful tips for target relabeling

* Target relabeling can be debugged by clicking the `debug` link for the needed target on the `http://vmagent:8429/targets`
  or on the `http://vmagent:8429/service-discovery` pages. See [these docs](https://docs.victoriametrics.com/vmagent.html#relabel-debug).
* Every discovered target contains a set of meta-labels, which start with the `__meta_` prefix.
  The specific sets of labels per each supported service discovery option are listed
  [here](https://docs.victoriametrics.com/sd_configs.html#prometheus-service-discovery).

@ -462,6 +464,8 @@ See also [useful tips for target relabeling](#useful-tips-for-target-relabeling)

## Useful tips for metric relabeling

* Metric relabeling can be debugged at the `http://vmagent:8429/metric-relabel-debug` page.
  See [these docs](https://docs.victoriametrics.com/vmagent.html#relabel-debug).
* All the labels, which start with the `__` prefix, are automatically removed from metrics after the relabeling.
  So it is common practice to store temporary labels with names starting with `__` during metrics relabeling.
* All the target-level labels are automatically added to all the metrics scraped from targets,
@ -1181,14 +1181,6 @@ scrape_configs:

  # By default the limit is disabled.
  # sample_limit: <int>

  # relabel_debug enables debugging for relabel_configs if set to true.
  # See https://docs.victoriametrics.com/vmagent.html#relabeling
  # relabel_debug: <boolean>

  # metric_relabel_debug enables debugging for metric_relabel_configs if set to true.
  # See https://docs.victoriametrics.com/vmagent.html#relabeling
  # metric_relabel_debug: <boolean>

  # disable_compression allows disabling HTTP compression for responses received from scrape targets.
  # By default scrape targets are queried with `Accept-Encoding: gzip` http request header,
  # so targets could send compressed responses in order to save network bandwidth.
@ -608,7 +608,7 @@ Additional information:

## TCP and UDP

###How to send data from OpenTSDB-compatible agents to VictoriaMetrics
### How to send data from OpenTSDB-compatible agents to VictoriaMetrics

Turned off by default. Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command-line flag.
*If run from docker, '-opentsdbListenAddr' port should be exposed*
@ -249,8 +249,6 @@ scrape_configs:

* `scrape_align_interval: duration` for aligning scrapes to the given interval instead of using a random offset
  in the range `[0 ... scrape_interval]` for scraping each target. The random offset helps spread scrapes evenly in time.
* `scrape_offset: duration` for specifying the exact offset for scraping instead of using a random offset in the range `[0 ... scrape_interval]`.
* `relabel_debug: true` for enabling debug logging during relabeling of the discovered targets. See [these docs](#relabeling).
* `metric_relabel_debug: true` for enabling debug logging during relabeling of the scraped metrics. See [these docs](#relabeling).

See [scrape_configs docs](https://docs.victoriametrics.com/sd_configs.html#scrape_configs) for more details on all the supported options.
@ -423,26 +421,25 @@ with [additional enhancements](#relabeling-enhancements). The relabeling can be

  This relabeling is used for modifying labels in discovered targets and for dropping unneeded targets.
  See [relabeling cookbook](https://docs.victoriametrics.com/relabeling.html) for details.

  This relabeling can be debugged by passing `relabel_debug: true` option to the corresponding `scrape_config` section.
  In this case `vmagent` logs target labels before and after the relabeling and then drops the logged target.
  This relabeling can be debugged by clicking the `debug` link at the corresponding target on the `http://vmagent:8429/targets` page
  or on the `http://vmagent:8429/service-discovery` page. See [these docs](#relabel-debug) for details.

* At the `scrape_config -> metric_relabel_configs` section in `-promscrape.config` file.
  This relabeling is used for modifying labels in scraped metrics and for dropping unneeded metrics.
  See [relabeling cookbook](https://docs.victoriametrics.com/relabeling.html) for details.

  This relabeling can be debugged by passing `metric_relabel_debug: true` option to the corresponding `scrape_config` section.
  In this case `vmagent` logs metrics before and after the relabeling and then drops the logged metrics.
  This relabeling can be debugged via the `http://vmagent:8429/metric-relabel-debug` page. See [these docs](#relabel-debug) for details.

* At the `-remoteWrite.relabelConfig` file. This relabeling is used for modifying labels for all the collected metrics
  (inluding [metrics obtained via push-based protocols](#how-to-push-data-to-vmagent)) and for dropping unneeded metrics
  (including [metrics obtained via push-based protocols](#how-to-push-data-to-vmagent)) and for dropping unneeded metrics
  before sending them to all the configured `-remoteWrite.url` addresses.
  This relabeling can be debugged by passing `-remoteWrite.relabelDebug` command-line option to `vmagent`.
  In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to remote storage.

  This relabeling can be debugged via the `http://vmagent:8429/metric-relabel-debug` page. See [these docs](#relabel-debug) for details.

* At the `-remoteWrite.urlRelabelConfig` files. This relabeling is used for modifying labels for metrics
  and for dropping unneeded metrics before sending them to a particular `-remoteWrite.url`.
  This relabeling can be debugged by passing `-remoteWrite.urlRelabelDebug` command-line options to `vmagent`.
  In this case `vmagent` logs metrics before and after the relabeling and then drops all the logged metrics instead of sending them to the corresponding `-remoteWrite.url`.

  This relabeling can be debugged via the `http://vmagent:8429/metric-relabel-debug` page. See [these docs](#relabel-debug) for details.

All the files with relabeling configs can contain special placeholders in the form `%{ENV_VAR}`,
which are replaced by the corresponding environment variable values.
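
As a hypothetical sketch, a relabeling config file could use such a placeholder to stamp every metric with a label taken from the environment (the `TENANT` variable and the `tenant` label are made up):

```yaml
# %{TENANT} is substituted with the value of the TENANT environment variable when the file is loaded
- action: replace
  target_label: tenant
  replacement: "%{TENANT}"
```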

@ -457,9 +454,6 @@ The following articles contain useful information about Prometheus relabeling:

* [Extracting labels from legacy metric names](https://www.robustperception.io/extracting-labels-from-legacy-metric-names)
* [relabel_configs vs metric_relabel_configs](https://www.robustperception.io/relabel_configs-vs-metric_relabel_configs)

[This relabeler playground](https://relabeler.promlabs.com/) can help debugging issues related to relabeling.


## Relabeling enhancements

`vmagent` provides the following enhancements on top of Prometheus-compatible relabeling:
@ -601,6 +595,28 @@ Important notes about `action: graphite` relabeling rules:

The `action: graphite` relabeling rules are easier to write and maintain than `action: replace` for labels extraction from Graphite-style metric names.
Additionally, the `action: graphite` relabeling rules usually work much faster than the equivalent `action: replace` rules.
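
For reference, an `action: graphite` rule has roughly the following shape (the metric layout and label names are illustrative):

```yaml
# extracts labels from Graphite-style names such as host1.dc2.cpu_usage
- action: graphite
  match: "*.*.cpu_usage"
  labels:
    __name__: "cpu_usage"
    host: "$1"
    dc: "$2"
```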

## Relabel debug

`vmagent` and [single-node VictoriaMetrics](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html)
provide the following tools for debugging target-level and metric-level relabeling:

- Target-level relabeling (e.g. the `relabel_configs` section at [scrape_configs](https://docs.victoriametrics.com/sd_configs.html#scrape_configs))
  can be debugged by navigating to the `http://vmagent:8429/targets` page (the `http://victoriametrics:8428/targets` page for single-node VictoriaMetrics)
  and clicking the `debug` link at the target, which must be debugged.
  The opened page will show step-by-step results for the actual relabeling rules applied to the target labels.

  The `http://vmagent:8429/targets` page shows only active targets. If you need to understand why some target
  is dropped during the relabeling, then navigate to the `http://vmagent:8429/service-discovery` page
  (`http://victoriametrics:8428/service-discovery` for single-node VictoriaMetrics), find the dropped target
  and click the `debug` link there. The opened page will show step-by-step results for the actual relabeling rules,
  which result in dropping the target.

- Metric-level relabeling (e.g. the `metric_relabel_configs` section at [scrape_configs](https://docs.victoriametrics.com/sd_configs.html#scrape_configs)
  and all the relabeling, which can be set up via `-relabelConfig`, `-remoteWrite.relabelConfig` and `-remoteWrite.urlRelabelConfig`
  command-line flags) can be debugged by navigating to the `http://vmagent:8429/metric-relabel-debug` page
  (the `http://victoriametrics:8428/metric-relabel-debug` page for single-node VictoriaMetrics)
  and submitting there relabeling rules together with the metric to be relabeled.
  The page will show step-by-step results for the entered relabeling rules executed against the entered metric.
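
As a hypothetical example, rules such as the following could be submitted on that page together with a metric such as `http_requests_total{path="/api/series"} 42` to see how an `endpoint` label gets extracted step by step (the rule contents are made up):

```yaml
- action: replace
  source_labels: [path]
  regex: "/api/(.+)"
  target_label: endpoint
  replacement: "$1"
```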

## Prometheus staleness markers


@ -658,8 +674,9 @@ scrape_configs:

      'match[]': ['{__name__!=""}']
```

Note that `sample_limit` and `series_limit` [scrape_config options](https://docs.victoriametrics.com/sd_configs.html#scrape_configs)
cannot be used in stream parsing mode because the parsed data is pushed to remote storage as soon as it is parsed.
Note that `vmagent` in stream parsing mode stores up to `sample_limit` samples to the configured `-remoteWrite.url`
instead of dropping all the samples read from the target, because the parsed data is sent to the remote storage
as soon as it is parsed in stream parsing mode.
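
A minimal sketch of a stream-parsing scrape config with such a limit (the job name and target address are made up):

```yaml
scrape_configs:
- job_name: big-federate
  stream_parse: true    # parse the scrape response in chunks instead of buffering it fully in memory
  sample_limit: 100000  # already parsed samples below the limit are still sent to -remoteWrite.url
  static_configs:
  - targets: ["big-target-host:8428"]
```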

## Scraping big number of targets

@ -748,8 +765,8 @@ By default `vmagent` doesn't limit the number of time series each scrape target
|
|||
|
||||
* Via `-promscrape.seriesLimitPerTarget` command-line option. This limit is applied individually
|
||||
to all the scrape targets defined in the file pointed by `-promscrape.config`.
|
||||
* Via `series_limit` config option at `scrape_config` section. This limit is applied individually
|
||||
to all the scrape targets defined in the given `scrape_config`.
|
||||
* Via `series_limit` config option at [scrape_config](https://docs.victoriametrics.com/sd_configs.html#scrape_configs) section.
|
||||
This limit is applied individually to all the scrape targets defined in the given `scrape_config`.
|
||||
* Via `__series_limit__` label, which can be set with [relabeling](#relabeling) at `relabel_configs` section.
|
||||
This limit is applied to the corresponding scrape targets. Typical use case: to set the limit
|
||||
via [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) for targets,
|
||||
|
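Here is a minimal sketch combining the per-job and per-target limits described above (the job name, target and limit values are hypothetical):

```yaml
scrape_configs:
  - job_name: app
    series_limit: 5000    # applied to every target in this job
    static_configs:
      - targets: ["app:8080"]
    relabel_configs:
      # Override the per-target limit via the __series_limit__ label:
      - target_label: __series_limit__
        replacement: "1000"
```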
@ -1035,7 +1052,7 @@ It may be needed to build `vmagent` from source code when developing or testing

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make vmagent` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds the `vmagent` binary and puts it into the `bin` folder.

@ -1064,7 +1081,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

### Development ARM build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make vmagent-linux-arm` or `make vmagent-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds the `vmagent-linux-arm` or `vmagent-linux-arm64` binary respectively and puts it into the `bin` folder.

@ -1431,8 +1448,6 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
     Supports array of values separated by comma or specified via multiple flags.
  -remoteWrite.relabelConfig string
     Optional path to file with relabel_config entries. The path can point either to local file or to http url. These entries are applied to all the metrics before sending them to -remoteWrite.url. See https://docs.victoriametrics.com/vmagent.html#relabeling for details
  -remoteWrite.relabelDebug
     Whether to log metrics before and after relabeling with -remoteWrite.relabelConfig. If the -remoteWrite.relabelDebug is enabled, then the metrics aren't sent to remote storage. This is useful for debugging the relabeling configs
  -remoteWrite.roundDigits array
     Round metric values to this number of decimal digits after the point before writing them to remote storage. Examples: -remoteWrite.roundDigits=2 would round 1.236 to 1.24, while -remoteWrite.roundDigits=-1 would round 126.78 to 130. By default digits rounding is disabled. Set it to 100 for disabling it for a particular remote storage. This option may be used for improving data compression for the stored metrics
     Supports array of values separated by comma or specified via multiple flags.
@ -1467,9 +1482,6 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
  -remoteWrite.urlRelabelConfig array
     Optional path to relabel config for the corresponding -remoteWrite.url. The path can point either to local file or to http url
     Supports an array of values separated by comma or specified via multiple flags.
  -remoteWrite.urlRelabelDebug array
     Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig. If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. This is useful for debugging the relabeling configs
     Supports array of values separated by comma or specified via multiple flags.
  -sortLabels
     Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}. Enabled sorting for labels can slow down ingestion performance a bit
  -tls

@ -1321,7 +1321,7 @@ spec:

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make vmalert` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds the `vmalert` binary and puts it into the `bin` folder.

@ -1337,7 +1337,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

### Development ARM build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make vmalert-linux-arm` or `make vmalert-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds the `vmalert-linux-arm` or `vmalert-linux-arm64` binary respectively and puts it into the `bin` folder.

@ -171,7 +171,7 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make vmauth` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds the `vmauth` binary and puts it into the `bin` folder.

@ -290,7 +290,7 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make vmbackup` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds the `vmbackup` binary and puts it into the `bin` folder.

@ -1021,7 +1021,7 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make vmctl` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds the `vmctl` binary and puts it into the `bin` folder.

@ -1050,7 +1050,7 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b

#### Development ARM build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make vmctl-linux-arm` or `make vmctl-linux-arm64` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds the `vmctl-linux-arm` or `vmctl-linux-arm64` binary respectively and puts it into the `bin` folder.

@ -175,6 +175,41 @@ curl 'http://localhost:8431/api/v1/labels' -H 'Authorization: Bearer eyJhbGciOiJ
# check rate limit
```

## JWT signature verification

`vmgateway` supports JWT signature verification.

Supported algorithms are `RS256`, `RS384`, `RS512`, `ES256`, `ES384`, `ES512`, `PS256`, `PS384` and `PS512`.
Tokens with unsupported algorithms will be rejected.

In order to enable JWT signature verification, you need to specify keys for signature verification.
The following flags are used to specify keys:
- `-auth.publicKeyFiles` - allows passing the path to a file with a public key.
- `-auth.publicKeys` - allows passing a public key directly.

Note that both flags support passing multiple keys, and they can also be used together.

Example usage:
```console
./bin/vmgateway -eula \
  -enable.auth \
  -write.url=http://localhost:8480 \
  -read.url=http://localhost:8481 \
  -auth.publicKeyFiles=public_key.pem \
  -auth.publicKeyFiles=public_key2.pem \
  -auth.publicKeys=`-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAu1SU1LfVLPHCozMxH2Mo
4lgOEePzNm0tRgeLezV6ffAt0gunVTLw7onLRnrq0/IzW7yWR7QkrmBL7jTKEn5u
+qKhbwKfBstIs+bMY2Zkp18gnTxKLxoS2tFczGkPLPgizskuemMghRniWaoLcyeh
kd3qqGElvW/VDL5AaWTg0nLVkjRo9z+40RQzuVaE8AkAFmxZzow3x+VJYKdjykkJ
0iT9wCS0DRTXu269V264Vf/3jvredZiKRkgwlL9xNAwxXFg0x/XFw005UWVRIkdg
cKWTjpBP2dPwVZ4WWC+9aGVd+Gyn1o0CLelf4rEjGoXbAAEgAqeGUxrcIlbjXfbc
mwIDAQAB
-----END PUBLIC KEY-----
`
```
This command results in 3 keys being loaded: 2 keys from files and 1 from the command line.
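As a sketch of how such a key pair might be produced (assuming OpenSSL is available; the file names are arbitrary):

```console
# Generate a 2048-bit RSA private key used for signing JWT tokens:
openssl genrsa -out private_key.pem 2048
# Extract the public key, which can be passed via -auth.publicKeyFiles:
openssl rsa -in private_key.pem -pubout -out public_key.pem
```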

## Configuration

The shortlist of configuration flags includes the following:

@ -182,6 +217,12 @@ The shortlist of configuration flags include the following:
```console
  -auth.httpHeader string
     HTTP header name to look for JWT authorization token (default "Authorization")
  -auth.publicKeyFiles array
     Path to file with public key to verify JWT token signature
     Supports an array of values separated by comma or specified via multiple flags.
  -auth.publicKeys array
     Public keys to verify JWT token signature
     Supports an array of values separated by comma or specified via multiple flags.
  -clusterMode
     enable this for the cluster version
  -datasource.appendTypePrefix

@ -340,7 +381,7 @@ The shortlist of configuration flags include the following:
## Limitations

* Access Control:
  * `jwt` token must be validated by external system, currently `vmgateway` can't validate the signature.
  * `jwt` token signature verification for `HMAC` algorithms is not supported.
* RateLimiting:
  * limits applied based on queries to `datasource.url`
  * only cluster version can be rate-limited.

@ -190,7 +190,7 @@ It is recommended using [binary releases](https://github.com/VictoriaMetrics/Vic

### Development build

1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.3.
1. [Install Go](https://golang.org/doc/install). The minimum supported version is Go 1.19.
2. Run `make vmrestore` from the root folder of [the repository](https://github.com/VictoriaMetrics/VictoriaMetrics).
   It builds the `vmrestore` binary and puts it into the `bin` folder.

26 go.mod
@ -5,7 +5,7 @@ go 1.19

require (
    cloud.google.com/go/storage v1.28.1
    github.com/Azure/azure-sdk-for-go/sdk/azcore v1.2.0
    github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.5.1
    github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1
    github.com/VictoriaMetrics/fastcache v1.12.0

    // Do not use the original github.com/valyala/fasthttp because of issues

@ -25,23 +25,23 @@ require (
    github.com/gogo/protobuf v1.3.2
    github.com/golang/snappy v0.0.4
    github.com/googleapis/gax-go/v2 v2.7.0
    github.com/influxdata/influxdb v1.10.0
    github.com/klauspost/compress v1.15.12
    github.com/influxdata/influxdb v1.11.0
    github.com/klauspost/compress v1.15.13
    github.com/mattn/go-colorable v0.1.13 // indirect
    github.com/mattn/go-runewidth v0.0.14 // indirect
    github.com/oklog/ulid v1.3.1
    github.com/prometheus/common v0.37.0 // indirect
    github.com/prometheus/prometheus v0.40.5
    github.com/urfave/cli/v2 v2.23.6
    github.com/prometheus/common v0.38.0 // indirect
    github.com/prometheus/prometheus v0.40.6
    github.com/urfave/cli/v2 v2.23.7
    github.com/valyala/fastjson v1.6.3
    github.com/valyala/fastrand v1.1.0
    github.com/valyala/fasttemplate v1.2.2
    github.com/valyala/gozstd v1.17.0
    github.com/valyala/quicktemplate v1.7.0
    golang.org/x/net v0.3.0
    golang.org/x/oauth2 v0.2.0
    golang.org/x/net v0.4.0
    golang.org/x/oauth2 v0.3.0
    golang.org/x/sys v0.3.0
    google.golang.org/api v0.103.0
    google.golang.org/api v0.104.0
    gopkg.in/yaml.v2 v2.4.0
)

@ -53,7 +53,7 @@ require (
    github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.1 // indirect
    github.com/VividCortex/ewma v1.2.0 // indirect
    github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
    github.com/aws/aws-sdk-go v1.44.153 // indirect
    github.com/aws/aws-sdk-go v1.44.157 // indirect
    github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10 // indirect
    github.com/aws/aws-sdk-go-v2/credentials v1.13.4 // indirect
    github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.20 // indirect

@ -81,7 +81,7 @@ require (
    github.com/golang/protobuf v1.5.2 // indirect
    github.com/google/go-cmp v0.5.9 // indirect
    github.com/google/uuid v1.3.0 // indirect
    github.com/googleapis/enterprise-certificate-proxy v0.2.0 // indirect
    github.com/googleapis/enterprise-certificate-proxy v0.2.1 // indirect
    github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd // indirect
    github.com/jmespath/go-jmespath v0.4.0 // indirect
    github.com/jpillora/backoff v1.0.0 // indirect

@ -107,13 +107,13 @@ require (
    go.opentelemetry.io/otel/trace v1.11.2 // indirect
    go.uber.org/atomic v1.10.0 // indirect
    go.uber.org/goleak v1.2.0 // indirect
    golang.org/x/exp v0.0.0-20221205204356-47842c84f3db // indirect
    golang.org/x/exp v0.0.0-20221208152030-732eee02a75a // indirect
    golang.org/x/sync v0.1.0 // indirect
    golang.org/x/text v0.5.0 // indirect
    golang.org/x/time v0.3.0 // indirect
    golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
    google.golang.org/appengine v1.6.7 // indirect
    google.golang.org/genproto v0.0.0-20221205194025-8222ab48f5fc // indirect
    google.golang.org/genproto v0.0.0-20221207170731-23e4bf6bdc37 // indirect
    google.golang.org/grpc v1.51.0 // indirect
    google.golang.org/protobuf v1.28.1 // indirect
    gopkg.in/yaml.v3 v3.0.1 // indirect
63 go.sum
@ -48,8 +48,8 @@ github.com/Azure/azure-sdk-for-go/sdk/azcore v1.2.0/go.mod h1:uGG2W01BaETf0Ozp+Q
|
|||
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0 h1:QkAcEIAKbNL4KoFr4SathZPhDhF4mVwpBMFlYjyAqy8=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.1 h1:Oj853U9kG+RLTCQXpjvOnrv0WaZHxgmZz1TlLywgOPY=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.1/go.mod h1:eWRD7oawr1Mu1sLCawqVc0CUiF43ia3qQMxLscsKQ9w=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.5.1 h1:BMTdr+ib5ljLa9MxTJK8x/Ds0MbBb4MfuW5BL0zMJnI=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.5.1/go.mod h1:c6WvOhtmjNUWbLfOG1qxM/q0SPvQNSVJvolm+C52dIU=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1 h1:YvQv9Mz6T8oR5ypQOL6erY0Z5t71ak1uHV4QFokCOZk=
|
||||
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.6.1/go.mod h1:c6WvOhtmjNUWbLfOG1qxM/q0SPvQNSVJvolm+C52dIU=
|
||||
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
|
||||
github.com/Azure/go-autorest/autorest v0.11.28 h1:ndAExarwr5Y+GaHE6VCaY1kyS/HwwGGyuimVhWsHOEM=
|
||||
github.com/Azure/go-autorest/autorest/adal v0.9.21 h1:jjQnVFXPfekaqb8vIsv2G1lxshoW+oGv4MDlhRtnYZk=
|
||||
|
@ -89,8 +89,8 @@ github.com/andybalholm/brotli v1.0.2/go.mod h1:loMXtMfwqflxFJPmdbJO0a3KNoPuLBgiu
|
|||
github.com/andybalholm/brotli v1.0.3/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
|
||||
github.com/armon/go-metrics v0.3.10 h1:FR+drcQStOe+32sYyJYyZ7FIdgoGGBnwLl+flodp8Uo=
|
||||
github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
|
||||
github.com/aws/aws-sdk-go v1.44.153 h1:KfN5URb9O/Fk48xHrAinrPV2DzPcLa0cd9yo1ax5KGg=
|
||||
github.com/aws/aws-sdk-go v1.44.153/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
|
||||
github.com/aws/aws-sdk-go v1.44.157 h1:JVBPpEWC8+yA7CbfAuTl/ZFFlHS3yoqWFqxFyTCISwg=
|
||||
github.com/aws/aws-sdk-go v1.44.157/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
|
||||
github.com/aws/aws-sdk-go-v2 v1.17.2 h1:r0yRZInwiPBNpQ4aDy/Ssh3ROWsGtKDwar2JS8Lm+N8=
|
||||
github.com/aws/aws-sdk-go-v2 v1.17.2/go.mod h1:uzbQtefpm44goOPmdKyAlXSNcwlRgF3ePWVW6EtJvvw=
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10 h1:dK82zF6kkPeCo8J1e+tGx4JdvDIQzj7ygIoLg8WMuGs=
|
||||
|
@ -181,7 +181,6 @@ github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2
|
|||
github.com/go-kit/kit v0.12.0 h1:e4o3o3IsBfAKQh5Qbbiqyfu97Ku7jrO/JbohvztANh4=
|
||||
github.com/go-kit/kit v0.12.0/go.mod h1:lHd+EkCZPIwYItmGDDRdhinkzX2A1sj+M9biaEaizzs=
|
||||
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
|
||||
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
|
||||
github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU=
|
||||
github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
|
||||
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||
|
@ -272,8 +271,8 @@ github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm4
|
|||
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
|
||||
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.2.0 h1:y8Yozv7SZtlU//QXbezB6QkpuE6jMD2/gfzk4AftXjs=
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.2.0/go.mod h1:8C0jb7/mgJe/9KK8Lm7X9ctZC2t60YyIpYEI16jx0Qg=
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.2.1 h1:RY7tHKZcRlk788d5WSo/e83gOyyy742E8GSs771ySpg=
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.2.1/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k=
|
||||
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
|
||||
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
|
||||
github.com/googleapis/gax-go/v2 v2.7.0 h1:IcsPKeInNvYi7eqSaDjiZqDDKu5rsmunY0Y1YupQSSQ=
|
||||
|
@ -297,8 +296,8 @@ github.com/hashicorp/serf v0.9.7 h1:hkdgbqizGQHuU5IPqYM1JdSMV8nKfpuOnZYXssk9muY=
|
|||
github.com/hetznercloud/hcloud-go v1.35.3 h1:WCmFAhLRooih2QHAsbCbEdpIHnshQQmrPqsr3rHE1Ow=
|
||||
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
|
||||
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
|
||||
github.com/influxdata/influxdb v1.10.0 h1:8xDpt8KO3lzrzf/ss+l8r42AGUZvoITu5824berK7SE=
|
||||
github.com/influxdata/influxdb v1.10.0/go.mod h1:IVPuoA2pOOxau/NguX7ipW0Jp9Bn+dMWlo0+VOscevU=
|
||||
github.com/influxdata/influxdb v1.11.0 h1:0X+ZsbcOWc6AEi5MHee9BYqXCKmz8IZsljrRYjmV8Qg=
|
||||
github.com/influxdata/influxdb v1.11.0/go.mod h1:V93tJcidY0Zh0LtSONZWnXXGDyt20dtVf+Ddp4EnhaA=
|
||||
github.com/ionos-cloud/sdk-go/v6 v6.1.3 h1:vb6yqdpiqaytvreM0bsn2pXw+1YDvEk2RKSmBAQvgDQ=
|
||||
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
|
||||
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
|
||||
|
@ -311,7 +310,6 @@ github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCV
|
|||
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
||||
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
||||
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
|
||||
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
|
||||
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
|
||||
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
|
||||
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
|
||||
|
@ -320,8 +318,8 @@ github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI
|
|||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/klauspost/compress v1.13.4/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
|
||||
github.com/klauspost/compress v1.13.5/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
|
||||
github.com/klauspost/compress v1.15.12 h1:YClS/PImqYbn+UILDnqxQCZ3RehC9N318SU3kElDUEM=
|
||||
github.com/klauspost/compress v1.15.12/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
|
||||
github.com/klauspost/compress v1.15.13 h1:NFn1Wr8cfnenSJSA46lLq4wHCcBzKTSjnBIexDMMOV0=
|
||||
github.com/klauspost/compress v1.15.13/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
|
||||
github.com/kolo/xmlrpc v0.0.0-20220921171641-a4b6fa1dd06b h1:udzkj9S/zlT5X367kqJis0QP7YMxobob6zhzq6Yre00=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||
|
@ -357,7 +355,6 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJ
|
|||
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
|
||||
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
|
||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
|
||||
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
|
||||
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
|
||||
|
@ -378,7 +375,6 @@ github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXP
|
|||
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
|
||||
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
|
||||
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
|
||||
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
|
||||
github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
|
||||
github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
|
||||
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
|
||||
|
@ -391,20 +387,18 @@ github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y8
|
|||
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
|
||||
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
|
||||
github.com/prometheus/common v0.29.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
|
||||
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
|
||||
github.com/prometheus/common v0.37.0 h1:ccBbHCgIiT9uSoFY0vX8H3zsNR5eLt17/RQLUvn8pXE=
|
||||
github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
|
||||
github.com/prometheus/common v0.38.0 h1:VTQitp6mXTdUoCmDMugDVOJ1opi6ADftKfp/yeqTR/E=
|
||||
github.com/prometheus/common v0.38.0/go.mod h1:MBXfmBQZrK5XpbCkjofnXs96LD2QQ7fEq4C0xjC/yec=
|
||||
github.com/prometheus/common/sigv4 v0.1.0 h1:qoVebwtwwEhS85Czm2dSROY5fTo2PAPEVdDeppTwGX4=
|
||||
github.com/prometheus/common/sigv4 v0.1.0/go.mod h1:2Jkxxk9yYvCkE5G1sQT7GuEXm57JrvHu9k5YwTjsNtI=
|
||||
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
|
||||
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
|
||||
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
|
||||
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
|
||||
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
|
||||
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
|
||||
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
|
||||
github.com/prometheus/prometheus v0.40.5 h1:wmk5yNrQlkQ2OvZucMhUB4k78AVfG34szb1UtopS8Vc=
|
||||
github.com/prometheus/prometheus v0.40.5/go.mod h1:bxgdmtoSNLmmIVPGmeTJ3OiP67VmuY4yalE4ZP6L/j8=
|
||||
github.com/prometheus/prometheus v0.40.6 h1:JP2Wbm4HJI9OlWbOzCGRL3zlOXFdSzC0TttI09+EodM=
|
||||
github.com/prometheus/prometheus v0.40.6/go.mod h1:nO+vI0cJo1ezp2DPGw5NEnTlYHGRpBFrqE4zb9O0g0U=
|
||||
github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
|
||||
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
|
||||
github.com/rivo/uniseg v0.4.3 h1:utMvzDsuh3suAEnhH0RdHmoPbU648o6CvXxTx4SBMOw=
|
||||
|
@ -430,8 +424,8 @@ github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
|
|||
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
|
||||
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
|
||||
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
|
||||
github.com/urfave/cli/v2 v2.23.6 h1:iWmtKD+prGo1nKUtLO0Wg4z9esfBM4rAV4QRLQiEmJ4=
|
||||
github.com/urfave/cli/v2 v2.23.6/go.mod h1:GHupkWPMM0M/sj1a2b4wUrWBPzazNrIjouW6fmdJLxc=
|
||||
github.com/urfave/cli/v2 v2.23.7 h1:YHDQ46s3VghFHFf1DdF+Sh7H4RqhcM+t0TmZRJx4oJY=
|
||||
github.com/urfave/cli/v2 v2.23.7/go.mod h1:GHupkWPMM0M/sj1a2b4wUrWBPzazNrIjouW6fmdJLxc=
|
||||
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
|
||||
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
|
||||
github.com/valyala/fasthttp v1.30.0/go.mod h1:2rsYD01CKFrjjsvFxx75KlEUNpWNBY9JWD3K/7o2Cus=
|
||||
|
@ -494,8 +488,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0
|
|||
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
|
||||
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
|
||||
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
|
||||
golang.org/x/exp v0.0.0-20221205204356-47842c84f3db h1:D/cFflL63o2KSLJIwjlcIt8PR064j/xsmdEJL/YvY/o=
|
||||
golang.org/x/exp v0.0.0-20221205204356-47842c84f3db/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
|
||||
golang.org/x/exp v0.0.0-20221208152030-732eee02a75a h1:4iLhBPcpqFmylhnkbY3W0ONLUYYkDAW9xMFLfxgsvCw=
|
||||
golang.org/x/exp v0.0.0-20221208152030-732eee02a75a/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
|
||||
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
|
||||
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
|
||||
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
|
@ -552,21 +546,18 @@ golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwY
|
|||
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20210510120150-4163338589ed/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
||||
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
||||
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
||||
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
|
||||
golang.org/x/net v0.3.0 h1:VWL6FNY2bEEmsGVKabSlHu5Irp34xmMRoqb/9lF9lxk=
|
||||
golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
|
||||
golang.org/x/net v0.4.0 h1:Q5QPcMlvfxFTAPV0+07Xz/MpK9NTXu2VDUuy0FeMfaU=
|
||||
golang.org/x/net v0.4.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
|
||||
golang.org/x/oauth2 v0.2.0 h1:GtQkldQ9m7yvzCL1V+LrYow3Khe0eJH0w7RbX/VbaIU=
|
||||
golang.org/x/oauth2 v0.2.0/go.mod h1:Cwn6afJ8jrQwYMxQDTpISoXmXW9I6qF6vDeuuoX3Ibs=
|
||||
golang.org/x/oauth2 v0.3.0 h1:6l90koy8/LaBLmLu8jpHeHexzMwEita0zFfYlggy2F8=
|
||||
golang.org/x/oauth2 v0.3.0/go.mod h1:rQrIauxkUhJ6CuwEXwymO2/eh4xz2ZWF1nBkcxS+tGk=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
|
@ -620,8 +611,6 @@ golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBc
|
|||
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220405052023-b1e9470b6e64/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
|
@ -715,8 +704,8 @@ google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0M
|
|||
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
|
||||
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
|
||||
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
|
||||
google.golang.org/api v0.103.0 h1:9yuVqlu2JCvcLg9p8S3fcFLZij8EPSyvODIY1rkMizQ=
|
||||
google.golang.org/api v0.103.0/go.mod h1:hGtW6nK1AC+d9si/UBhw8Xli+QMOf6xyNAyJw4qU9w0=
|
||||
google.golang.org/api v0.104.0 h1:KBfmLRqdZEbwQleFlSLnzpQJwhjpmNOk4cKQIBDZ9mg=
|
||||
google.golang.org/api v0.104.0/go.mod h1:JCspTXJbBxa5ySXw4UgUqVer7DfVxbvc/CTUFqAED5U=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
|
@ -754,8 +743,8 @@ google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7Fc
|
|||
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20221205194025-8222ab48f5fc h1:nUKKji0AarrQKh6XpFEpG3p1TNztxhe7C8TcUvDgXqw=
|
||||
google.golang.org/genproto v0.0.0-20221205194025-8222ab48f5fc/go.mod h1:1dOng4TWOomJrDGhpXjfCD35wQC6jnC7HpRmOFRqEV0=
|
||||
google.golang.org/genproto v0.0.0-20221207170731-23e4bf6bdc37 h1:jmIfw8+gSvXcZSgaFAGyInDXeWzUhvYH57G/5GKMn70=
|
||||
google.golang.org/genproto v0.0.0-20221207170731-23e4bf6bdc37/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
|
||||
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
||||
|
|
|
@ -18,14 +18,14 @@ import (
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
type RelabelConfig struct {
    If           *IfExpression   `yaml:"if,omitempty"`
    Action       string          `yaml:"action,omitempty"`
    SourceLabels []string        `yaml:"source_labels,flow,omitempty"`
    Separator    *string         `yaml:"separator,omitempty"`
    TargetLabel  string          `yaml:"target_label,omitempty"`
    Regex        *MultiLineRegex `yaml:"regex,omitempty"`
    Modulus      uint64          `yaml:"modulus,omitempty"`
    Replacement  *string         `yaml:"replacement,omitempty"`
    Action       string          `yaml:"action,omitempty"`
    If           *IfExpression   `yaml:"if,omitempty"`

    // Match is used together with Labels for `action: graphite`. For example:
    // - action: graphite
@ -121,8 +121,7 @@ func (mlr *MultiLineRegex) MarshalYAML() (interface{}, error) {
|
|||
|
||||
// ParsedConfigs represents parsed relabel configs.
|
||||
type ParsedConfigs struct {
|
||||
prcs []*parsedRelabelConfig
|
||||
relabelDebug bool
|
||||
prcs []*parsedRelabelConfig
|
||||
}
|
||||
|
||||
// Len returns the number of relabel configs in pcs.
|
||||
|
@ -140,14 +139,23 @@ func (pcs *ParsedConfigs) String() string {
|
|||
}
|
||||
var a []string
|
||||
for _, prc := range pcs.prcs {
|
||||
s := "[" + prc.String() + "]"
|
||||
s := prc.String()
|
||||
lines := strings.Split(s, "\n")
|
||||
lines[0] = "- " + lines[0]
|
||||
for i := range lines[1:] {
|
||||
line := &lines[1+i]
|
||||
if len(*line) > 0 {
|
||||
*line = " " + *line
|
||||
}
|
||||
}
|
||||
s = strings.Join(lines, "\n")
|
||||
a = append(a, s)
|
||||
}
|
||||
return fmt.Sprintf("%s, relabelDebug=%v", strings.Join(a, ","), pcs.relabelDebug)
|
||||
return strings.Join(a, "")
|
||||
}
|
||||
|
||||
// LoadRelabelConfigs loads relabel configs from the given path.
|
||||
func LoadRelabelConfigs(path string, relabelDebug bool) (*ParsedConfigs, error) {
|
||||
func LoadRelabelConfigs(path string) (*ParsedConfigs, error) {
|
||||
data, err := fs.ReadFileOrHTTP(path)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot read `relabel_configs` from %q: %w", path, err)
|
||||
|
@ -156,7 +164,7 @@ func LoadRelabelConfigs(path string, relabelDebug bool) (*ParsedConfigs, error)
|
|||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot expand environment vars at %q: %w", path, err)
|
||||
}
|
||||
pcs, err := ParseRelabelConfigsData(data, relabelDebug)
|
||||
pcs, err := ParseRelabelConfigsData(data)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("cannot unmarshal `relabel_configs` from %q: %w", path, err)
|
||||
}
|
||||
|
@ -164,16 +172,16 @@ func LoadRelabelConfigs(path string, relabelDebug bool) (*ParsedConfigs, error)
|
|||
}
|
||||
|
||||
// ParseRelabelConfigsData parses relabel configs from the given data.
|
||||
func ParseRelabelConfigsData(data []byte, relabelDebug bool) (*ParsedConfigs, error) {
|
||||
func ParseRelabelConfigsData(data []byte) (*ParsedConfigs, error) {
|
||||
var rcs []RelabelConfig
|
||||
if err := yaml.UnmarshalStrict(data, &rcs); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return ParseRelabelConfigs(rcs, relabelDebug)
|
||||
return ParseRelabelConfigs(rcs)
|
||||
}
|
||||
|
||||
// ParseRelabelConfigs parses rcs to dst.
|
||||
func ParseRelabelConfigs(rcs []RelabelConfig, relabelDebug bool) (*ParsedConfigs, error) {
|
||||
func ParseRelabelConfigs(rcs []RelabelConfig) (*ParsedConfigs, error) {
|
||||
if len(rcs) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
@ -186,8 +194,7 @@ func ParseRelabelConfigs(rcs []RelabelConfig, relabelDebug bool) (*ParsedConfigs
|
|||
prcs[i] = prc
|
||||
}
|
||||
return &ParsedConfigs{
|
||||
prcs: prcs,
|
||||
relabelDebug: relabelDebug,
|
||||
prcs: prcs,
|
||||
}, nil
|
||||
}
|
||||
|
||||
|
@ -350,7 +357,13 @@ func parseRelabelConfig(rc *RelabelConfig) (*parsedRelabelConfig, error) {
|
|||
return nil, fmt.Errorf("`labels` config cannot be applied to `action=%s`; it is applied only to `action=graphite`", action)
|
||||
}
|
||||
}
|
||||
ruleOriginal, err := yaml.Marshal(rc)
|
||||
if err != nil {
|
||||
logger.Panicf("BUG: cannot marshal RelabelConfig: %s", err)
|
||||
}
|
||||
prc := &parsedRelabelConfig{
|
||||
ruleOriginal: string(ruleOriginal),
|
||||
|
||||
SourceLabels: sourceLabels,
|
||||
Separator: separator,
|
||||
TargetLabel: targetLabel,
|
||||
|
|
|
@ -50,7 +50,7 @@ func TestRelabelConfigMarshalUnmarshal(t *testing.T) {
|
|||
f(`
|
||||
- action: keep
|
||||
regex: foobar
|
||||
`, "- regex: foobar\n action: keep\n")
|
||||
`, "- action: keep\n regex: foobar\n")
|
||||
f(`
|
||||
- regex:
|
||||
- 'fo.+'
|
||||
|
@ -80,7 +80,7 @@ func TestRelabelConfigMarshalUnmarshal(t *testing.T) {
|
|||
|
||||
func TestLoadRelabelConfigsSuccess(t *testing.T) {
|
||||
path := "testdata/relabel_configs_valid.yml"
|
||||
pcs, err := LoadRelabelConfigs(path, false)
|
||||
pcs, err := LoadRelabelConfigs(path)
|
||||
if err != nil {
|
||||
t.Fatalf("cannot load relabel configs from %q: %s", path, err)
|
||||
}
|
||||
|
@ -93,7 +93,7 @@ func TestLoadRelabelConfigsSuccess(t *testing.T) {
|
|||
func TestLoadRelabelConfigsFailure(t *testing.T) {
|
||||
f := func(path string) {
|
||||
t.Helper()
|
||||
rcs, err := LoadRelabelConfigs(path, false)
|
||||
rcs, err := LoadRelabelConfigs(path)
|
||||
if err == nil {
|
||||
t.Fatalf("expecting non-nil error")
|
||||
}
|
||||
|
@ -112,7 +112,7 @@ func TestLoadRelabelConfigsFailure(t *testing.T) {
|
|||
func TestParsedConfigsString(t *testing.T) {
|
||||
f := func(rcs []RelabelConfig, sExpected string) {
|
||||
t.Helper()
|
||||
pcs, err := ParseRelabelConfigs(rcs, false)
|
||||
pcs, err := ParseRelabelConfigs(rcs)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
@ -126,8 +126,7 @@ func TestParsedConfigsString(t *testing.T) {
|
|||
TargetLabel: "foo",
|
||||
SourceLabels: []string{"aaa"},
|
||||
},
|
||||
}, "[SourceLabels=[aaa], Separator=;, TargetLabel=foo, Regex=.*, Modulus=0, Replacement=$1, Action=replace, If=, "+
|
||||
"graphiteMatchTemplate=<nil>, graphiteLabelRules=[]], relabelDebug=false")
|
||||
}, "- source_labels: [aaa]\n target_label: foo\n")
|
||||
var ie IfExpression
|
||||
if err := ie.Parse("{foo=~'bar'}"); err != nil {
|
||||
t.Fatalf("unexpected error when parsing if expression: %s", err)
|
||||
|
@ -141,8 +140,8 @@ func TestParsedConfigsString(t *testing.T) {
|
|||
},
|
||||
If: &ie,
|
||||
},
|
||||
}, "[SourceLabels=[], Separator=;, TargetLabel=, Regex=.*, Modulus=0, Replacement=$1, Action=graphite, If={foo=~'bar'}, "+
|
||||
"graphiteMatchTemplate=foo.*.bar, graphiteLabelRules=[replaceTemplate=$1-zz, targetLabel=job]], relabelDebug=false")
|
||||
}, "- if: '{foo=~''bar''}'\n action: graphite\n match: foo.*.bar\n labels:\n job: $1-zz\n")
|
||||
replacement := "foo"
|
||||
f([]RelabelConfig{
|
||||
{
|
||||
Action: "replace",
|
||||
|
@ -150,19 +149,23 @@ func TestParsedConfigsString(t *testing.T) {
|
|||
TargetLabel: "x",
|
||||
If: &ie,
|
||||
},
|
||||
}, "[SourceLabels=[foo bar], Separator=;, TargetLabel=x, Regex=.*, Modulus=0, Replacement=$1, Action=replace, If={foo=~'bar'}, "+
|
||||
"graphiteMatchTemplate=<nil>, graphiteLabelRules=[]], relabelDebug=false")
|
||||
{
|
||||
TargetLabel: "x",
|
||||
Replacement: &replacement,
|
||||
},
|
||||
}, "- if: '{foo=~''bar''}'\n action: replace\n source_labels: [foo, bar]\n target_label: x\n- target_label: x\n replacement: foo\n")
|
||||
}
|
||||
|
||||
func TestParseRelabelConfigsSuccess(t *testing.T) {
|
||||
f := func(rcs []RelabelConfig, pcsExpected *ParsedConfigs) {
|
||||
t.Helper()
|
||||
pcs, err := ParseRelabelConfigs(rcs, false)
|
||||
pcs, err := ParseRelabelConfigs(rcs)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
if pcs != nil {
|
||||
for _, prc := range pcs.prcs {
|
||||
prc.ruleOriginal = ""
|
||||
prc.stringReplacer = nil
|
||||
prc.submatchReplacer = nil
|
||||
}
|
||||
|
@ -198,7 +201,7 @@ func TestParseRelabelConfigsSuccess(t *testing.T) {
|
|||
func TestParseRelabelConfigsFailure(t *testing.T) {
|
||||
f := func(rcs []RelabelConfig) {
|
||||
t.Helper()
|
||||
pcs, err := ParseRelabelConfigs(rcs, false)
|
||||
pcs, err := ParseRelabelConfigs(rcs)
|
||||
if err == nil {
|
||||
t.Fatalf("expecting non-nil error")
|
||||
}
|
||||
|
|
|
@ -122,7 +122,7 @@ func TestIfExpressionMatch(t *testing.T) {
|
|||
if err := yaml.UnmarshalStrict([]byte(ifExpr), &ie); err != nil {
|
||||
t.Fatalf("unexpected error during unmarshal: %s", err)
|
||||
}
|
||||
labels := promutils.NewLabelsFromString(metricWithLabels)
|
||||
labels := promutils.MustNewLabelsFromString(metricWithLabels)
|
||||
if !ie.Match(labels.GetLabels()) {
|
||||
t.Fatalf("unexpected mismatch of ifExpr=%s for %s", ifExpr, metricWithLabels)
|
||||
}
|
||||
|
@ -156,7 +156,7 @@ func TestIfExpressionMismatch(t *testing.T) {
|
|||
if err := yaml.UnmarshalStrict([]byte(ifExpr), &ie); err != nil {
|
||||
t.Fatalf("unexpected error during unmarshal: %s", err)
|
||||
}
|
||||
labels := promutils.NewLabelsFromString(metricWithLabels)
|
||||
labels := promutils.MustNewLabelsFromString(metricWithLabels)
|
||||
if ie.Match(labels.GetLabels()) {
|
||||
t.Fatalf("unexpected match of ifExpr=%s for %s", ifExpr, metricWithLabels)
|
||||
}
|
||||
|
|
|
@ -18,6 +18,9 @@ import (
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
type parsedRelabelConfig struct {
    // ruleOriginal contains the original relabeling rule for the given parsedRelabelConfig.
    ruleOriginal string

    SourceLabels []string
    Separator    string
    TargetLabel  string
@ -41,50 +44,78 @@ type parsedRelabelConfig struct {
    submatchReplacer *bytesutil.FastStringTransformer
}

// DebugStep contains debug information about a single relabeling rule step.
type DebugStep struct {
    // Rule contains the string representation of the rule step.
    Rule string

    // In contains the input labels before the execution of the rule step.
    In string

    // Out contains the output labels after the execution of the rule step.
    Out string
}

// String returns a human-readable representation for ds.
func (ds DebugStep) String() string {
    return fmt.Sprintf("rule=%q, in=%s, out=%s", ds.Rule, ds.In, ds.Out)
}

// String returns a human-readable representation for prc.
func (prc *parsedRelabelConfig) String() string {
    return fmt.Sprintf("SourceLabels=%s, Separator=%s, TargetLabel=%s, Regex=%s, Modulus=%d, Replacement=%s, Action=%s, If=%s, graphiteMatchTemplate=%s, graphiteLabelRules=%s",
        prc.SourceLabels, prc.Separator, prc.TargetLabel, prc.regexOriginal, prc.Modulus, prc.Replacement,
        prc.Action, prc.If, prc.graphiteMatchTemplate, prc.graphiteLabelRules)
    return prc.ruleOriginal
}

// ApplyDebug applies pcs to labels in debug mode.
//
// It returns a DebugStep list - one entry per each applied relabeling step.
func (pcs *ParsedConfigs) ApplyDebug(labels []prompbmarshal.Label) ([]prompbmarshal.Label, []DebugStep) {
    labels, dss := pcs.applyInternal(labels, 0, true)
    return labels, dss
}

// Apply applies pcs to labels starting from the labelsOffset.
func (pcs *ParsedConfigs) Apply(labels []prompbmarshal.Label, labelsOffset int) []prompbmarshal.Label {
    var inStr string
    relabelDebug := false
    labels, _ = pcs.applyInternal(labels, labelsOffset, false)
    return labels
}

func (pcs *ParsedConfigs) applyInternal(labels []prompbmarshal.Label, labelsOffset int, debug bool) ([]prompbmarshal.Label, []DebugStep) {
    var dss []DebugStep
    inStr := ""
    if debug {
        inStr = LabelsToString(labels[labelsOffset:])
    }
    if pcs != nil {
        relabelDebug = pcs.relabelDebug
        if relabelDebug {
            inStr = labelsToString(labels[labelsOffset:])
        }
        for _, prc := range pcs.prcs {
            tmp := prc.apply(labels, labelsOffset)
            if len(tmp) == labelsOffset {
                // All the labels have been removed.
                if pcs.relabelDebug {
                    logger.Infof("\nRelabel In: %s\nRelabel Out: DROPPED - all labels removed", inStr)
                }
                return tmp
            labels = prc.apply(labels, labelsOffset)
            if debug {
                outStr := LabelsToString(labels[labelsOffset:])
                dss = append(dss, DebugStep{
                    Rule: prc.String(),
                    In:   inStr,
                    Out:  outStr,
                })
                inStr = outStr
            }
            if len(labels) == labelsOffset {
                // All the labels have been removed.
                return labels, dss
            }
            labels = tmp
        }
    }
    labels = removeEmptyLabels(labels, labelsOffset)
    if relabelDebug {
        if len(labels) == labelsOffset {
            logger.Infof("\nRelabel In: %s\nRelabel Out: DROPPED - all labels removed", inStr)
            return labels
    if debug {
        outStr := LabelsToString(labels[labelsOffset:])
        if outStr != inStr {
            dss = append(dss, DebugStep{
                Rule: "remove empty labels",
                In:   inStr,
                Out:  outStr,
            })
        }
        outStr := labelsToString(labels[labelsOffset:])
        if inStr == outStr {
            logger.Infof("\nRelabel In: %s\nRelabel Out: KEPT AS IS - no change", inStr)
        } else {
            logger.Infof("\nRelabel In: %s\nRelabel Out: %s", inStr, outStr)
        }
        // Drop labels
        labels = labels[:labelsOffset]
    }
    return labels
    return labels, dss
}

func removeEmptyLabels(labels []prompbmarshal.Label, labelsOffset int) []prompbmarshal.Label {
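To make the new debug API above more concrete, here is a minimal usage sketch built only from the functions shown in this diff (`ParseRelabelConfigsData`, `ApplyDebug`, `LabelsToString`); the import path of the promrelabel package is assumed from the repository layout, and this snippet is an illustration, not part of the commit:

```go
package main

import (
    "fmt"

    "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
    "github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
)

func main() {
    // ParseRelabelConfigsData no longer takes a relabelDebug flag after this commit.
    pcs, err := promrelabel.ParseRelabelConfigsData([]byte("- target_label: abc\n  replacement: xyz\n"))
    if err != nil {
        panic(err)
    }
    labels := []prompbmarshal.Label{
        {Name: "__name__", Value: "foo"},
        {Name: "bar", Value: "baz"},
    }
    // ApplyDebug returns the relabeled labels plus one DebugStep per executed rule.
    result, steps := pcs.ApplyDebug(labels)
    fmt.Println(promrelabel.LabelsToString(result)) // foo{abc="xyz",bar="baz"}
    for _, step := range steps {
        fmt.Println(step.String())
    }
}
```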
@ -504,25 +535,27 @@ func CleanLabels(labels []prompbmarshal.Label) {
|
|||
}
|
||||
}
|
||||
|
||||
func labelsToString(labels []prompbmarshal.Label) string {
// LabelsToString returns Prometheus string representation for the given labels.
//
// Labels in the returned string are sorted by name,
// while the __name__ label is put in front of {} labels.
func LabelsToString(labels []prompbmarshal.Label) string {
    labelsCopy := append([]prompbmarshal.Label{}, labels...)
    SortLabels(labelsCopy)
    mname := ""
    for _, label := range labelsCopy {
    for i, label := range labelsCopy {
        if label.Name == "__name__" {
            mname = label.Value
            labelsCopy = append(labelsCopy[:i], labelsCopy[i+1:]...)
            break
        }
    }
    if mname != "" && len(labelsCopy) <= 1 {
    if mname != "" && len(labelsCopy) == 0 {
        return mname
    }
    b := []byte(mname)
    b = append(b, '{')
    for i, label := range labelsCopy {
        if label.Name == "__name__" {
            continue
        }
        b = append(b, label.Name...)
        b = append(b, '=')
        b = strconv.AppendQuote(b, label.Value)
|
||||
|
|
|
@ -2,6 +2,7 @@ package promrelabel
|
|||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
|
||||
|
@ -27,7 +28,7 @@ func TestSanitizeName(t *testing.T) {
|
|||
func TestLabelsToString(t *testing.T) {
|
||||
f := func(labels []prompbmarshal.Label, sExpected string) {
|
||||
t.Helper()
|
||||
s := labelsToString(labels)
|
||||
s := LabelsToString(labels)
|
||||
if s != sExpected {
|
||||
t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", s, sExpected)
|
||||
}
|
||||
|
@ -71,20 +72,95 @@ func TestLabelsToString(t *testing.T) {
|
|||
}, `xxx{a="bc",foo="bar"}`)
|
||||
}
|
||||
|
||||
func TestApplyRelabelConfigs(t *testing.T) {
|
||||
f := func(config, metric string, isFinalize bool, resultExpected string) {
|
||||
func TestParsedRelabelConfigsApplyDebug(t *testing.T) {
|
||||
f := func(config, metric string, dssExpected []DebugStep) {
|
||||
t.Helper()
|
||||
pcs, err := ParseRelabelConfigsData([]byte(config), false)
|
||||
pcs, err := ParseRelabelConfigsData([]byte(config))
|
||||
if err != nil {
|
||||
t.Fatalf("cannot parse %q: %s", config, err)
|
||||
}
|
||||
labels := promutils.NewLabelsFromString(metric)
|
||||
labels := promutils.MustNewLabelsFromString(metric)
|
||||
_, dss := pcs.ApplyDebug(labels.GetLabels())
|
||||
if !reflect.DeepEqual(dss, dssExpected) {
|
||||
t.Fatalf("unexpected result; got\n%s\nwant\n%s", dss, dssExpected)
|
||||
}
|
||||
}
|
||||
|
||||
// empty relabel config
|
||||
f(``, `foo`, nil)
|
||||
// add label
|
||||
f(`
|
||||
- target_label: abc
|
||||
replacement: xyz
|
||||
`, `foo{bar="baz"}`, []DebugStep{
|
||||
{
|
||||
Rule: "target_label: abc\nreplacement: xyz\n",
|
||||
In: `foo{bar="baz"}`,
|
||||
Out: `foo{abc="xyz",bar="baz"}`,
|
||||
},
|
||||
})
|
||||
// drop label
|
||||
f(`
|
||||
- target_label: bar
|
||||
replacement: ''
|
||||
`, `foo{bar="baz"}`, []DebugStep{
|
||||
{
|
||||
Rule: "target_label: bar\nreplacement: \"\"\n",
|
||||
In: `foo{bar="baz"}`,
|
||||
Out: `foo{bar=""}`,
|
||||
},
|
||||
{
|
||||
Rule: "remove empty labels",
|
||||
In: `foo{bar=""}`,
|
||||
Out: `foo`,
|
||||
},
|
||||
})
|
||||
// drop metric
|
||||
f(`
|
||||
- action: drop
|
||||
source_labels: [bar]
|
||||
regex: baz
|
||||
`, `foo{bar="baz",abc="def"}`, []DebugStep{
|
||||
{
|
||||
Rule: "action: drop\nsource_labels: [bar]\nregex: baz\n",
|
||||
In: `foo{abc="def",bar="baz"}`,
|
||||
Out: `{}`,
|
||||
},
|
||||
})
|
||||
// Multiple steps
|
||||
f(`
|
||||
- action: labeldrop
|
||||
regex: "foo.*"
|
||||
- target_label: foobar
|
||||
replacement: "abc"
|
||||
`, `m{foo="x",foobc="123",a="b"}`, []DebugStep{
|
||||
{
|
||||
Rule: "action: labeldrop\nregex: foo.*\n",
|
||||
In: `m{a="b",foo="x",foobc="123"}`,
|
||||
Out: `m{a="b"}`,
|
||||
},
|
||||
{
|
||||
Rule: "target_label: foobar\nreplacement: abc\n",
|
||||
In: `m{a="b"}`,
|
||||
Out: `m{a="b",foobar="abc"}`,
|
||||
},
|
||||
})
|
||||
}
|
||||
|
||||
func TestParsedRelabelConfigsApply(t *testing.T) {
|
||||
f := func(config, metric string, isFinalize bool, resultExpected string) {
|
||||
t.Helper()
|
||||
pcs, err := ParseRelabelConfigsData([]byte(config))
|
||||
if err != nil {
|
||||
t.Fatalf("cannot parse %q: %s", config, err)
|
||||
}
|
||||
labels := promutils.MustNewLabelsFromString(metric)
|
||||
resultLabels := pcs.Apply(labels.GetLabels(), 0)
|
||||
if isFinalize {
|
||||
resultLabels = FinalizeLabels(resultLabels[:0], resultLabels)
|
||||
}
|
||||
SortLabels(resultLabels)
|
||||
result := labelsToString(resultLabels)
|
||||
result := LabelsToString(resultLabels)
|
||||
if result != resultExpected {
|
||||
t.Fatalf("unexpected result; got\n%s\nwant\n%s", result, resultExpected)
|
||||
}
|
||||
|
@ -726,9 +802,9 @@ func TestApplyRelabelConfigs(t *testing.T) {
|
|||
func TestFinalizeLabels(t *testing.T) {
|
||||
f := func(metric, resultExpected string) {
|
||||
t.Helper()
|
||||
labels := promutils.NewLabelsFromString(metric)
|
||||
labels := promutils.MustNewLabelsFromString(metric)
|
||||
resultLabels := FinalizeLabels(nil, labels.GetLabels())
|
||||
result := labelsToString(resultLabels)
|
||||
result := LabelsToString(resultLabels)
|
||||
if result != resultExpected {
|
||||
t.Fatalf("unexpected result; got\n%s\nwant\n%s", result, resultExpected)
|
||||
}
|
||||
|
@ -742,7 +818,7 @@ func TestFinalizeLabels(t *testing.T) {
|
|||
func TestFillLabelReferences(t *testing.T) {
|
||||
f := func(replacement, metric, resultExpected string) {
|
||||
t.Helper()
|
||||
labels := promutils.NewLabelsFromString(metric)
|
||||
labels := promutils.MustNewLabelsFromString(metric)
|
||||
result := fillLabelReferences(nil, replacement, labels.GetLabels())
|
||||
if string(result) != resultExpected {
|
||||
t.Fatalf("unexpected result; got\n%q\nwant\n%q", result, resultExpected)
|
||||
|
|
|
@ -1155,7 +1155,7 @@ func BenchmarkApplyRelabelConfigs(b *testing.B) {
|
|||
}
|
||||
|
||||
func mustParseRelabelConfigs(config string) *ParsedConfigs {
|
||||
pcs, err := ParseRelabelConfigsData([]byte(config), false)
|
||||
pcs, err := ParseRelabelConfigsData([]byte(config))
|
||||
if err != nil {
|
||||
panic(fmt.Errorf("unexpected error: %w", err))
|
||||
}
|
||||
|
|
|
@ -267,8 +267,6 @@ type ScrapeConfig struct {
|
|||
YandexCloudSDConfigs []yandexcloud.SDConfig `yaml:"yandexcloud_sd_configs,omitempty"`
|
||||
|
||||
// These options are supported only by lib/promscrape.
|
||||
RelabelDebug bool `yaml:"relabel_debug,omitempty"`
|
||||
MetricRelabelDebug bool `yaml:"metric_relabel_debug,omitempty"`
|
||||
DisableCompression bool `yaml:"disable_compression,omitempty"`
|
||||
DisableKeepAlive bool `yaml:"disable_keepalive,omitempty"`
|
||||
StreamParse bool `yaml:"stream_parse,omitempty"`
|
||||
|
@@ -928,20 +926,14 @@ func getScrapeWorkConfig(sc *ScrapeConfig, baseDir string, globalCfg *GlobalConf
	if err != nil {
		return nil, fmt.Errorf("cannot parse proxy auth config for `job_name` %q: %w", jobName, err)
	}
	relabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.RelabelConfigs, sc.RelabelDebug)
	relabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.RelabelConfigs)
	if err != nil {
		return nil, fmt.Errorf("cannot parse `relabel_configs` for `job_name` %q: %w", jobName, err)
	}
	metricRelabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.MetricRelabelConfigs, sc.MetricRelabelDebug)
	metricRelabelConfigs, err := promrelabel.ParseRelabelConfigs(sc.MetricRelabelConfigs)
	if err != nil {
		return nil, fmt.Errorf("cannot parse `metric_relabel_configs` for `job_name` %q: %w", jobName, err)
	}
	if (*streamParse || sc.StreamParse) && sc.SampleLimit > 0 {
		return nil, fmt.Errorf("cannot use stream parsing mode when `sample_limit` is set for `job_name` %q", jobName)
	}
	if (*streamParse || sc.StreamParse) && sc.SeriesLimit > 0 {
		return nil, fmt.Errorf("cannot use stream parsing mode when `series_limit` is set for `job_name` %q", jobName)
	}
	externalLabels := globalCfg.ExternalLabels
	noStaleTracking := *noStaleMarkers
	if sc.NoStaleMarkers != nil {
@@ -1194,7 +1186,7 @@ func (swc *scrapeWorkConfig) getScrapeWork(target string, extraLabels, metaLabel
	}
	if labels.Len() == 0 {
		// Drop target without labels.
		droppedTargetsMap.Register(originalLabels)
		droppedTargetsMap.Register(originalLabels, swc.relabelConfigs)
		return nil, nil
	}
	// See https://www.robustperception.io/life-of-a-label
@@ -1209,7 +1201,7 @@ func (swc *scrapeWorkConfig) getScrapeWork(target string, extraLabels, metaLabel
	address := labels.Get("__address__")
	if len(address) == 0 {
		// Drop target without scrape address.
		droppedTargetsMap.Register(originalLabels)
		droppedTargetsMap.Register(originalLabels, swc.relabelConfigs)
		return nil, nil
	}
	// Usability extension to Prometheus behavior: extract optional scheme and metricsPath from __address__.
@@ -1320,6 +1312,7 @@ func (swc *scrapeWorkConfig) getScrapeWork(target string, extraLabels, metaLabel
		ProxyURL:             swc.proxyURL,
		ProxyAuthConfig:      swc.proxyAuthConfig,
		AuthConfig:           swc.authConfig,
		RelabelConfigs:       swc.relabelConfigs,
		MetricRelabelConfigs: swc.metricRelabelConfigs,
		SampleLimit:          swc.sampleLimit,
		DisableCompression:   swc.disableCompression,
@@ -104,7 +104,6 @@ scrape_configs:
  static_configs:
  - targets:
    - foo
  relabel_debug: true
  scrape_align_interval: 1h30m0s
  proxy_bearer_token_file: file.txt
  proxy_headers:
@@ -721,6 +720,7 @@ scrape_config_files:
func resetNonEssentialFields(sws []*ScrapeWork) {
	for _, sw := range sws {
		sw.OriginalLabels = nil
		sw.RelabelConfigs = nil
		sw.MetricRelabelConfigs = nil
	}
}
@@ -3,6 +3,7 @@ package azure
import (
	"encoding/json"
	"fmt"
	"net/url"
	"sync"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
@@ -61,24 +62,39 @@ type listAPIResponse struct {

// visitAllAPIObjects iterates over list API with pagination and applies cb for each response object
func visitAllAPIObjects(ac *apiConfig, apiURL string, cb func(data json.RawMessage) error) error {
	nextLink := apiURL
	for nextLink != "" {
		resp, err := ac.c.GetAPIResponseWithReqParams(nextLink, func(request *fasthttp.Request) {
	nextLinkURI := apiURL
	for {
		resp, err := ac.c.GetAPIResponseWithReqParams(nextLinkURI, func(request *fasthttp.Request) {
			request.Header.Set("Authorization", "Bearer "+ac.mustGetAuthToken())
		})
		if err != nil {
			return fmt.Errorf("cannot execute azure api request at %s: %w", nextLink, err)
			return fmt.Errorf("cannot execute azure api request at %s: %w", nextLinkURI, err)
		}
		var lar listAPIResponse
		if err := json.Unmarshal(resp, &lar); err != nil {
			return fmt.Errorf("cannot parse azure api response %q obtained from %s: %w", resp, nextLink, err)
			return fmt.Errorf("cannot parse azure api response %q obtained from %s: %w", resp, nextLinkURI, err)
		}
		for i := range lar.Value {
			if err := cb(lar.Value[i]); err != nil {
				return err
			}
		}
		nextLink = lar.NextLink

		// Azure API returns NextLink with apiServer in it, so we need to remove it.
		// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3247
		if lar.NextLink == "" {
			break
		}
		nextURL, err := url.Parse(lar.NextLink)
		if err != nil {
			return fmt.Errorf("cannot parse nextLink from response %q: %w", lar.NextLink, err)
		}

		if nextURL.Host != "" && nextURL.Host != ac.c.APIServer() {
			return fmt.Errorf("unexpected nextLink host %q, expecting %q", nextURL.Host, ac.c.APIServer())
		}

		nextLinkURI = nextURL.RequestURI()
	}
	return nil
}
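The fix for issue 3247 hinges on one detail: Azure may return an absolute NextLink that includes the API server host, while the underlying fasthttp.HostClient is already bound to that host, so only the request URI may be reused. A standalone sketch of that validation step (hypothetical helper name, using only the standard library):

	package main

	import (
		"fmt"
		"net/url"
	)

	// nextRequestURI validates an Azure-style NextLink against the expected API
	// server and returns the relative request URI to fetch the next page from.
	func nextRequestURI(nextLink, apiServer string) (string, error) {
		u, err := url.Parse(nextLink)
		if err != nil {
			return "", fmt.Errorf("cannot parse nextLink %q: %w", nextLink, err)
		}
		if u.Host != "" && u.Host != apiServer {
			return "", fmt.Errorf("unexpected nextLink host %q, expecting %q", u.Host, apiServer)
		}
		return u.RequestURI(), nil
	}

	func main() {
		uri, err := nextRequestURI("https://management.azure.com/subscriptions?skiptoken=abc", "management.azure.com")
		fmt.Println(uri, err) // /subscriptions?skiptoken=abc <nil>
	}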
@@ -240,6 +240,11 @@ func (c *Client) getAPIResponseWithParamsAndClient(client *fasthttp.HostClient,
	return data, nil
}

// APIServer returns the API server address
func (c *Client) APIServer() string {
	return c.apiServer
}

// DoRequestWithPossibleRetry performs the given req at hc and stores the response at resp.
func DoRequestWithPossibleRetry(hc *fasthttp.HostClient, req *fasthttp.Request, resp *fasthttp.Response, deadline time.Time, requestCounter, retryCounter *metrics.Counter) error {
	sleepTime := time.Second
141
lib/promscrape/relabel_debug.go
Normal file
@@ -0,0 +1,141 @@
package promscrape

import (
	"fmt"
	"net/http"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
)

// WriteMetricRelabelDebug serves requests to /metric-relabel-debug page
func WriteMetricRelabelDebug(w http.ResponseWriter, r *http.Request) {
	metric := r.FormValue("metric")
	relabelConfigs := r.FormValue("relabel_configs")

	if metric == "" {
		metric = "{}"
	}
	labels, err := promutils.NewLabelsFromString(metric)
	if err != nil {
		err = fmt.Errorf("cannot parse metric: %s", err)
		WriteMetricRelabelDebugSteps(w, nil, metric, relabelConfigs, err)
		return
	}
	pcs, err := promrelabel.ParseRelabelConfigsData([]byte(relabelConfigs))
	if err != nil {
		err = fmt.Errorf("cannot parse relabel configs: %s", err)
		WriteMetricRelabelDebugSteps(w, nil, metric, relabelConfigs, err)
		return
	}

	dss := newDebugRelabelSteps(pcs, labels, false)
	WriteMetricRelabelDebugSteps(w, dss, metric, relabelConfigs, nil)
}

// WriteTargetRelabelDebug generates response for /target-relabel-debug page
func WriteTargetRelabelDebug(w http.ResponseWriter, r *http.Request) {
	targetID := r.FormValue("id")
	metric := r.FormValue("metric")
	relabelConfigs := r.FormValue("relabel_configs")

	if metric == "" && relabelConfigs == "" {
		if targetID == "" {
			metric = "{}"
			WriteTargetRelabelDebugSteps(w, targetID, nil, metric, relabelConfigs, nil)
			return
		}
		pcs, labels, ok := getRelabelContextByTargetID(targetID)
		if !ok {
			err := fmt.Errorf("cannot find target for id=%s", targetID)
			targetID = ""
			WriteTargetRelabelDebugSteps(w, targetID, nil, metric, relabelConfigs, err)
			return
		}
		metric = labels.String()
		relabelConfigs = pcs.String()
		dss := newDebugRelabelSteps(pcs, labels, true)
		WriteTargetRelabelDebugSteps(w, targetID, dss, metric, relabelConfigs, nil)
		return
	}

	if metric == "" {
		metric = "{}"
	}
	labels, err := promutils.NewLabelsFromString(metric)
	if err != nil {
		err = fmt.Errorf("cannot parse metric: %s", err)
		WriteTargetRelabelDebugSteps(w, targetID, nil, metric, relabelConfigs, err)
		return
	}
	pcs, err := promrelabel.ParseRelabelConfigsData([]byte(relabelConfigs))
	if err != nil {
		err = fmt.Errorf("cannot parse relabel configs: %s", err)
		WriteTargetRelabelDebugSteps(w, targetID, nil, metric, relabelConfigs, err)
		return
	}
	dss := newDebugRelabelSteps(pcs, labels, true)
	WriteTargetRelabelDebugSteps(w, targetID, dss, metric, relabelConfigs, nil)
}

func newDebugRelabelSteps(pcs *promrelabel.ParsedConfigs, labels *promutils.Labels, isTargetRelabel bool) []promrelabel.DebugStep {
	// The target relabeling below must be in sync with the code at scrapeWorkConfig.getScrapeWork when isTargetRelabel=true
	// and with the code at scrapeWork.addRowToTimeseries when isTargetRelabel=false

	// Prevent from modifying the original labels
	labels = labels.Clone()

	// Apply relabeling
	labelsResult, dss := pcs.ApplyDebug(labels.GetLabels())
	labels.Labels = labelsResult
	outStr := promrelabel.LabelsToString(labels.GetLabels())

	// Add missing instance label
	if isTargetRelabel && labels.Get("instance") == "" {
		address := labels.Get("__address__")
		if address != "" {
			inStr := outStr
			labels.Add("instance", address)
			outStr = promrelabel.LabelsToString(labels.GetLabels())
			dss = append(dss, promrelabel.DebugStep{
				Rule: "add missing instance label from __address__ label",
				In:   inStr,
				Out:  outStr,
			})
		}
	}

	// Remove labels with __ prefix
	inStr := outStr
	labels.RemoveLabelsWithDoubleUnderscorePrefix()
	outStr = promrelabel.LabelsToString(labels.GetLabels())
	if inStr != outStr {
		dss = append(dss, promrelabel.DebugStep{
			Rule: "remove labels with __ prefix",
			In:   inStr,
			Out:  outStr,
		})
	}

	// There is no need in labels' sorting, since promrelabel.LabelsToString() automatically sorts labels.
	return dss
}

func getChangedLabelNames(in, out *promutils.Labels) map[string]struct{} {
	inMap := in.ToMap()
	outMap := out.ToMap()
	changed := make(map[string]struct{})
	for k, v := range outMap {
		inV, ok := inMap[k]
		if !ok || inV != v {
			changed[k] = struct{}{}
		}
	}
	for k, v := range inMap {
		outV, ok := outMap[k]
		if !ok || outV != v {
			changed[k] = struct{}{}
		}
	}
	return changed
}
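getChangedLabelNames computes the union of label names whose values differ between the input and output of a relabeling step; the debug pages use it to highlight deletions in red and additions/updates in blue. A small in-package sketch of the semantics (hypothetical demo code, not part of the commit; assumes imports of "fmt" and lib/promutils):

	func demoChangedLabelNames() {
		in := promutils.MustNewLabelsFromString(`cpu_usage{instance="host1",job="node"}`)
		out := promutils.MustNewLabelsFromString(`cpu_usage{instance="host2",env="prod",job="node"}`)
		// "instance" changed and "env" was added; "job" and "__name__" stayed intact,
		// so the printed set is {instance, env}, in map iteration order.
		for name := range getChangedLabelNames(in, out) {
			fmt.Println(name)
		}
	}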
178
lib/promscrape/relabel_debug.qtpl
Normal file
@@ -0,0 +1,178 @@
{% import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
) %}

{% stripspace %}

{% func MetricRelabelDebugSteps(dss []promrelabel.DebugStep, metric, relabelConfigs string, err error) %}
<!DOCTYPE html>
<html lang="en">
<head>
    {%= commonHeader() %}
    <title>Metric relabel debug</title>
</head>
<body>
    {%= navbar() %}
    <div class="container-fluid">
        <a href="https://docs.victoriametrics.com/relabeling.html" target="_blank">Relabeling docs</a>{% space %}
        <a href="target-relabel-debug">Target relabel debug</a>
        <br>
        {% if err != nil %}
            {%= errorNotification(err) %}
        {% endif %}

        <div class="m-3">
            <form method="POST">
                {%= relabelDebugFormInputs(metric, relabelConfigs) %}
                <input type="submit" value="Submit" class="btn btn-primary m-1" />
            </form>
        </div>

        <div class="row">
            <main class="col-12">
                {%= relabelDebugSteps(dss) %}
            </main>
        </div>
    </div>
</body>
</html>
{% endfunc %}

{% func TargetRelabelDebugSteps(targetID string, dss []promrelabel.DebugStep, metric, relabelConfigs string, err error) %}
<!DOCTYPE html>
<html lang="en">
<head>
    {%= commonHeader() %}
    <title>Target relabel debug</title>
</head>
<body>
    {%= navbar() %}
    <div class="container-fluid">
        <a href="https://docs.victoriametrics.com/relabeling.html" target="_blank">Relabeling docs</a>{% space %}
        <a href="metric-relabel-debug">Metric relabel debug</a>
        <br/>
        {% if err != nil %}
            {%= errorNotification(err) %}
        {% endif %}

        <div class="m-3">
            <form method="POST">
                {%= relabelDebugFormInputs(metric, relabelConfigs) %}
                <input type="hidden" name="id" value="{%s targetID %}" />
                <input type="submit" value="Submit" class="btn btn-primary m-1" />
                {% if targetID != "" %}
                    <button type="button" onclick="location.href='target-relabel-debug?id={%s targetID %}'" class="btn btn-secondary m-1">Reset</button>
                {% endif %}
            </form>
        </div>

        <div class="row">
            <main class="col-12">
                {%= relabelDebugSteps(dss) %}
            </main>
        </div>
    </div>
</body>
</html>
{% endfunc %}

{% func relabelDebugFormInputs(metric, relabelConfigs string) %}
<div>
    Relabel configs:<br/>
    <textarea name="relabel_configs" style="width: 100%; height: 15em" class="m-1">{%s relabelConfigs %}</textarea>
</div>

<div>
    Labels:<br/>
    <textarea name="metric" style="width: 100%; height: 5em" class="m-1">{%s metric %}</textarea>
</div>
{% endfunc %}

{% func relabelDebugSteps(dss []promrelabel.DebugStep) %}
{% if len(dss) > 0 %}
<div class="m-3">
    <b>Original labels:</b> <samp>{%= mustFormatLabels(dss[0].In) %}</samp>
</div>
{% endif %}
<table class="table table-striped table-hover table-bordered table-sm">
    <thead>
        <tr>
            <th scope="col" style="width: 5%">Step</th>
            <th scope="col" style="width: 25%">Relabeling Rule</th>
            <th scope="col" style="width: 35%">Input Labels</th>
            <th scope="col" style="width: 35%">Output labels</th>
        </tr>
    </thead>
    <tbody>
        {% for i, ds := range dss %}
        {% code
            inLabels := promutils.MustNewLabelsFromString(ds.In)
            outLabels := promutils.MustNewLabelsFromString(ds.Out)
            changedLabels := getChangedLabelNames(inLabels, outLabels)
        %}
        <tr>
            <td>{%d i %}</td>
            <td><b><pre class="m-2">{%s ds.Rule %}</pre></b></td>
            <td>
                <div class="m-2" style="font-size: 0.9em" title="deleted and updated labels highlighted in red">
                    {%= labelsWithHighlight(inLabels, changedLabels, "red") %}
                </div>
            </td>
            <td>
                <div class="m-2" style="font-size: 0.9em" title="added and updated labels highlighted in blue">
                    {%= labelsWithHighlight(outLabels, changedLabels, "blue") %}
                </div>
            </td>
        </tr>
        {% endfor %}
    </tbody>
</table>
{% if len(dss) > 0 %}
<div class="m-3">
    <b>Resulting labels:</b> <samp>{%= mustFormatLabels(dss[len(dss)-1].Out) %}</samp>
</div>
{% endif %}
{% endfunc %}

{% func labelsWithHighlight(labels *promutils.Labels, highlight map[string]struct{}, color string) %}
{% code
    labelsList := labels.GetLabels()
    metricName := ""
    for i, label := range labelsList {
        if label.Name == "__name__" {
            metricName = label.Value
            labelsList = append(labelsList[:i], labelsList[i+1:]...)
            break
        }
    }
%}
{% if metricName != "" %}
    {% if _, ok := highlight["__name__"]; ok %}
        <span style="font-weight:bold;color:{%s color %}">{%s metricName %}</span>
    {% else %}
        {%s metricName %}
    {% endif %}
    {% if len(labelsList) == 0 %}{% return %}{% endif %}
{% endif %}
{
    {% for i, label := range labelsList %}
        {% if _, ok := highlight[label.Name]; ok %}
            <span style="font-weight:bold;color:{%s color %}">{%s label.Name %}={%q label.Value %}</span>
        {% else %}
            {%s label.Name %}={%q label.Value %}
        {% endif %}
        {% if i < len(labelsList)-1 %},{% space %}{% endif %}
    {% endfor %}
}
{% endfunc %}

{% func mustFormatLabels(s string) %}
{% code labels := promutils.MustNewLabelsFromString(s) %}
{%= labelsWithHighlight(labels, nil, "") %}
{% endfunc %}

{% endstripspace %}
433
lib/promscrape/relabel_debug.qtpl.go
Normal file
@@ -0,0 +1,433 @@
// Code generated by qtc from "relabel_debug.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.

//line lib/promscrape/relabel_debug.qtpl:1
package promscrape

//line lib/promscrape/relabel_debug.qtpl:1
import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
)

//line lib/promscrape/relabel_debug.qtpl:8
import (
	qtio422016 "io"

	qt422016 "github.com/valyala/quicktemplate"
)

//line lib/promscrape/relabel_debug.qtpl:8
var (
	_ = qtio422016.Copy
	_ = qt422016.AcquireByteBuffer
)

//line lib/promscrape/relabel_debug.qtpl:8
func StreamMetricRelabelDebugSteps(qw422016 *qt422016.Writer, dss []promrelabel.DebugStep, metric, relabelConfigs string, err error) {
//line lib/promscrape/relabel_debug.qtpl:8
	qw422016.N().S(`<!DOCTYPE html><html lang="en"><head>`)
//line lib/promscrape/relabel_debug.qtpl:12
	streamcommonHeader(qw422016)
//line lib/promscrape/relabel_debug.qtpl:12
	qw422016.N().S(`<title>Metric relabel debug</title></head><body>`)
//line lib/promscrape/relabel_debug.qtpl:16
	streamnavbar(qw422016)
//line lib/promscrape/relabel_debug.qtpl:16
	qw422016.N().S(`<div class="container-fluid"><a href="https://docs.victoriametrics.com/relabeling.html" target="_blank">Relabeling docs</a>`)
//line lib/promscrape/relabel_debug.qtpl:18
	qw422016.N().S(` `)
//line lib/promscrape/relabel_debug.qtpl:18
	qw422016.N().S(`<a href="target-relabel-debug">Target relabel debug</a><br>`)
//line lib/promscrape/relabel_debug.qtpl:21
	if err != nil {
//line lib/promscrape/relabel_debug.qtpl:22
		streamerrorNotification(qw422016, err)
//line lib/promscrape/relabel_debug.qtpl:23
	}
//line lib/promscrape/relabel_debug.qtpl:23
	qw422016.N().S(`<div class="m-3"><form method="POST">`)
//line lib/promscrape/relabel_debug.qtpl:27
	streamrelabelDebugFormInputs(qw422016, metric, relabelConfigs)
//line lib/promscrape/relabel_debug.qtpl:27
	qw422016.N().S(`<input type="submit" value="Submit" class="btn btn-primary m-1" /></form></div><div class="row"><main class="col-12">`)
//line lib/promscrape/relabel_debug.qtpl:35
	streamrelabelDebugSteps(qw422016, dss)
//line lib/promscrape/relabel_debug.qtpl:35
	qw422016.N().S(`</main></div></div></body></html>`)
//line lib/promscrape/relabel_debug.qtpl:41
}

//line lib/promscrape/relabel_debug.qtpl:41
func WriteMetricRelabelDebugSteps(qq422016 qtio422016.Writer, dss []promrelabel.DebugStep, metric, relabelConfigs string, err error) {
//line lib/promscrape/relabel_debug.qtpl:41
	qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/relabel_debug.qtpl:41
	StreamMetricRelabelDebugSteps(qw422016, dss, metric, relabelConfigs, err)
//line lib/promscrape/relabel_debug.qtpl:41
	qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/relabel_debug.qtpl:41
}

//line lib/promscrape/relabel_debug.qtpl:41
func MetricRelabelDebugSteps(dss []promrelabel.DebugStep, metric, relabelConfigs string, err error) string {
//line lib/promscrape/relabel_debug.qtpl:41
	qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/relabel_debug.qtpl:41
	WriteMetricRelabelDebugSteps(qb422016, dss, metric, relabelConfigs, err)
//line lib/promscrape/relabel_debug.qtpl:41
	qs422016 := string(qb422016.B)
//line lib/promscrape/relabel_debug.qtpl:41
	qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/relabel_debug.qtpl:41
	return qs422016
//line lib/promscrape/relabel_debug.qtpl:41
}

//line lib/promscrape/relabel_debug.qtpl:43
func StreamTargetRelabelDebugSteps(qw422016 *qt422016.Writer, targetID string, dss []promrelabel.DebugStep, metric, relabelConfigs string, err error) {
//line lib/promscrape/relabel_debug.qtpl:43
	qw422016.N().S(`<!DOCTYPE html><html lang="en"><head>`)
//line lib/promscrape/relabel_debug.qtpl:47
	streamcommonHeader(qw422016)
//line lib/promscrape/relabel_debug.qtpl:47
	qw422016.N().S(`<title>Target relabel debug</title></head><body>`)
//line lib/promscrape/relabel_debug.qtpl:51
	streamnavbar(qw422016)
//line lib/promscrape/relabel_debug.qtpl:51
	qw422016.N().S(`<div class="container-fluid"><a href="https://docs.victoriametrics.com/relabeling.html" target="_blank">Relabeling docs</a>`)
//line lib/promscrape/relabel_debug.qtpl:53
	qw422016.N().S(` `)
//line lib/promscrape/relabel_debug.qtpl:53
	qw422016.N().S(`<a href="metric-relabel-debug">Metric relabel debug</a><br/>`)
//line lib/promscrape/relabel_debug.qtpl:56
	if err != nil {
//line lib/promscrape/relabel_debug.qtpl:57
		streamerrorNotification(qw422016, err)
//line lib/promscrape/relabel_debug.qtpl:58
	}
//line lib/promscrape/relabel_debug.qtpl:58
	qw422016.N().S(`<div class="m-3"><form method="POST">`)
//line lib/promscrape/relabel_debug.qtpl:62
	streamrelabelDebugFormInputs(qw422016, metric, relabelConfigs)
//line lib/promscrape/relabel_debug.qtpl:62
	qw422016.N().S(`<input type="hidden" name="id" value="`)
//line lib/promscrape/relabel_debug.qtpl:64
	qw422016.E().S(targetID)
//line lib/promscrape/relabel_debug.qtpl:64
	qw422016.N().S(`" /><input type="submit" value="Submit" class="btn btn-primary m-1" />`)
//line lib/promscrape/relabel_debug.qtpl:67
	if targetID != "" {
//line lib/promscrape/relabel_debug.qtpl:67
		qw422016.N().S(`<button type="button" onclick="location.href='target-relabel-debug?id=`)
//line lib/promscrape/relabel_debug.qtpl:68
		qw422016.E().S(targetID)
//line lib/promscrape/relabel_debug.qtpl:68
		qw422016.N().S(`'" class="btn btn-secondary m-1">Reset</button>`)
//line lib/promscrape/relabel_debug.qtpl:69
	}
//line lib/promscrape/relabel_debug.qtpl:69
	qw422016.N().S(`</form></div><div class="row"><main class="col-12">`)
//line lib/promscrape/relabel_debug.qtpl:75
	streamrelabelDebugSteps(qw422016, dss)
//line lib/promscrape/relabel_debug.qtpl:75
	qw422016.N().S(`</main></div></div></body></html>`)
//line lib/promscrape/relabel_debug.qtpl:81
}

//line lib/promscrape/relabel_debug.qtpl:81
func WriteTargetRelabelDebugSteps(qq422016 qtio422016.Writer, targetID string, dss []promrelabel.DebugStep, metric, relabelConfigs string, err error) {
//line lib/promscrape/relabel_debug.qtpl:81
	qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/relabel_debug.qtpl:81
	StreamTargetRelabelDebugSteps(qw422016, targetID, dss, metric, relabelConfigs, err)
//line lib/promscrape/relabel_debug.qtpl:81
	qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/relabel_debug.qtpl:81
}

//line lib/promscrape/relabel_debug.qtpl:81
func TargetRelabelDebugSteps(targetID string, dss []promrelabel.DebugStep, metric, relabelConfigs string, err error) string {
//line lib/promscrape/relabel_debug.qtpl:81
	qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/relabel_debug.qtpl:81
	WriteTargetRelabelDebugSteps(qb422016, targetID, dss, metric, relabelConfigs, err)
//line lib/promscrape/relabel_debug.qtpl:81
	qs422016 := string(qb422016.B)
//line lib/promscrape/relabel_debug.qtpl:81
	qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/relabel_debug.qtpl:81
	return qs422016
//line lib/promscrape/relabel_debug.qtpl:81
}

//line lib/promscrape/relabel_debug.qtpl:83
func streamrelabelDebugFormInputs(qw422016 *qt422016.Writer, metric, relabelConfigs string) {
//line lib/promscrape/relabel_debug.qtpl:83
	qw422016.N().S(`<div>Relabel configs:<br/><textarea name="relabel_configs" style="width: 100%; height: 15em" class="m-1">`)
//line lib/promscrape/relabel_debug.qtpl:86
	qw422016.E().S(relabelConfigs)
//line lib/promscrape/relabel_debug.qtpl:86
	qw422016.N().S(`</textarea></div><div>Labels:<br/><textarea name="metric" style="width: 100%; height: 5em" class="m-1">`)
//line lib/promscrape/relabel_debug.qtpl:91
	qw422016.E().S(metric)
//line lib/promscrape/relabel_debug.qtpl:91
	qw422016.N().S(`</textarea></div>`)
//line lib/promscrape/relabel_debug.qtpl:93
}

//line lib/promscrape/relabel_debug.qtpl:93
func writerelabelDebugFormInputs(qq422016 qtio422016.Writer, metric, relabelConfigs string) {
//line lib/promscrape/relabel_debug.qtpl:93
	qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/relabel_debug.qtpl:93
	streamrelabelDebugFormInputs(qw422016, metric, relabelConfigs)
//line lib/promscrape/relabel_debug.qtpl:93
	qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/relabel_debug.qtpl:93
}

//line lib/promscrape/relabel_debug.qtpl:93
func relabelDebugFormInputs(metric, relabelConfigs string) string {
//line lib/promscrape/relabel_debug.qtpl:93
	qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/relabel_debug.qtpl:93
	writerelabelDebugFormInputs(qb422016, metric, relabelConfigs)
//line lib/promscrape/relabel_debug.qtpl:93
	qs422016 := string(qb422016.B)
//line lib/promscrape/relabel_debug.qtpl:93
	qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/relabel_debug.qtpl:93
	return qs422016
//line lib/promscrape/relabel_debug.qtpl:93
}

//line lib/promscrape/relabel_debug.qtpl:95
func streamrelabelDebugSteps(qw422016 *qt422016.Writer, dss []promrelabel.DebugStep) {
//line lib/promscrape/relabel_debug.qtpl:96
	if len(dss) > 0 {
//line lib/promscrape/relabel_debug.qtpl:96
		qw422016.N().S(`<div class="m-3"><b>Original labels:</b> <samp>`)
//line lib/promscrape/relabel_debug.qtpl:98
		streammustFormatLabels(qw422016, dss[0].In)
//line lib/promscrape/relabel_debug.qtpl:98
		qw422016.N().S(`</samp></div>`)
//line lib/promscrape/relabel_debug.qtpl:100
	}
//line lib/promscrape/relabel_debug.qtpl:100
	qw422016.N().S(`<table class="table table-striped table-hover table-bordered table-sm"><thead><tr><th scope="col" style="width: 5%">Step</th><th scope="col" style="width: 25%">Relabeling Rule</th><th scope="col" style="width: 35%">Input Labels</th><th scope="col" style="width: 35%">Output labels</th></tr></thead><tbody>`)
//line lib/promscrape/relabel_debug.qtpl:111
	for i, ds := range dss {
//line lib/promscrape/relabel_debug.qtpl:113
		inLabels := promutils.MustNewLabelsFromString(ds.In)
		outLabels := promutils.MustNewLabelsFromString(ds.Out)
		changedLabels := getChangedLabelNames(inLabels, outLabels)

//line lib/promscrape/relabel_debug.qtpl:116
		qw422016.N().S(`<tr><td>`)
//line lib/promscrape/relabel_debug.qtpl:118
		qw422016.N().D(i)
//line lib/promscrape/relabel_debug.qtpl:118
		qw422016.N().S(`</td><td><b><pre class="m-2">`)
//line lib/promscrape/relabel_debug.qtpl:119
		qw422016.E().S(ds.Rule)
//line lib/promscrape/relabel_debug.qtpl:119
		qw422016.N().S(`</pre></b></td><td><div class="m-2" style="font-size: 0.9em" title="deleted and updated labels highlighted in red">`)
//line lib/promscrape/relabel_debug.qtpl:122
		streamlabelsWithHighlight(qw422016, inLabels, changedLabels, "red")
//line lib/promscrape/relabel_debug.qtpl:122
		qw422016.N().S(`</div></td><td><div class="m-2" style="font-size: 0.9em" title="added and updated labels highlighted in blue">`)
//line lib/promscrape/relabel_debug.qtpl:127
		streamlabelsWithHighlight(qw422016, outLabels, changedLabels, "blue")
//line lib/promscrape/relabel_debug.qtpl:127
		qw422016.N().S(`</div></td></tr>`)
//line lib/promscrape/relabel_debug.qtpl:131
	}
//line lib/promscrape/relabel_debug.qtpl:131
	qw422016.N().S(`</tbody></table>`)
//line lib/promscrape/relabel_debug.qtpl:134
	if len(dss) > 0 {
//line lib/promscrape/relabel_debug.qtpl:134
		qw422016.N().S(`<div class="m-3"><b>Resulting labels:</b> <samp>`)
//line lib/promscrape/relabel_debug.qtpl:136
		streammustFormatLabels(qw422016, dss[len(dss)-1].Out)
//line lib/promscrape/relabel_debug.qtpl:136
		qw422016.N().S(`</samp></div>`)
//line lib/promscrape/relabel_debug.qtpl:138
	}
//line lib/promscrape/relabel_debug.qtpl:139
}

//line lib/promscrape/relabel_debug.qtpl:139
func writerelabelDebugSteps(qq422016 qtio422016.Writer, dss []promrelabel.DebugStep) {
//line lib/promscrape/relabel_debug.qtpl:139
	qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/relabel_debug.qtpl:139
	streamrelabelDebugSteps(qw422016, dss)
//line lib/promscrape/relabel_debug.qtpl:139
	qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/relabel_debug.qtpl:139
}

//line lib/promscrape/relabel_debug.qtpl:139
func relabelDebugSteps(dss []promrelabel.DebugStep) string {
//line lib/promscrape/relabel_debug.qtpl:139
	qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/relabel_debug.qtpl:139
	writerelabelDebugSteps(qb422016, dss)
//line lib/promscrape/relabel_debug.qtpl:139
	qs422016 := string(qb422016.B)
//line lib/promscrape/relabel_debug.qtpl:139
	qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/relabel_debug.qtpl:139
	return qs422016
//line lib/promscrape/relabel_debug.qtpl:139
}

//line lib/promscrape/relabel_debug.qtpl:141
func streamlabelsWithHighlight(qw422016 *qt422016.Writer, labels *promutils.Labels, highlight map[string]struct{}, color string) {
//line lib/promscrape/relabel_debug.qtpl:143
	labelsList := labels.GetLabels()
	metricName := ""
	for i, label := range labelsList {
		if label.Name == "__name__" {
			metricName = label.Value
			labelsList = append(labelsList[:i], labelsList[i+1:]...)
			break
		}
	}

//line lib/promscrape/relabel_debug.qtpl:153
	if metricName != "" {
//line lib/promscrape/relabel_debug.qtpl:154
		if _, ok := highlight["__name__"]; ok {
//line lib/promscrape/relabel_debug.qtpl:154
			qw422016.N().S(`<span style="font-weight:bold;color:`)
//line lib/promscrape/relabel_debug.qtpl:155
			qw422016.E().S(color)
//line lib/promscrape/relabel_debug.qtpl:155
			qw422016.N().S(`">`)
//line lib/promscrape/relabel_debug.qtpl:155
			qw422016.E().S(metricName)
//line lib/promscrape/relabel_debug.qtpl:155
			qw422016.N().S(`</span>`)
//line lib/promscrape/relabel_debug.qtpl:156
		} else {
//line lib/promscrape/relabel_debug.qtpl:157
			qw422016.E().S(metricName)
//line lib/promscrape/relabel_debug.qtpl:158
		}
//line lib/promscrape/relabel_debug.qtpl:159
		if len(labelsList) == 0 {
//line lib/promscrape/relabel_debug.qtpl:159
			return
//line lib/promscrape/relabel_debug.qtpl:159
		}
//line lib/promscrape/relabel_debug.qtpl:160
	}
//line lib/promscrape/relabel_debug.qtpl:160
	qw422016.N().S(`{`)
//line lib/promscrape/relabel_debug.qtpl:162
	for i, label := range labelsList {
//line lib/promscrape/relabel_debug.qtpl:163
		if _, ok := highlight[label.Name]; ok {
//line lib/promscrape/relabel_debug.qtpl:163
			qw422016.N().S(`<span style="font-weight:bold;color:`)
//line lib/promscrape/relabel_debug.qtpl:164
			qw422016.E().S(color)
//line lib/promscrape/relabel_debug.qtpl:164
			qw422016.N().S(`">`)
//line lib/promscrape/relabel_debug.qtpl:164
			qw422016.E().S(label.Name)
//line lib/promscrape/relabel_debug.qtpl:164
			qw422016.N().S(`=`)
//line lib/promscrape/relabel_debug.qtpl:164
			qw422016.E().Q(label.Value)
//line lib/promscrape/relabel_debug.qtpl:164
			qw422016.N().S(`</span>`)
//line lib/promscrape/relabel_debug.qtpl:165
		} else {
//line lib/promscrape/relabel_debug.qtpl:166
			qw422016.E().S(label.Name)
//line lib/promscrape/relabel_debug.qtpl:166
			qw422016.N().S(`=`)
//line lib/promscrape/relabel_debug.qtpl:166
			qw422016.E().Q(label.Value)
//line lib/promscrape/relabel_debug.qtpl:167
		}
//line lib/promscrape/relabel_debug.qtpl:168
		if i < len(labelsList)-1 {
//line lib/promscrape/relabel_debug.qtpl:168
			qw422016.N().S(`,`)
//line lib/promscrape/relabel_debug.qtpl:168
			qw422016.N().S(` `)
//line lib/promscrape/relabel_debug.qtpl:168
		}
//line lib/promscrape/relabel_debug.qtpl:169
	}
//line lib/promscrape/relabel_debug.qtpl:169
	qw422016.N().S(`}`)
//line lib/promscrape/relabel_debug.qtpl:171
}

//line lib/promscrape/relabel_debug.qtpl:171
func writelabelsWithHighlight(qq422016 qtio422016.Writer, labels *promutils.Labels, highlight map[string]struct{}, color string) {
//line lib/promscrape/relabel_debug.qtpl:171
	qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/relabel_debug.qtpl:171
	streamlabelsWithHighlight(qw422016, labels, highlight, color)
//line lib/promscrape/relabel_debug.qtpl:171
	qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/relabel_debug.qtpl:171
}

//line lib/promscrape/relabel_debug.qtpl:171
func labelsWithHighlight(labels *promutils.Labels, highlight map[string]struct{}, color string) string {
//line lib/promscrape/relabel_debug.qtpl:171
	qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/relabel_debug.qtpl:171
	writelabelsWithHighlight(qb422016, labels, highlight, color)
//line lib/promscrape/relabel_debug.qtpl:171
	qs422016 := string(qb422016.B)
//line lib/promscrape/relabel_debug.qtpl:171
	qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/relabel_debug.qtpl:171
	return qs422016
//line lib/promscrape/relabel_debug.qtpl:171
}

//line lib/promscrape/relabel_debug.qtpl:173
func streammustFormatLabels(qw422016 *qt422016.Writer, s string) {
//line lib/promscrape/relabel_debug.qtpl:174
	labels := promutils.MustNewLabelsFromString(s)

//line lib/promscrape/relabel_debug.qtpl:175
	streamlabelsWithHighlight(qw422016, labels, nil, "")
//line lib/promscrape/relabel_debug.qtpl:176
}

//line lib/promscrape/relabel_debug.qtpl:176
func writemustFormatLabels(qq422016 qtio422016.Writer, s string) {
//line lib/promscrape/relabel_debug.qtpl:176
	qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/relabel_debug.qtpl:176
	streammustFormatLabels(qw422016, s)
//line lib/promscrape/relabel_debug.qtpl:176
	qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/relabel_debug.qtpl:176
}

//line lib/promscrape/relabel_debug.qtpl:176
func mustFormatLabels(s string) string {
//line lib/promscrape/relabel_debug.qtpl:176
	qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/relabel_debug.qtpl:176
	writemustFormatLabels(qb422016, s)
//line lib/promscrape/relabel_debug.qtpl:176
	qs422016 := string(qb422016.B)
//line lib/promscrape/relabel_debug.qtpl:176
	qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/relabel_debug.qtpl:176
	return qs422016
//line lib/promscrape/relabel_debug.qtpl:176
}
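Every qtpl function compiles to the same triple visible above: a Stream* function that renders into a *quicktemplate.Writer, a Write* wrapper that acquires and releases a pooled writer around an io.Writer, and a plain function returning a string via a pooled byte buffer. A minimal hand-written sketch of that pattern (illustrative only, mirroring the generated shape):

	package main

	import (
		"io"
		"os"

		qt "github.com/valyala/quicktemplate"
	)

	// streamHello plays the role of a Stream* function: it renders into a
	// quicktemplate writer, HTML-escaping interpolated values like {%s name %} does.
	func streamHello(qw *qt.Writer, name string) {
		qw.N().S("Hello, ")
		qw.E().S(name)
	}

	// writeHello mirrors the generated Write* wrapper: acquire a pooled writer,
	// delegate to the stream function, then release the writer.
	func writeHello(w io.Writer, name string) {
		qw := qt.AcquireWriter(w)
		streamHello(qw, name)
		qt.ReleaseWriter(qw)
	}

	func main() {
		writeHello(os.Stdout, "<world>") // prints: Hello, &lt;world&gt;
	}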
@@ -363,9 +363,9 @@ func (sg *scraperGroup) update(sws []*ScrapeWork) {
				"make sure service discovery and relabeling is set up properly; "+
				"see also https://docs.victoriametrics.com/vmagent.html#troubleshooting; "+
				"original labels for target1: %s; original labels for target2: %s",
				sw.ScrapeURL, sw.LabelsString(), originalLabels.String(), sw.OriginalLabels.String())
				sw.ScrapeURL, sw.Labels.String(), originalLabels.String(), sw.OriginalLabels.String())
		}
		droppedTargetsMap.Register(sw.OriginalLabels)
		droppedTargetsMap.Register(sw.OriginalLabels, sw.RelabelConfigs)
		continue
	}
	swsMap[key] = sw.OriginalLabels
@@ -101,6 +101,9 @@ type ScrapeWork struct {
	// Auth config
	AuthConfig *promauth.Config

	// Optional `relabel_configs`.
	RelabelConfigs *promrelabel.ParsedConfigs

	// Optional `metric_relabel_configs`.
	MetricRelabelConfigs *promrelabel.ParsedConfigs
@@ -147,15 +150,17 @@ func (sw *ScrapeWork) canSwitchToStreamParseMode() bool {
// It can be used for comparing for equality for two ScrapeWork objects.
func (sw *ScrapeWork) key() string {
	// Do not take into account OriginalLabels, since they can be changed with relabeling.
	// Do not take into account RelabelConfigs, since it is already applied to Labels.
	// Take into account JobNameOriginal in order to capture the case when the original job_name is changed via relabeling.
	key := fmt.Sprintf("JobNameOriginal=%s, ScrapeURL=%s, ScrapeInterval=%s, ScrapeTimeout=%s, HonorLabels=%v, HonorTimestamps=%v, DenyRedirects=%v, Labels=%s, "+
		"ExternalLabels=%s, "+
		"ProxyURL=%s, ProxyAuthConfig=%s, AuthConfig=%s, MetricRelabelConfigs=%s, SampleLimit=%d, DisableCompression=%v, DisableKeepAlive=%v, StreamParse=%v, "+
		"ProxyURL=%s, ProxyAuthConfig=%s, AuthConfig=%s, MetricRelabelConfigs=%q, "+
		"SampleLimit=%d, DisableCompression=%v, DisableKeepAlive=%v, StreamParse=%v, "+
		"ScrapeAlignInterval=%s, ScrapeOffset=%s, SeriesLimit=%d, NoStaleMarkers=%v",
		sw.jobNameOriginal, sw.ScrapeURL, sw.ScrapeInterval, sw.ScrapeTimeout, sw.HonorLabels, sw.HonorTimestamps, sw.DenyRedirects, sw.LabelsString(),
		sw.jobNameOriginal, sw.ScrapeURL, sw.ScrapeInterval, sw.ScrapeTimeout, sw.HonorLabels, sw.HonorTimestamps, sw.DenyRedirects, sw.Labels.String(),
		sw.ExternalLabels.String(),
		sw.ProxyURL.String(), sw.ProxyAuthConfig.String(),
		sw.AuthConfig.String(), sw.MetricRelabelConfigs.String(), sw.SampleLimit, sw.DisableCompression, sw.DisableKeepAlive, sw.StreamParse,
		sw.ProxyURL.String(), sw.ProxyAuthConfig.String(), sw.AuthConfig.String(), sw.MetricRelabelConfigs.String(),
		sw.SampleLimit, sw.DisableCompression, sw.DisableKeepAlive, sw.StreamParse,
		sw.ScrapeAlignInterval, sw.ScrapeOffset, sw.SeriesLimit, sw.NoStaleMarkers)
	return key
}
@@ -165,11 +170,6 @@ func (sw *ScrapeWork) Job() string {
	return sw.Labels.Get("job")
}

// LabelsString returns labels in Prometheus format for the given sw.
func (sw *ScrapeWork) LabelsString() string {
	return sw.Labels.String()
}

type scrapeWork struct {
	// Config for the scrape.
	Config *ScrapeWork
@@ -281,7 +281,7 @@ func (sw *scrapeWork) run(stopCh <-chan struct{}, globalStopCh <-chan struct{})
	// scrapes replicated targets at different time offsets. This guarantees that the deduplication consistently leaves samples
	// received from the same vmagent replica.
	// See https://docs.victoriametrics.com/vmagent.html#scraping-big-number-of-targets
	key := fmt.Sprintf("clusterName=%s, clusterMemberID=%d, ScrapeURL=%s, Labels=%s", *clusterName, clusterMemberID, sw.Config.ScrapeURL, sw.Config.LabelsString())
	key := fmt.Sprintf("clusterName=%s, clusterMemberID=%d, ScrapeURL=%s, Labels=%s", *clusterName, clusterMemberID, sw.Config.ScrapeURL, sw.Config.Labels.String())
	h := xxhash.Sum64(bytesutil.ToUnsafeBytes(key))
	randSleep = uint64(float64(scrapeInterval) * (float64(h) / (1 << 64)))
	sleepOffset := uint64(time.Now().UnixNano()) % uint64(scrapeInterval)
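The randSleep computation derives a stable per-target offset from a hash of the cluster identity and the target key, so replicated vmagents scrape a given target at fixed mutual offsets instead of random ones. A hedged standalone sketch of just the arithmetic (hypothetical helper, same mapping as above):

	package main

	import (
		"fmt"
		"time"

		"github.com/cespare/xxhash/v2"
	)

	// scrapeOffset maps a 64-bit hash of the key deterministically onto
	// [0, scrapeInterval), so the same key always yields the same offset.
	func scrapeOffset(key string, scrapeInterval time.Duration) time.Duration {
		h := xxhash.Sum64String(key)
		return time.Duration(float64(scrapeInterval) * (float64(h) / (1 << 64)))
	}

	func main() {
		d := scrapeOffset("clusterName=c1, clusterMemberID=0, ScrapeURL=http://host/metrics", 30*time.Second)
		fmt.Println(d) // identical value on every run for the same key
	}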
@@ -348,7 +348,7 @@ func (sw *scrapeWork) logError(s string) {
	if !*suppressScrapeErrors {
		logger.ErrorfSkipframes(1, "error when scraping %q from job %q with labels %s: %s; "+
			"scrape errors can be disabled by -promscrape.suppressScrapeErrors command-line flag",
			sw.Config.ScrapeURL, sw.Config.Job(), sw.Config.LabelsString(), s)
			sw.Config.ScrapeURL, sw.Config.Job(), sw.Config.Labels.String(), s)
	}
}
@@ -362,7 +362,7 @@ func (sw *scrapeWork) scrapeAndLogError(scrapeTimestamp, realTimestamp int64) {
		sw.errsSuppressedCount++
		return
	}
	err = fmt.Errorf("cannot scrape %q (job %q, labels %s): %w", sw.Config.ScrapeURL, sw.Config.Job(), sw.Config.LabelsString(), err)
	err = fmt.Errorf("cannot scrape %q (job %q, labels %s): %w", sw.Config.ScrapeURL, sw.Config.Job(), sw.Config.Labels.String(), err)
	if sw.errsSuppressedCount > 0 {
		err = fmt.Errorf("%w; %d similar errors suppressed during the last %.1f seconds", err, sw.errsSuppressedCount, d.Seconds())
	}
@@ -543,6 +543,10 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
	// Do not pool sbr and do not pre-allocate sbr.body in order to reduce memory usage when scraping big responses.
	var sbr streamBodyReader

	lastScrape := sw.loadLastScrape()
	bodyString := ""
	areIdenticalSeries := true
	samplesDropped := 0
	sr, err := sw.GetStreamReader()
	if err != nil {
		err = fmt.Errorf("cannot read data: %s", err)
@@ -550,6 +554,8 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
	var mu sync.Mutex
	err = sbr.Init(sr)
	if err == nil {
		bodyString = bytesutil.ToUnsafeString(sbr.body)
		areIdenticalSeries = sw.Config.NoStaleMarkers || parser.AreIdenticalSeriesFast(lastScrape, bodyString)
		err = parser.ParseStream(&sbr, scrapeTimestamp, false, func(rows []parser.Row) error {
			mu.Lock()
			defer mu.Unlock()
@@ -557,9 +563,6 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
			for i := range rows {
				sw.addRowToTimeseries(wc, &rows[i], scrapeTimestamp, true)
			}
			// Push the collected rows to sw before returning from the callback, since they cannot be held
			// after returning from the callback - this will result in data race.
			// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-723198247
			samplesPostRelabeling += len(wc.writeRequest.Timeseries)
			if sw.Config.SampleLimit > 0 && samplesPostRelabeling > sw.Config.SampleLimit {
				wc.resetNoRows()
@@ -567,6 +570,15 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
				return fmt.Errorf("the response from %q exceeds sample_limit=%d; "+
					"either reduce the sample count for the target or increase sample_limit", sw.Config.ScrapeURL, sw.Config.SampleLimit)
			}
			if sw.seriesLimitExceeded || !areIdenticalSeries {
				samplesDropped += sw.applySeriesLimit(wc)
				if samplesDropped > 0 && !sw.seriesLimitExceeded {
					sw.seriesLimitExceeded = true
				}
			}
			// Push the collected rows to sw before returning from the callback, since they cannot be held
			// after returning from the callback - this will result in data race.
			// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-723198247
			sw.pushData(sw.Config.AuthToken, &wc.writeRequest)
			wc.resetNoRows()
			return nil
@@ -574,9 +586,6 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
		}
		sr.MustClose()
	}
	lastScrape := sw.loadLastScrape()
	bodyString := bytesutil.ToUnsafeString(sbr.body)
	areIdenticalSeries := sw.Config.NoStaleMarkers || parser.AreIdenticalSeriesFast(lastScrape, bodyString)

	scrapedSamples.Update(float64(samplesScraped))
	endTimestamp := time.Now().UnixNano() / 1e6
@@ -598,11 +607,12 @@ func (sw *scrapeWork) scrapeStream(scrapeTimestamp, realTimestamp int64) error {
		seriesAdded = sw.getSeriesAdded(lastScrape, bodyString)
	}
	am := &autoMetrics{
		up:                    up,
		scrapeDurationSeconds: duration,
		samplesScraped:        samplesScraped,
		samplesPostRelabeling: samplesPostRelabeling,
		seriesAdded:           seriesAdded,
		up:                        up,
		scrapeDurationSeconds:     duration,
		samplesScraped:            samplesScraped,
		samplesPostRelabeling:     samplesPostRelabeling,
		seriesAdded:               seriesAdded,
		seriesLimitSamplesDropped: samplesDropped,
	}
	sw.addAutoMetrics(am, wc, scrapeTimestamp)
	sw.pushData(sw.Config.AuthToken, &wc.writeRequest)
@@ -40,8 +40,8 @@ func TestIsAutoMetric(t *testing.T) {
func TestAppendExtraLabels(t *testing.T) {
	f := func(sourceLabels, extraLabels string, honorLabels bool, resultExpected string) {
		t.Helper()
		src := promutils.NewLabelsFromString(sourceLabels)
		extra := promutils.NewLabelsFromString(extraLabels)
		src := promutils.MustNewLabelsFromString(sourceLabels)
		extra := promutils.MustNewLabelsFromString(extraLabels)
		var labels promutils.Labels
		labels.Labels = appendExtraLabels(src.GetLabels(), extra.GetLabels(), 0, honorLabels)
		result := labels.String()
@@ -794,7 +794,7 @@ func timeseriesToString(ts *prompbmarshal.TimeSeries) string {
}

func mustParseRelabelConfigs(config string) *promrelabel.ParsedConfigs {
	pcs, err := promrelabel.ParseRelabelConfigsData([]byte(config), false)
	pcs, err := promrelabel.ParseRelabelConfigsData([]byte(config))
	if err != nil {
		panic(fmt.Errorf("cannot parse %q: %w", config, err))
	}
@@ -147,15 +147,16 @@ func (tsm *targetStatusMap) getScrapeWorkByTargetID(targetID string) *scrapeWork
	tsm.mu.Lock()
	defer tsm.mu.Unlock()
	for sw := range tsm.m {
		if getTargetID(sw) == targetID {
		// The target is uniquely identified by a pointer to its original labels.
		if getLabelsID(sw.Config.OriginalLabels) == targetID {
			return sw
		}
	}
	return nil
}

func getTargetID(sw *scrapeWork) string {
	return fmt.Sprintf("%016x", uintptr(unsafe.Pointer(sw)))
func getLabelsID(labels *promutils.Labels) string {
	return fmt.Sprintf("%016x", uintptr(unsafe.Pointer(labels)))
}

// StatusByGroup returns the number of targets with status==up
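getLabelsID replaces getTargetID: the UI now keys a target by the address of its OriginalLabels object rather than of the scrapeWork, which also works for dropped targets that never get a scrapeWork. The ID is cheap and stable for the lifetime of the target, but only unique within one process. A standalone sketch of the idea (illustrative types, not from the commit):

	package main

	import (
		"fmt"
		"unsafe"
	)

	type labels struct{ s string }

	// labelsID formats the pointer value of a live object; distinct live objects
	// always yield distinct IDs within a single process.
	func labelsID(p *labels) string {
		return fmt.Sprintf("%016x", uintptr(unsafe.Pointer(p)))
	}

	func main() {
		a, b := &labels{"a"}, &labels{"b"}
		fmt.Println(labelsID(a) != labelsID(b)) // true
	}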
@@ -254,36 +255,36 @@ type droppedTargets struct {

type droppedTarget struct {
	originalLabels *promutils.Labels
	relabelConfigs *promrelabel.ParsedConfigs
	deadline       uint64
}

func (dt *droppedTargets) getTargetsLabels() []*promutils.Labels {
func (dt *droppedTargets) getTargetsList() []droppedTarget {
	dt.mu.Lock()
	dtls := make([]*promutils.Labels, 0, len(dt.m))
	dts := make([]droppedTarget, 0, len(dt.m))
	for _, v := range dt.m {
		dtls = append(dtls, v.originalLabels)
		dts = append(dts, v)
	}
	dt.mu.Unlock()
	// Sort discovered targets by __address__ label, so they stay in consistent order across calls
	sort.Slice(dtls, func(i, j int) bool {
		addr1 := dtls[i].Get("__address__")
		addr2 := dtls[j].Get("__address__")
	sort.Slice(dts, func(i, j int) bool {
		addr1 := dts[i].originalLabels.Get("__address__")
		addr2 := dts[j].originalLabels.Get("__address__")
		return addr1 < addr2
	})
	return dtls
	return dts
}

func (dt *droppedTargets) Register(originalLabels *promutils.Labels) {
	// It is better to have hash collisions instead of spending additional CPU on promLabelsString() call.
func (dt *droppedTargets) Register(originalLabels *promutils.Labels, relabelConfigs *promrelabel.ParsedConfigs) {
	// It is better to have hash collisions instead of spending additional CPU on originalLabels.String() call.
	key := labelsHash(originalLabels)
	currentTime := fasttime.UnixTimestamp()
	dt.mu.Lock()
	if k, ok := dt.m[key]; ok {
		k.deadline = currentTime + 10*60
		dt.m[key] = k
	} else if len(dt.m) < *maxDroppedTargets {
	_, ok := dt.m[key]
	if ok || len(dt.m) < *maxDroppedTargets {
		dt.m[key] = droppedTarget{
			originalLabels: originalLabels,
			relabelConfigs: relabelConfigs,
			deadline:       currentTime + 10*60,
		}
	}
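Register deduplicates dropped targets by a 64-bit hash of the original labels, accepting the rare collision in exchange for skipping a full String() rendering on every service-discovery refresh. A sketch of such a hash (assumed shape only; the real labelsHash reuses digests from the xxhashPool declared below, and this in-package sketch assumes an import of github.com/cespare/xxhash/v2):

	func labelsHashSketch(labels *promutils.Labels) uint64 {
		d := xxhash.New()
		for _, label := range labels.GetLabels() {
			// Feed name/value pairs directly into the digest instead of
			// building an intermediate string representation.
			_, _ = d.WriteString(label.Name)
			_, _ = d.WriteString("=")
			_, _ = d.WriteString(label.Value)
		}
		return d.Sum64()
	}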
@@ -318,13 +319,13 @@ var xxhashPool = &sync.Pool{

// WriteDroppedTargetsJSON writes `droppedTargets` contents to w according to https://prometheus.io/docs/prometheus/latest/querying/api/#targets
func (dt *droppedTargets) WriteDroppedTargetsJSON(w io.Writer) {
	dtls := dt.getTargetsLabels()
	dts := dt.getTargetsList()
	fmt.Fprintf(w, `[`)
	for i, labels := range dtls {
	for i, dt := range dts {
		fmt.Fprintf(w, `{"discoveredLabels":`)
		writeLabelsJSON(w, labels)
		writeLabelsJSON(w, dt.originalLabels)
		fmt.Fprintf(w, `}`)
		if i+1 < len(dtls) {
		if i+1 < len(dts) {
			fmt.Fprintf(w, `,`)
		}
	}
@@ -385,12 +386,12 @@ func (tsm *targetStatusMap) getTargetsStatusByJob(filter *requestFilter) *target
		// Do not show empty jobs if target filters are set.
		emptyJobs = nil
	}
	dtls := droppedTargetsMap.getTargetsLabels()
	dts := droppedTargetsMap.getTargetsList()
	return &targetsStatusResult{
		jobTargetsStatuses:   jts,
		droppedTargetsLabels: dtls,
		emptyJobs:            emptyJobs,
		err:                  err,
		jobTargetsStatuses: jts,
		droppedTargets:     dts,
		emptyJobs:          emptyJobs,
		err:                err,
	}
}
@@ -497,16 +498,16 @@ func getRequestFilter(r *http.Request) *requestFilter {
}

type targetsStatusResult struct {
	jobTargetsStatuses   []*jobTargetsStatuses
	droppedTargetsLabels []*promutils.Labels
	emptyJobs            []string
	err                  error
	jobTargetsStatuses []*jobTargetsStatuses
	droppedTargets     []droppedTarget
	emptyJobs          []string
	err                error
}

type targetLabels struct {
	up               bool
	discoveredLabels *promutils.Labels
	labels           *promutils.Labels
	up             bool
	originalLabels *promutils.Labels
	labels         *promutils.Labels
}
type targetLabelsByJob struct {
	jobName string
@@ -515,6 +516,43 @@ type targetLabelsByJob struct {
	droppedTargets int
}

func getRelabelContextByTargetID(targetID string) (*promrelabel.ParsedConfigs, *promutils.Labels, bool) {
	var relabelConfigs *promrelabel.ParsedConfigs
	var labels *promutils.Labels
	found := false

	// Search for relabel context in tsmGlobal (aka active targets)
	tsmGlobal.mu.Lock()
	for sw := range tsmGlobal.m {
		// The target is uniquely identified by a pointer to its original labels.
		if getLabelsID(sw.Config.OriginalLabels) == targetID {
			relabelConfigs = sw.Config.RelabelConfigs
			labels = sw.Config.OriginalLabels
			found = true
			break
		}
	}
	tsmGlobal.mu.Unlock()

	if found {
		return relabelConfigs, labels, true
	}

	// Search for relabel context in droppedTargetsMap (aka deleted targets)
	droppedTargetsMap.mu.Lock()
	for _, dt := range droppedTargetsMap.m {
		if getLabelsID(dt.originalLabels) == targetID {
			relabelConfigs = dt.relabelConfigs
			labels = dt.originalLabels
			found = true
			break
		}
	}
	droppedTargetsMap.mu.Unlock()

	return relabelConfigs, labels, found
}

func (tsr *targetsStatusResult) getTargetLabelsByJob() []*targetLabelsByJob {
	byJob := make(map[string]*targetLabelsByJob)
	for _, jts := range tsr.jobTargetsStatuses {
@@ -529,14 +567,14 @@ func (tsr *targetsStatusResult) getTargetLabelsByJob() []*targetLabelsByJob {
		}
		m.activeTargets++
		m.targets = append(m.targets, targetLabels{
			up:               ts.up,
			discoveredLabels: ts.sw.Config.OriginalLabels,
			labels:           ts.sw.Config.Labels,
			up:             ts.up,
			originalLabels: ts.sw.Config.OriginalLabels,
			labels:         ts.sw.Config.Labels,
		})
	}
}
for _, labels := range tsr.droppedTargetsLabels {
	jobName := labels.Get("job")
for _, dt := range tsr.droppedTargets {
	jobName := dt.originalLabels.Get("job")
	m := byJob[jobName]
	if m == nil {
		m = &targetLabelsByJob{
@@ -546,7 +584,7 @@ func (tsr *targetsStatusResult) getTargetLabelsByJob() []*targetLabelsByJob {
	}
	m.droppedTargets++
	m.targets = append(m.targets, targetLabels{
		discoveredLabels: labels,
		originalLabels: dt.originalLabels,
	})
}
a := make([]*targetLabelsByJob, 0, len(byJob))
@@ -103,7 +103,7 @@
{% func navbar() %}
<div class="navbar navbar-dark bg-dark box-shadow">
    <div class="d-flex justify-content-between">
        <a href="#" class="navbar-brand d-flex align-items-center ms-3" title="The High Performance Open Source Time Series Database & Monitoring Solution ">
        <a href="/" class="navbar-brand d-flex align-items-center ms-3" title="The High Performance Open Source Time Series Database & Monitoring Solution ">
            <svg xmlns="http://www.w3.org/2000/svg" id="VM_logo" viewBox="0 0 464.61 533.89" width="20" height="20" class="me-1"><defs><style>.cls-1{fill:#fff;}</style></defs><path class="cls-1" d="M459.86,467.77c9,7.67,24.12,13.49,39.3,13.69v0h1.68v0c15.18-.2,30.31-6,39.3-13.69,47.43-40.45,184.65-166.24,184.65-166.24,36.84-34.27-65.64-68.28-223.95-68.47h-1.68c-158.31.19-260.79,34.2-224,68.47C275.21,301.53,412.43,427.32,459.86,467.77Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,535.88c-9,7.67-24.12,13.5-39.3,13.7h-1.6c-15.18-.2-30.31-6-39.3-13.7-32.81-28-148.56-132.93-192.16-172.7v60.74c0,6.67,2.55,15.52,7.09,19.68,29.64,27.18,143.94,131.8,185.07,166.88,9,7.67,24.12,13.49,39.3,13.69v0h1.6v0c15.18-.2,30.31-6,39.3-13.69,41.13-35.08,155.43-139.7,185.07-166.88,4.54-4.16,7.09-13,7.09-19.68V363.18C688.66,403,572.91,507.9,540.1,535.88Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,678.64c-9,7.67-24.12,13.49-39.3,13.69v0h-1.6v0c-15.18-.2-30.31-6-39.3-13.69-32.81-28-148.56-132.94-192.16-172.7v60.73c0,6.67,2.55,15.53,7.09,19.69,29.64,27.17,143.94,131.8,185.07,166.87,9,7.67,24.12,13.5,39.3,13.7h1.6c15.18-.2,30.31-6,39.3-13.7,41.13-35.07,155.43-139.7,185.07-166.87,4.54-4.16,7.09-13,7.09-19.69V505.94C688.66,545.7,572.91,650.66,540.1,678.64Z" transform="translate(-267.7 -233.05)"/></svg>
            <strong>VictoriaMetrics</strong>
        </a>
@@ -225,6 +225,7 @@
<th scope="col">Endpoint</th>
<th scope="col">State</th>
<th scope="col" title="target labels">Labels</th>
<th scope="col" title="debug relabeling">Debug relabeling</th>
<th scope="col" title="total scrapes">Scrapes</th>
<th scope="col" title="total scrape errors">Errors</th>
<th scope="col" title="the time of the last scrape">Last Scrape</th>
@ -237,7 +238,8 @@
{% for _, ts := range jts.targetsStatus %}
{% code
endpoint := ts.sw.Config.ScrapeURL
targetID := getTargetID(ts.sw)
// The target is uniquely identified by a pointer to its original labels.
targetID := getLabelsID(ts.sw.Config.OriginalLabels)
lastScrapeDuration := ts.getDurationFromLastScrape()
%}
<tr {% if !ts.up %}{%space%}class="alert alert-danger" role="alert" {% endif %}>

@ -263,6 +265,9 @@
{%= formatLabels(ts.sw.Config.OriginalLabels) %}
</div>
</td>
<td>
<a href="target-relabel-debug?id={%s targetID %}" target="_blank">debug</a>
</td>
<td>{%d ts.scrapesTotal %}</td>
<td>{%d ts.scrapesFailed %}</td>
<td>

@ -304,8 +309,9 @@
<thead>
<tr>
<th scope="col" style="width: 5%">Status</th>
<th scope="col" style="width: 65%">Discovered Labels</th>
<th scope="col" style="width: 60%">Discovered Labels</th>
<th scope="col" style="width: 30%">Target Labels</th>
<th scope="col" stile="width: 5%">Debug relabeling</a>
</tr>
</thead>
<tbody>

@ -330,11 +336,15 @@
{% endif %}
</td>
<td class="labels">
{%= formatLabels(t.discoveredLabels) %}
{%= formatLabels(t.originalLabels) %}
</td>
<td class="labels">
{%= formatLabels(t.labels) %}
</td>
<td>
{% code targetID := getLabelsID(t.originalLabels) %}
<a href="target-relabel-debug?id={%s targetID %}" target="_blank">debug</a>
</td>
</tr>
{% endfor %}
</tbody>

@ -355,7 +355,7 @@ func commonHeader() string {
//line lib/promscrape/targetstatus.qtpl:103
func streamnavbar(qw422016 *qt422016.Writer) {
//line lib/promscrape/targetstatus.qtpl:103
qw422016.N().S(`<div class="navbar navbar-dark bg-dark box-shadow"><div class="d-flex justify-content-between"><a href="#" class="navbar-brand d-flex align-items-center ms-3" title="The High Performance Open Source Time Series Database & Monitoring Solution "><svg xmlns="http://www.w3.org/2000/svg" id="VM_logo" viewBox="0 0 464.61 533.89" width="20" height="20" class="me-1"><defs><style>.cls-1{fill:#fff;}</style></defs><path class="cls-1" d="M459.86,467.77c9,7.67,24.12,13.49,39.3,13.69v0h1.68v0c15.18-.2,30.31-6,39.3-13.69,47.43-40.45,184.65-166.24,184.65-166.24,36.84-34.27-65.64-68.28-223.95-68.47h-1.68c-158.31.19-260.79,34.2-224,68.47C275.21,301.53,412.43,427.32,459.86,467.77Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,535.88c-9,7.67-24.12,13.5-39.3,13.7h-1.6c-15.18-.2-30.31-6-39.3-13.7-32.81-28-148.56-132.93-192.16-172.7v60.74c0,6.67,2.55,15.52,7.09,19.68,29.64,27.18,143.94,131.8,185.07,166.88,9,7.67,24.12,13.49,39.3,13.69v0h1.6v0c15.18-.2,30.31-6,39.3-13.69,41.13-35.08,155.43-139.7,185.07-166.88,4.54-4.16,7.09-13,7.09-19.68V363.18C688.66,403,572.91,507.9,540.1,535.88Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,678.64c-9,7.67-24.12,13.49-39.3,13.69v0h-1.6v0c-15.18-.2-30.31-6-39.3-13.69-32.81-28-148.56-132.94-192.16-172.7v60.73c0,6.67,2.55,15.53,7.09,19.69,29.64,27.17,143.94,131.8,185.07,166.87,9,7.67,24.12,13.5,39.3,13.7h1.6c15.18-.2,30.31-6,39.3-13.7,41.13-35.07,155.43-139.7,185.07-166.87,4.54-4.16,7.09-13,7.09-19.69V505.94C688.66,545.7,572.91,650.66,540.1,678.64Z" transform="translate(-267.7 -233.05)"/></svg><strong>VictoriaMetrics</strong></a></div></div>`)
qw422016.N().S(`<div class="navbar navbar-dark bg-dark box-shadow"><div class="d-flex justify-content-between"><a href="/" class="navbar-brand d-flex align-items-center ms-3" title="The High Performance Open Source Time Series Database & Monitoring Solution "><svg xmlns="http://www.w3.org/2000/svg" id="VM_logo" viewBox="0 0 464.61 533.89" width="20" height="20" class="me-1"><defs><style>.cls-1{fill:#fff;}</style></defs><path class="cls-1" d="M459.86,467.77c9,7.67,24.12,13.49,39.3,13.69v0h1.68v0c15.18-.2,30.31-6,39.3-13.69,47.43-40.45,184.65-166.24,184.65-166.24,36.84-34.27-65.64-68.28-223.95-68.47h-1.68c-158.31.19-260.79,34.2-224,68.47C275.21,301.53,412.43,427.32,459.86,467.77Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,535.88c-9,7.67-24.12,13.5-39.3,13.7h-1.6c-15.18-.2-30.31-6-39.3-13.7-32.81-28-148.56-132.93-192.16-172.7v60.74c0,6.67,2.55,15.52,7.09,19.68,29.64,27.18,143.94,131.8,185.07,166.88,9,7.67,24.12,13.49,39.3,13.69v0h1.6v0c15.18-.2,30.31-6,39.3-13.69,41.13-35.08,155.43-139.7,185.07-166.88,4.54-4.16,7.09-13,7.09-19.68V363.18C688.66,403,572.91,507.9,540.1,535.88Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,678.64c-9,7.67-24.12,13.49-39.3,13.69v0h-1.6v0c-15.18-.2-30.31-6-39.3-13.69-32.81-28-148.56-132.94-192.16-172.7v60.73c0,6.67,2.55,15.53,7.09,19.69,29.64,27.17,143.94,131.8,185.07,166.87,9,7.67,24.12,13.5,39.3,13.7h1.6c15.18-.2,30.31-6,39.3-13.7,41.13-35.07,155.43-139.7,185.07-166.87,4.54-4.16,7.09-13,7.09-19.69V505.94C688.66,545.7,572.91,650.66,540.1,678.64Z" transform="translate(-267.7 -233.05)"/></svg><strong>VictoriaMetrics</strong></a></div></div>`)
//line lib/promscrape/targetstatus.qtpl:112
}

@ -633,336 +633,350 @@ func streamscrapeJobTargets(qw422016 *qt422016.Writer, num int, jts *jobTargetsS
//line lib/promscrape/targetstatus.qtpl:221
qw422016.N().D(num)
//line lib/promscrape/targetstatus.qtpl:221
qw422016.N().S(`" class="scrape-job table-responsive"><table class="table table-striped table-hover table-bordered table-sm"><thead><tr><th scope="col">Endpoint</th><th scope="col">State</th><th scope="col" title="target labels">Labels</th><th scope="col" title="total scrapes">Scrapes</th><th scope="col" title="total scrape errors">Errors</th><th scope="col" title="the time of the last scrape">Last Scrape</th><th scope="col" title="the duration of the last scrape">Duration</th><th scope="col" title="the number of metrics scraped during the last scrape">Samples</th><th scope="col" title="error from the last scrape (if any)">Last error</th></tr></thead><tbody>`)
//line lib/promscrape/targetstatus.qtpl:237
qw422016.N().S(`" class="scrape-job table-responsive"><table class="table table-striped table-hover table-bordered table-sm"><thead><tr><th scope="col">Endpoint</th><th scope="col">State</th><th scope="col" title="target labels">Labels</th><th scope="col" title="debug relabeling">Debug relabeling</th><th scope="col" title="total scrapes">Scrapes</th><th scope="col" title="total scrape errors">Errors</th><th scope="col" title="the time of the last scrape">Last Scrape</th><th scope="col" title="the duration of the last scrape">Duration</th><th scope="col" title="the number of metrics scraped during the last scrape">Samples</th><th scope="col" title="error from the last scrape (if any)">Last error</th></tr></thead><tbody>`)
//line lib/promscrape/targetstatus.qtpl:238
for _, ts := range jts.targetsStatus {
//line lib/promscrape/targetstatus.qtpl:239
//line lib/promscrape/targetstatus.qtpl:240
endpoint := ts.sw.Config.ScrapeURL
targetID := getTargetID(ts.sw)
// The target is uniquely identified by a pointer to its original labels.
targetID := getLabelsID(ts.sw.Config.OriginalLabels)
lastScrapeDuration := ts.getDurationFromLastScrape()

//line lib/promscrape/targetstatus.qtpl:242
//line lib/promscrape/targetstatus.qtpl:244
qw422016.N().S(`<tr`)
//line lib/promscrape/targetstatus.qtpl:243
//line lib/promscrape/targetstatus.qtpl:245
if !ts.up {
//line lib/promscrape/targetstatus.qtpl:243
//line lib/promscrape/targetstatus.qtpl:245
qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:243
//line lib/promscrape/targetstatus.qtpl:245
qw422016.N().S(`class="alert alert-danger" role="alert"`)
//line lib/promscrape/targetstatus.qtpl:243
//line lib/promscrape/targetstatus.qtpl:245
}
//line lib/promscrape/targetstatus.qtpl:243
//line lib/promscrape/targetstatus.qtpl:245
qw422016.N().S(`><td class="endpoint"><a href="`)
//line lib/promscrape/targetstatus.qtpl:245
//line lib/promscrape/targetstatus.qtpl:247
qw422016.E().S(endpoint)
//line lib/promscrape/targetstatus.qtpl:245
//line lib/promscrape/targetstatus.qtpl:247
qw422016.N().S(`" target="_blank">`)
//line lib/promscrape/targetstatus.qtpl:245
//line lib/promscrape/targetstatus.qtpl:247
qw422016.E().S(endpoint)
//line lib/promscrape/targetstatus.qtpl:245
//line lib/promscrape/targetstatus.qtpl:247
qw422016.N().S(`</a> (<a href="target_response?id=`)
//line lib/promscrape/targetstatus.qtpl:246
//line lib/promscrape/targetstatus.qtpl:248
qw422016.E().S(targetID)
//line lib/promscrape/targetstatus.qtpl:246
//line lib/promscrape/targetstatus.qtpl:248
qw422016.N().S(`" target="_blank"title="click to fetch target response on behalf of the scraper">response</a>)</td><td>`)
//line lib/promscrape/targetstatus.qtpl:251
//line lib/promscrape/targetstatus.qtpl:253
if ts.up {
//line lib/promscrape/targetstatus.qtpl:251
//line lib/promscrape/targetstatus.qtpl:253
qw422016.N().S(`<span class="badge bg-success">UP</span>`)
//line lib/promscrape/targetstatus.qtpl:253
//line lib/promscrape/targetstatus.qtpl:255
} else {
//line lib/promscrape/targetstatus.qtpl:253
//line lib/promscrape/targetstatus.qtpl:255
qw422016.N().S(`<span class="badge bg-danger">DOWN</span>`)
//line lib/promscrape/targetstatus.qtpl:255
//line lib/promscrape/targetstatus.qtpl:257
}
//line lib/promscrape/targetstatus.qtpl:255
//line lib/promscrape/targetstatus.qtpl:257
qw422016.N().S(`</td><td class="labels"><div title="click to show original labels"onclick="document.getElementById('original-labels-`)
//line lib/promscrape/targetstatus.qtpl:259
//line lib/promscrape/targetstatus.qtpl:261
qw422016.E().S(targetID)
//line lib/promscrape/targetstatus.qtpl:259
//line lib/promscrape/targetstatus.qtpl:261
qw422016.N().S(`').style.display='block'">`)
//line lib/promscrape/targetstatus.qtpl:260
//line lib/promscrape/targetstatus.qtpl:262
streamformatLabels(qw422016, ts.sw.Config.Labels)
//line lib/promscrape/targetstatus.qtpl:260
//line lib/promscrape/targetstatus.qtpl:262
qw422016.N().S(`</div><div style="display:none" id="original-labels-`)
//line lib/promscrape/targetstatus.qtpl:262
//line lib/promscrape/targetstatus.qtpl:264
qw422016.E().S(targetID)
//line lib/promscrape/targetstatus.qtpl:262
//line lib/promscrape/targetstatus.qtpl:264
qw422016.N().S(`">`)
//line lib/promscrape/targetstatus.qtpl:263
//line lib/promscrape/targetstatus.qtpl:265
streamformatLabels(qw422016, ts.sw.Config.OriginalLabels)
//line lib/promscrape/targetstatus.qtpl:263
qw422016.N().S(`</div></td><td>`)
//line lib/promscrape/targetstatus.qtpl:266
qw422016.N().D(ts.scrapesTotal)
//line lib/promscrape/targetstatus.qtpl:266
qw422016.N().S(`</td><td>`)
//line lib/promscrape/targetstatus.qtpl:267
qw422016.N().D(ts.scrapesFailed)
//line lib/promscrape/targetstatus.qtpl:267
qw422016.N().S(`</td><td>`)
//line lib/promscrape/targetstatus.qtpl:265
qw422016.N().S(`</div></td><td><a href="target-relabel-debug?id=`)
//line lib/promscrape/targetstatus.qtpl:269
if lastScrapeDuration < 365*24*time.Hour {
//line lib/promscrape/targetstatus.qtpl:270
qw422016.N().D(int(lastScrapeDuration.Milliseconds()))
//line lib/promscrape/targetstatus.qtpl:270
qw422016.N().S(`ms ago`)
qw422016.E().S(targetID)
//line lib/promscrape/targetstatus.qtpl:269
qw422016.N().S(`" target="_blank">debug</a></td><td>`)
//line lib/promscrape/targetstatus.qtpl:271
} else {
qw422016.N().D(ts.scrapesTotal)
//line lib/promscrape/targetstatus.qtpl:271
qw422016.N().S(`none`)
//line lib/promscrape/targetstatus.qtpl:273
}
//line lib/promscrape/targetstatus.qtpl:273
qw422016.N().S(`<td>`)
//line lib/promscrape/targetstatus.qtpl:274
qw422016.N().D(int(ts.scrapeDuration))
//line lib/promscrape/targetstatus.qtpl:274
qw422016.N().S(`ms</td><td>`)
//line lib/promscrape/targetstatus.qtpl:275
qw422016.N().D(ts.samplesScraped)
//line lib/promscrape/targetstatus.qtpl:275
qw422016.N().S(`</td><td>`)
//line lib/promscrape/targetstatus.qtpl:272
qw422016.N().D(ts.scrapesFailed)
//line lib/promscrape/targetstatus.qtpl:272
qw422016.N().S(`</td><td>`)
//line lib/promscrape/targetstatus.qtpl:274
if lastScrapeDuration < 365*24*time.Hour {
//line lib/promscrape/targetstatus.qtpl:275
qw422016.N().D(int(lastScrapeDuration.Milliseconds()))
//line lib/promscrape/targetstatus.qtpl:275
qw422016.N().S(`ms ago`)
//line lib/promscrape/targetstatus.qtpl:276
if ts.err != nil {
//line lib/promscrape/targetstatus.qtpl:276
qw422016.E().S(ts.err.Error())
} else {
//line lib/promscrape/targetstatus.qtpl:276
qw422016.N().S(`none`)
//line lib/promscrape/targetstatus.qtpl:278
}
//line lib/promscrape/targetstatus.qtpl:276
//line lib/promscrape/targetstatus.qtpl:278
qw422016.N().S(`<td>`)
//line lib/promscrape/targetstatus.qtpl:279
qw422016.N().D(int(ts.scrapeDuration))
//line lib/promscrape/targetstatus.qtpl:279
qw422016.N().S(`ms</td><td>`)
//line lib/promscrape/targetstatus.qtpl:280
qw422016.N().D(ts.samplesScraped)
//line lib/promscrape/targetstatus.qtpl:280
qw422016.N().S(`</td><td>`)
//line lib/promscrape/targetstatus.qtpl:281
if ts.err != nil {
//line lib/promscrape/targetstatus.qtpl:281
qw422016.E().S(ts.err.Error())
//line lib/promscrape/targetstatus.qtpl:281
}
//line lib/promscrape/targetstatus.qtpl:281
qw422016.N().S(`</td></tr>`)
//line lib/promscrape/targetstatus.qtpl:278
//line lib/promscrape/targetstatus.qtpl:283
}
//line lib/promscrape/targetstatus.qtpl:278
//line lib/promscrape/targetstatus.qtpl:283
qw422016.N().S(`</tbody></table></div></div></div>`)
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
}

//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
func writescrapeJobTargets(qq422016 qtio422016.Writer, num int, jts *jobTargetsStatuses) {
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
streamscrapeJobTargets(qw422016, num, jts)
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
}

//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
func scrapeJobTargets(num int, jts *jobTargetsStatuses) string {
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
writescrapeJobTargets(qb422016, num, jts)
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
qs422016 := string(qb422016.B)
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
return qs422016
//line lib/promscrape/targetstatus.qtpl:284
//line lib/promscrape/targetstatus.qtpl:289
}

//line lib/promscrape/targetstatus.qtpl:286
//line lib/promscrape/targetstatus.qtpl:291
func streamdiscoveredTargets(qw422016 *qt422016.Writer, tsr *targetsStatusResult) {
//line lib/promscrape/targetstatus.qtpl:287
//line lib/promscrape/targetstatus.qtpl:292
tljs := tsr.getTargetLabelsByJob()

//line lib/promscrape/targetstatus.qtpl:287
//line lib/promscrape/targetstatus.qtpl:292
qw422016.N().S(`<div class="row mt-4"><div class="col-12">`)
//line lib/promscrape/targetstatus.qtpl:290
//line lib/promscrape/targetstatus.qtpl:295
for i, tlj := range tljs {
//line lib/promscrape/targetstatus.qtpl:291
//line lib/promscrape/targetstatus.qtpl:296
streamdiscoveredJobTargets(qw422016, i, tlj)
//line lib/promscrape/targetstatus.qtpl:292
//line lib/promscrape/targetstatus.qtpl:297
}
//line lib/promscrape/targetstatus.qtpl:292
//line lib/promscrape/targetstatus.qtpl:297
qw422016.N().S(`</div></div>`)
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
}

//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
func writediscoveredTargets(qq422016 qtio422016.Writer, tsr *targetsStatusResult) {
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
streamdiscoveredTargets(qw422016, tsr)
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
}

//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
func discoveredTargets(tsr *targetsStatusResult) string {
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
writediscoveredTargets(qb422016, tsr)
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
qs422016 := string(qb422016.B)
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
return qs422016
//line lib/promscrape/targetstatus.qtpl:295
//line lib/promscrape/targetstatus.qtpl:300
}

//line lib/promscrape/targetstatus.qtpl:297
//line lib/promscrape/targetstatus.qtpl:302
func streamdiscoveredJobTargets(qw422016 *qt422016.Writer, num int, tlj *targetLabelsByJob) {
//line lib/promscrape/targetstatus.qtpl:297
//line lib/promscrape/targetstatus.qtpl:302
qw422016.N().S(`<h4><span class="me-2">`)
//line lib/promscrape/targetstatus.qtpl:299
//line lib/promscrape/targetstatus.qtpl:304
qw422016.E().S(tlj.jobName)
//line lib/promscrape/targetstatus.qtpl:299
//line lib/promscrape/targetstatus.qtpl:304
qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:299
//line lib/promscrape/targetstatus.qtpl:304
qw422016.N().S(`(`)
//line lib/promscrape/targetstatus.qtpl:299
//line lib/promscrape/targetstatus.qtpl:304
qw422016.N().D(tlj.activeTargets)
//line lib/promscrape/targetstatus.qtpl:299
//line lib/promscrape/targetstatus.qtpl:304
qw422016.N().S(`/`)
//line lib/promscrape/targetstatus.qtpl:299
//line lib/promscrape/targetstatus.qtpl:304
qw422016.N().D(tlj.activeTargets + tlj.droppedTargets)
//line lib/promscrape/targetstatus.qtpl:299
//line lib/promscrape/targetstatus.qtpl:304
qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:299
//line lib/promscrape/targetstatus.qtpl:304
qw422016.N().S(`active)</span>`)
//line lib/promscrape/targetstatus.qtpl:300
//line lib/promscrape/targetstatus.qtpl:305
streamshowHideScrapeJobButtons(qw422016, num)
//line lib/promscrape/targetstatus.qtpl:300
//line lib/promscrape/targetstatus.qtpl:305
qw422016.N().S(`</h4><div id="scrape-job-`)
//line lib/promscrape/targetstatus.qtpl:302
//line lib/promscrape/targetstatus.qtpl:307
qw422016.N().D(num)
//line lib/promscrape/targetstatus.qtpl:302
qw422016.N().S(`" class="scrape-job table-responsive"><table class="table table-striped table-hover table-bordered table-sm"><thead><tr><th scope="col" style="width: 5%">Status</th><th scope="col" style="width: 65%">Discovered Labels</th><th scope="col" style="width: 30%">Target Labels</th></tr></thead><tbody>`)
//line lib/promscrape/targetstatus.qtpl:312
//line lib/promscrape/targetstatus.qtpl:307
qw422016.N().S(`" class="scrape-job table-responsive"><table class="table table-striped table-hover table-bordered table-sm"><thead><tr><th scope="col" style="width: 5%">Status</th><th scope="col" style="width: 60%">Discovered Labels</th><th scope="col" style="width: 30%">Target Labels</th><th scope="col" stile="width: 5%">Debug relabeling</a></tr></thead><tbody>`)
//line lib/promscrape/targetstatus.qtpl:318
for _, t := range tlj.targets {
//line lib/promscrape/targetstatus.qtpl:312
//line lib/promscrape/targetstatus.qtpl:318
qw422016.N().S(`<tr`)
//line lib/promscrape/targetstatus.qtpl:314
if !t.up {
//line lib/promscrape/targetstatus.qtpl:315
qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:315
qw422016.N().S(`role="alert"`)
//line lib/promscrape/targetstatus.qtpl:315
qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:316
if t.labels.Len() > 0 {
//line lib/promscrape/targetstatus.qtpl:316
qw422016.N().S(`class="alert alert-danger"`)
//line lib/promscrape/targetstatus.qtpl:318
} else {
//line lib/promscrape/targetstatus.qtpl:318
qw422016.N().S(`class="alert alert-warning"`)
//line lib/promscrape/targetstatus.qtpl:320
if !t.up {
//line lib/promscrape/targetstatus.qtpl:321
qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:321
qw422016.N().S(`role="alert"`)
//line lib/promscrape/targetstatus.qtpl:321
qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:322
if t.labels.Len() > 0 {
//line lib/promscrape/targetstatus.qtpl:322
qw422016.N().S(`class="alert alert-danger"`)
//line lib/promscrape/targetstatus.qtpl:324
} else {
//line lib/promscrape/targetstatus.qtpl:324
qw422016.N().S(`class="alert alert-warning"`)
//line lib/promscrape/targetstatus.qtpl:326
}
//line lib/promscrape/targetstatus.qtpl:321
//line lib/promscrape/targetstatus.qtpl:327
}
//line lib/promscrape/targetstatus.qtpl:321
//line lib/promscrape/targetstatus.qtpl:327
qw422016.N().S(`><td>`)
//line lib/promscrape/targetstatus.qtpl:324
//line lib/promscrape/targetstatus.qtpl:330
if t.up {
//line lib/promscrape/targetstatus.qtpl:324
//line lib/promscrape/targetstatus.qtpl:330
qw422016.N().S(`<span class="badge bg-success">UP</span>`)
//line lib/promscrape/targetstatus.qtpl:326
//line lib/promscrape/targetstatus.qtpl:332
} else if t.labels.Len() > 0 {
//line lib/promscrape/targetstatus.qtpl:326
//line lib/promscrape/targetstatus.qtpl:332
qw422016.N().S(`<span class="badge bg-danger">DOWN</span>`)
//line lib/promscrape/targetstatus.qtpl:328
//line lib/promscrape/targetstatus.qtpl:334
} else {
//line lib/promscrape/targetstatus.qtpl:328
//line lib/promscrape/targetstatus.qtpl:334
qw422016.N().S(`<span class="badge bg-warning">DROPPED</span>`)
//line lib/promscrape/targetstatus.qtpl:330
//line lib/promscrape/targetstatus.qtpl:336
}
//line lib/promscrape/targetstatus.qtpl:330
qw422016.N().S(`</td><td class="labels">`)
//line lib/promscrape/targetstatus.qtpl:333
streamformatLabels(qw422016, t.discoveredLabels)
//line lib/promscrape/targetstatus.qtpl:333
qw422016.N().S(`</td><td class="labels">`)
//line lib/promscrape/targetstatus.qtpl:336
qw422016.N().S(`</td><td class="labels">`)
//line lib/promscrape/targetstatus.qtpl:339
streamformatLabels(qw422016, t.originalLabels)
//line lib/promscrape/targetstatus.qtpl:339
qw422016.N().S(`</td><td class="labels">`)
//line lib/promscrape/targetstatus.qtpl:342
streamformatLabels(qw422016, t.labels)
//line lib/promscrape/targetstatus.qtpl:336
qw422016.N().S(`</td></tr>`)
//line lib/promscrape/targetstatus.qtpl:339
//line lib/promscrape/targetstatus.qtpl:342
qw422016.N().S(`</td><td>`)
//line lib/promscrape/targetstatus.qtpl:345
targetID := getLabelsID(t.originalLabels)

//line lib/promscrape/targetstatus.qtpl:345
qw422016.N().S(`<a href="target-relabel-debug?id=`)
//line lib/promscrape/targetstatus.qtpl:346
qw422016.E().S(targetID)
//line lib/promscrape/targetstatus.qtpl:346
qw422016.N().S(`" target="_blank">debug</a></td></tr>`)
//line lib/promscrape/targetstatus.qtpl:349
}
//line lib/promscrape/targetstatus.qtpl:339
//line lib/promscrape/targetstatus.qtpl:349
qw422016.N().S(`</tbody></table></div>`)
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
}

//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
func writediscoveredJobTargets(qq422016 qtio422016.Writer, num int, tlj *targetLabelsByJob) {
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
streamdiscoveredJobTargets(qw422016, num, tlj)
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
}

//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
func discoveredJobTargets(num int, tlj *targetLabelsByJob) string {
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
writediscoveredJobTargets(qb422016, num, tlj)
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
qs422016 := string(qb422016.B)
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
return qs422016
//line lib/promscrape/targetstatus.qtpl:343
//line lib/promscrape/targetstatus.qtpl:353
}

//line lib/promscrape/targetstatus.qtpl:345
//line lib/promscrape/targetstatus.qtpl:355
func streamshowHideScrapeJobButtons(qw422016 *qt422016.Writer, num int) {
//line lib/promscrape/targetstatus.qtpl:345
//line lib/promscrape/targetstatus.qtpl:355
qw422016.N().S(`<button type="button" class="btn btn-primary btn-sm me-1"onclick="document.getElementById('scrape-job-`)
//line lib/promscrape/targetstatus.qtpl:347
//line lib/promscrape/targetstatus.qtpl:357
qw422016.N().D(num)
//line lib/promscrape/targetstatus.qtpl:347
//line lib/promscrape/targetstatus.qtpl:357
qw422016.N().S(`').style.display='none'">collapse</button><button type="button" class="btn btn-secondary btn-sm me-1"onclick="document.getElementById('scrape-job-`)
//line lib/promscrape/targetstatus.qtpl:351
//line lib/promscrape/targetstatus.qtpl:361
qw422016.N().D(num)
//line lib/promscrape/targetstatus.qtpl:351
//line lib/promscrape/targetstatus.qtpl:361
qw422016.N().S(`').style.display='block'">expand</button>`)
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
}

//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
func writeshowHideScrapeJobButtons(qq422016 qtio422016.Writer, num int) {
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
streamshowHideScrapeJobButtons(qw422016, num)
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
}

//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
func showHideScrapeJobButtons(num int) string {
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
writeshowHideScrapeJobButtons(qb422016, num)
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
qs422016 := string(qb422016.B)
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
return qs422016
//line lib/promscrape/targetstatus.qtpl:354
//line lib/promscrape/targetstatus.qtpl:364
}

//line lib/promscrape/targetstatus.qtpl:356
//line lib/promscrape/targetstatus.qtpl:366
func streamqueryArgs(qw422016 *qt422016.Writer, filter *requestFilter, override map[string]string) {
//line lib/promscrape/targetstatus.qtpl:358
//line lib/promscrape/targetstatus.qtpl:368
showOnlyUnhealthy := "false"
if filter.showOnlyUnhealthy {
showOnlyUnhealthy = "true"
@ -980,126 +994,126 @@ func streamqueryArgs(qw422016 *qt422016.Writer, filter *requestFilter, override
qa[k] = []string{v}
}

//line lib/promscrape/targetstatus.qtpl:375
//line lib/promscrape/targetstatus.qtpl:385
qw422016.E().S(qa.Encode())
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
}

//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
func writequeryArgs(qq422016 qtio422016.Writer, filter *requestFilter, override map[string]string) {
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
streamqueryArgs(qw422016, filter, override)
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
}

//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
func queryArgs(filter *requestFilter, override map[string]string) string {
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
writequeryArgs(qb422016, filter, override)
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
qs422016 := string(qb422016.B)
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
return qs422016
//line lib/promscrape/targetstatus.qtpl:376
//line lib/promscrape/targetstatus.qtpl:386
}

//line lib/promscrape/targetstatus.qtpl:378
//line lib/promscrape/targetstatus.qtpl:388
func streamformatLabels(qw422016 *qt422016.Writer, labels *promutils.Labels) {
//line lib/promscrape/targetstatus.qtpl:379
//line lib/promscrape/targetstatus.qtpl:389
labelsList := labels.GetLabels()

//line lib/promscrape/targetstatus.qtpl:379
//line lib/promscrape/targetstatus.qtpl:389
qw422016.N().S(`{`)
//line lib/promscrape/targetstatus.qtpl:381
//line lib/promscrape/targetstatus.qtpl:391
for i, label := range labelsList {
//line lib/promscrape/targetstatus.qtpl:382
//line lib/promscrape/targetstatus.qtpl:392
qw422016.E().S(label.Name)
//line lib/promscrape/targetstatus.qtpl:382
//line lib/promscrape/targetstatus.qtpl:392
qw422016.N().S(`=`)
//line lib/promscrape/targetstatus.qtpl:382
//line lib/promscrape/targetstatus.qtpl:392
qw422016.E().Q(label.Value)
//line lib/promscrape/targetstatus.qtpl:383
//line lib/promscrape/targetstatus.qtpl:393
if i+1 < len(labelsList) {
//line lib/promscrape/targetstatus.qtpl:383
//line lib/promscrape/targetstatus.qtpl:393
qw422016.N().S(`,`)
//line lib/promscrape/targetstatus.qtpl:383
//line lib/promscrape/targetstatus.qtpl:393
qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:383
//line lib/promscrape/targetstatus.qtpl:393
}
//line lib/promscrape/targetstatus.qtpl:384
//line lib/promscrape/targetstatus.qtpl:394
}
//line lib/promscrape/targetstatus.qtpl:384
//line lib/promscrape/targetstatus.qtpl:394
qw422016.N().S(`}`)
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
}

//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
func writeformatLabels(qq422016 qtio422016.Writer, labels *promutils.Labels) {
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
streamformatLabels(qw422016, labels)
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
}

//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
func formatLabels(labels *promutils.Labels) string {
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
writeformatLabels(qb422016, labels)
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
qs422016 := string(qb422016.B)
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
return qs422016
//line lib/promscrape/targetstatus.qtpl:386
//line lib/promscrape/targetstatus.qtpl:396
}

//line lib/promscrape/targetstatus.qtpl:388
//line lib/promscrape/targetstatus.qtpl:398
func streamerrorNotification(qw422016 *qt422016.Writer, err error) {
//line lib/promscrape/targetstatus.qtpl:388
//line lib/promscrape/targetstatus.qtpl:398
qw422016.N().S(`<div class="alert alert-danger d-flex align-items-center" role="alert"><svg class="bi flex-shrink-0 me-2" width="24" height="24" role="img" aria-label="Danger:"><use xlink:href="#exclamation-triangle-fill"/></svg><div>`)
//line lib/promscrape/targetstatus.qtpl:393
//line lib/promscrape/targetstatus.qtpl:403
qw422016.E().S(err.Error())
//line lib/promscrape/targetstatus.qtpl:393
//line lib/promscrape/targetstatus.qtpl:403
qw422016.N().S(`</div></div>`)
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
}

//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
func writeerrorNotification(qq422016 qtio422016.Writer, err error) {
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
streamerrorNotification(qw422016, err)
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
}

//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
func errorNotification(err error) string {
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
writeerrorNotification(qb422016, err)
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
qs422016 := string(qb422016.B)
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
return qs422016
//line lib/promscrape/targetstatus.qtpl:396
//line lib/promscrape/targetstatus.qtpl:406
}

@ -35,7 +35,7 @@ func NewLabelsFromMap(m map[string]string) *Labels {

// MarshalYAML implements yaml.Marshaler interface.
func (x *Labels) MarshalYAML() (interface{}, error) {
m := x.toMap()
m := x.ToMap()
return m, nil
}

@ -51,7 +51,7 @@ func (x *Labels) UnmarshalYAML(unmarshal func(interface{}) error) error {

// MarshalJSON returns JSON representation for x.
func (x *Labels) MarshalJSON() ([]byte, error) {
m := x.toMap()
m := x.ToMap()
return json.Marshal(m)
}

@ -74,7 +74,8 @@ func (x *Labels) InitFromMap(m map[string]string) {
x.Sort()
}

func (x *Labels) toMap() map[string]string {
// ToMap returns a map for the given labels x.
func (x *Labels) ToMap() map[string]string {
labels := x.GetLabels()
m := make(map[string]string, len(labels))
for _, label := range labels {
@ -293,10 +294,21 @@ func PutLabels(x *Labels) {

var labelsPool sync.Pool

// MustNewLabelsFromString creates labels from s, which can have the form `metric{labels}`.
//
// This function must be used only in tests. Use NewLabelsFromString in production code.
func MustNewLabelsFromString(metricWithLabels string) *Labels {
labels, err := NewLabelsFromString(metricWithLabels)
if err != nil {
logger.Panicf("BUG: cannot parse %q: %s", metricWithLabels, err)
}
return labels
}

// NewLabelsFromString creates labels from s, which can have the form `metric{labels}`.
//
// This function must be used only in tests
func NewLabelsFromString(metricWithLabels string) *Labels {
func NewLabelsFromString(metricWithLabels string) (*Labels, error) {
stripDummyMetric := false
if strings.HasPrefix(metricWithLabels, "{") {
// Add a dummy metric name, since the parser needs it
@ -311,10 +323,10 @@ func NewLabelsFromString(metricWithLabels string) *Labels {
err = fmt.Errorf("error during metric parse: %s", s)
})
if err != nil {
logger.Panicf("BUG: cannot parse %q: %s", metricWithLabels, err)
return nil, err
}
if len(rows.Rows) != 1 {
logger.Panicf("BUG: unexpected number of rows parsed; got %d; want 1", len(rows.Rows))
return nil, fmt.Errorf("unexpected number of rows parsed; got %d; want 1", len(rows.Rows))
}
r := rows.Rows[0]
var x Labels
@ -324,5 +336,5 @@ func NewLabelsFromString(metricWithLabels string) *Labels {
for _, tag := range r.Tags {
x.Add(tag.Key, tag.Value)
}
return &x
return &x, nil
}

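The hunks above split label parsing into two entry points: `NewLabelsFromString` now returns an error instead of panicking, while `MustNewLabelsFromString` keeps the panic-on-error behavior for tests. A minimal sketch of how callers might use the split (the metric string and print statements here are illustrative assumptions, not taken from this commit):

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
)

func main() {
	// Production code now handles the parse error explicitly.
	labels, err := promutils.NewLabelsFromString(`foo{bar="baz"}`)
	if err != nil {
		fmt.Println("cannot parse metric with labels:", err)
		return
	}
	fmt.Println(labels.Get("bar")) // prints: baz

	// Tests keep the old behavior: panic on malformed input.
	_ = promutils.MustNewLabelsFromString(`foo{bar="baz"}`)
}
```
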
@ -147,7 +147,7 @@ func TestLabelsAddFrom(t *testing.T) {
func TestLabelsRemoveMetaLabels(t *testing.T) {
f := func(metric, resultExpected string) {
t.Helper()
labels := NewLabelsFromString(metric)
labels := MustNewLabelsFromString(metric)
labels.RemoveMetaLabels()
result := labels.String()
if result != resultExpected {
@ -163,7 +163,7 @@ func TestLabelsRemoveMetaLabels(t *testing.T) {
func TestLabelsRemoveLabelsWithDoubleUnderscorePrefix(t *testing.T) {
f := func(metric, resultExpected string) {
t.Helper()
labels := NewLabelsFromString(metric)
labels := MustNewLabelsFromString(metric)
labels.RemoveLabelsWithDoubleUnderscorePrefix()
result := labels.String()
if result != resultExpected {

@ -8,6 +8,8 @@ import (
"io"
"strings"
"time"

"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
)

var denyQueryTracing = flag.Bool("denyQueryTracing", false, "Whether to disable the ability to trace queries. See https://docs.victoriametrics.com/#query-tracing")

@ -42,8 +44,10 @@ func New(enabled bool, format string, args ...interface{}) *Tracer {
if *denyQueryTracing || !enabled {
return nil
}
message := fmt.Sprintf(format, args...)
message = buildinfo.Version + ": " + message
return &Tracer{
message: fmt.Sprintf(format, args...),
message: message,
startTime: time.Now(),
}
}

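With this change every root trace message is prefixed with `buildinfo.Version` plus a `: ` separator, which is why the expected strings in the test hunks below gain an extra `: ` (the version string is empty in tests). A rough sketch of the resulting output shape, assuming this package layout:

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/querytracer"
)

func main() {
	qt := querytracer.New(true, "test")
	qt.Donef("foo %d", 33)
	// With an empty buildinfo.Version the message renders as ": test",
	// so the output looks like: - 0ms: : test: foo 33
	fmt.Println(qt.String())
}
```
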
@ -45,7 +45,7 @@ func TestTracerEnabled(t *testing.T) {
qt.Printf("parent %d", 789)
qt.Donef("foo %d", 33)
s := qt.String()
sExpected := `- 0ms: test: foo 33
sExpected := `- 0ms: : test: foo 33
| - 0ms: child done 456
| | - 0ms: foo 123
| - 0ms: parent 789

@ -60,7 +60,7 @@ func TestTracerMultiline(t *testing.T) {
qt.Printf("line3\nline4\n")
qt.Done()
s := qt.String()
sExpected := `- 0ms: line1
sExpected := `- 0ms: : line1
| line2
| - 0ms: line3
| | line4

@ -84,7 +84,7 @@ func TestTracerToJSON(t *testing.T) {
qt.Printf("parent %d", 789)
qt.Done()
s := qt.ToJSON()
sExpected := `{"duration_msec":0,"message":"test","children":[` +
sExpected := `{"duration_msec":0,"message":": test","children":[` +
`{"duration_msec":0,"message":"child done 456","children":[` +
`{"duration_msec":0,"message":"foo 123"}]},` +
`{"duration_msec":0,"message":"parent 789"}]}`

@ -109,9 +109,9 @@ func TestTraceAddJSON(t *testing.T) {
}
qt.Done()
s := qt.String()
sExpected := `- 0ms: parent
sExpected := `- 0ms: : parent
| - 0ms: first_line
| - 0ms: child
| - 0ms: : child
| | - 0ms: foo
| - 0ms: last_line
`

@ -120,9 +120,9 @@ func TestTraceAddJSON(t *testing.T) {
}

jsonS := qt.ToJSON()
jsonSExpected := `{"duration_msec":0,"message":"parent","children":[` +
jsonSExpected := `{"duration_msec":0,"message":": parent","children":[` +
`{"duration_msec":0,"message":"first_line"},` +
`{"duration_msec":0,"message":"child","children":[` +
`{"duration_msec":0,"message":": child","children":[` +
`{"duration_msec":0,"message":"foo"}]},` +
`{"duration_msec":0,"message":"last_line"}]}`
if !areEqualJSONTracesSkipDuration(jsonS, jsonSExpected) {

@ -137,7 +137,7 @@ func TestTraceMissingDonef(t *testing.T) {
qtChild.Printf("child printf")
qt.Printf("another parent printf")
s := qt.String()
sExpected := `- 0ms: parent: missing Tracer.Done() call
sExpected := `- 0ms: : parent: missing Tracer.Done() call
| - 0ms: parent printf
| - 0ms: child: missing Tracer.Done() call
| | - 0ms: child printf

@ -18,16 +18,36 @@ func DeduplicateSamples(srcTimestamps []int64, srcValues []float64, dedupInterva
if ts <= tsNext {
continue
}
dstTimestamps = append(dstTimestamps, srcTimestamps[i])
dstValues = append(dstValues, srcValues[i])
// Choose the maximum value with the timestamp equal to tsPrev.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3333
j := i
tsPrev := srcTimestamps[j]
vPrev := srcValues[j]
for j > 0 && srcTimestamps[j-1] == tsPrev {
j--
if srcValues[j] > vPrev {
vPrev = srcValues[j]
}
}
dstTimestamps = append(dstTimestamps, tsPrev)
dstValues = append(dstValues, vPrev)
tsNext += dedupInterval
if tsNext < ts {
tsNext = ts + dedupInterval - 1
tsNext -= tsNext % dedupInterval
}
}
dstTimestamps = append(dstTimestamps, srcTimestamps[len(srcTimestamps)-1])
dstValues = append(dstValues, srcValues[len(srcValues)-1])
j := len(srcTimestamps) - 1
tsPrev := srcTimestamps[j]
vPrev := srcValues[j]
for j > 0 && srcTimestamps[j-1] == tsPrev {
j--
if srcValues[j] > vPrev {
vPrev = srcValues[j]
}
}
dstTimestamps = append(dstTimestamps, tsPrev)
dstValues = append(dstValues, vPrev)
return dstTimestamps, dstValues
}

@ -44,16 +64,36 @@ func deduplicateSamplesDuringMerge(srcTimestamps, srcValues []int64, dedupInterv
if ts <= tsNext {
continue
}
dstTimestamps = append(dstTimestamps, srcTimestamps[i])
dstValues = append(dstValues, srcValues[i])
// Choose the maximum value with the timestamp equal to tsPrev.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3333
j := i
tsPrev := srcTimestamps[j]
vPrev := srcValues[j]
for j > 0 && srcTimestamps[j-1] == tsPrev {
j--
if srcValues[j] > vPrev {
vPrev = srcValues[j]
}
}
dstTimestamps = append(dstTimestamps, tsPrev)
dstValues = append(dstValues, vPrev)
tsNext += dedupInterval
if tsNext < ts {
tsNext = ts + dedupInterval - 1
tsNext -= tsNext % dedupInterval
}
}
dstTimestamps = append(dstTimestamps, srcTimestamps[len(srcTimestamps)-1])
dstValues = append(dstValues, srcValues[len(srcValues)-1])
j := len(srcTimestamps) - 1
tsPrev := srcTimestamps[j]
vPrev := srcValues[j]
for j > 0 && srcTimestamps[j-1] == tsPrev {
j--
if srcValues[j] > vPrev {
vPrev = srcValues[j]
}
}
dstTimestamps = append(dstTimestamps, tsPrev)
dstValues = append(dstValues, vPrev)
return dstTimestamps, dstValues
}

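Both dedup functions above now resolve ties the same way: when several raw samples share one timestamp, the largest value wins, which makes deduplication deterministic regardless of sample order (see the issue 3333 link in the comments). A standalone sketch of just that selection step, not the library API:

```go
package main

import "fmt"

// maxValueAt scans backwards from index i over samples that share the
// timestamp srcTimestamps[i] and returns that timestamp together with
// the maximum value among them, mirroring the inner loop added above.
func maxValueAt(srcTimestamps []int64, srcValues []float64, i int) (int64, float64) {
	j := i
	tsPrev := srcTimestamps[j]
	vPrev := srcValues[j]
	for j > 0 && srcTimestamps[j-1] == tsPrev {
		j--
		if srcValues[j] > vPrev {
			vPrev = srcValues[j]
		}
	}
	return tsPrev, vPrev
}

func main() {
	ts, v := maxValueAt([]int64{1000, 1001, 1001, 1001}, []float64{1, 2, 5, 3}, 3)
	fmt.Println(ts, v) // prints: 1001 5, matching the test expectations below
}
```
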
@ -35,6 +35,64 @@ func TestNeedsDedup(t *testing.T) {
f(10, []int64{0, 31, 49}, false)
}

func TestDeduplicateSamplesWithIdenticalTimestamps(t *testing.T) {
f := func(scrapeInterval time.Duration, timestamps []int64, values []float64, timestampsExpected []int64, valuesExpected []float64) {
t.Helper()
timestampsCopy := append([]int64{}, timestamps...)

dedupInterval := scrapeInterval.Milliseconds()
timestampsCopy, values = DeduplicateSamples(timestampsCopy, values, dedupInterval)
if !reflect.DeepEqual(timestampsCopy, timestampsExpected) {
t.Fatalf("invalid DeduplicateSamples(%v) timestamps;\ngot\n%v\nwant\n%v", timestamps, timestampsCopy, timestampsExpected)
}
if !reflect.DeepEqual(values, valuesExpected) {
t.Fatalf("invalid DeduplicateSamples(%v) values;\ngot\n%v\nwant\n%v", timestamps, values, valuesExpected)
}

// Verify that the second call to DeduplicateSamples doesn't modify samples.
valuesCopy := append([]float64{}, values...)
timestampsCopy, valuesCopy = DeduplicateSamples(timestampsCopy, valuesCopy, dedupInterval)
if !reflect.DeepEqual(timestampsCopy, timestampsExpected) {
t.Fatalf("invalid DeduplicateSamples(%v) timestamps for the second call;\ngot\n%v\nwant\n%v", timestamps, timestampsCopy, timestampsExpected)
}
if !reflect.DeepEqual(valuesCopy, values) {
t.Fatalf("invalid DeduplicateSamples(%v) values for the second call;\ngot\n%v\nwant\n%v", timestamps, values, valuesCopy)
}
}
f(time.Second, []int64{1000, 1000}, []float64{2, 1}, []int64{1000}, []float64{2})
f(time.Second, []int64{1001, 1001}, []float64{2, 1}, []int64{1001}, []float64{2})
f(time.Second, []int64{1000, 1001, 1001, 1001, 2001}, []float64{1, 2, 5, 3, 0}, []int64{1000, 1001, 2001}, []float64{1, 5, 0})
}

func TestDeduplicateSamplesDuringMergeWithIdenticalTimestamps(t *testing.T) {
f := func(scrapeInterval time.Duration, timestamps, values, timestampsExpected, valuesExpected []int64) {
t.Helper()
timestampsCopy := append([]int64{}, timestamps...)

dedupInterval := scrapeInterval.Milliseconds()
timestampsCopy, values = deduplicateSamplesDuringMerge(timestampsCopy, values, dedupInterval)
if !reflect.DeepEqual(timestampsCopy, timestampsExpected) {
t.Fatalf("invalid deduplicateSamplesDuringMerge(%v) timestamps;\ngot\n%v\nwant\n%v", timestamps, timestampsCopy, timestampsExpected)
}
if !reflect.DeepEqual(values, valuesExpected) {
t.Fatalf("invalid deduplicateSamplesDuringMerge(%v) values;\ngot\n%v\nwant\n%v", timestamps, values, valuesExpected)
}

// Verify that the second call to deduplicateSamplesDuringMerge doesn't modify samples.
valuesCopy := append([]int64{}, values...)
timestampsCopy, valuesCopy = deduplicateSamplesDuringMerge(timestampsCopy, valuesCopy, dedupInterval)
if !reflect.DeepEqual(timestampsCopy, timestampsExpected) {
t.Fatalf("invalid deduplicateSamplesDuringMerge(%v) timestamps for the second call;\ngot\n%v\nwant\n%v", timestamps, timestampsCopy, timestampsExpected)
}
if !reflect.DeepEqual(valuesCopy, values) {
t.Fatalf("invalid deduplicateSamplesDuringMerge(%v) values for the second call;\ngot\n%v\nwant\n%v", timestamps, values, valuesCopy)
}
}
f(time.Second, []int64{1000, 1000}, []int64{2, 1}, []int64{1000}, []int64{2})
f(time.Second, []int64{1001, 1001}, []int64{2, 1}, []int64{1001}, []int64{2})
f(time.Second, []int64{1000, 1001, 1001, 1001, 2001}, []int64{1, 2, 5, 3, 0}, []int64{1000, 1001, 2001}, []int64{1, 5, 0})
}

func TestDeduplicateSamples(t *testing.T) {
// Disable deduplication before exit, since the rest of tests expect disabled dedup.

@ -12,6 +12,7 @@ func BenchmarkDeduplicateSamples(b *testing.B) {
values := make([]float64, blockSize)
for i := 0; i < len(timestamps); i++ {
timestamps[i] = int64(i) * 1e3
values[i] = float64(i)
}
for _, minScrapeInterval := range []time.Duration{time.Second, 2 * time.Second, 5 * time.Second, 10 * time.Second} {
b.Run(fmt.Sprintf("minScrapeInterval=%s", minScrapeInterval), func(b *testing.B) {
@ -33,3 +34,32 @@ func BenchmarkDeduplicateSamples(b *testing.B) {
})
}
}

func BenchmarkDeduplicateSamplesDuringMerge(b *testing.B) {
const blockSize = 8192
timestamps := make([]int64, blockSize)
values := make([]int64, blockSize)
for i := 0; i < len(timestamps); i++ {
timestamps[i] = int64(i) * 1e3
values[i] = int64(i)
}
for _, minScrapeInterval := range []time.Duration{time.Second, 2 * time.Second, 5 * time.Second, 10 * time.Second} {
b.Run(fmt.Sprintf("minScrapeInterval=%s", minScrapeInterval), func(b *testing.B) {
dedupInterval := minScrapeInterval.Milliseconds()
b.ReportAllocs()
b.SetBytes(blockSize)
b.RunParallel(func(pb *testing.PB) {
timestampsCopy := make([]int64, 0, blockSize)
valuesCopy := make([]int64, 0, blockSize)
for pb.Next() {
timestampsCopy := append(timestampsCopy[:0], timestamps...)
valuesCopy := append(valuesCopy[:0], values...)
ts, vs := deduplicateSamplesDuringMerge(timestampsCopy, valuesCopy, dedupInterval)
if len(ts) == 0 || len(vs) == 0 {
panic(fmt.Errorf("expecting non-empty results; got\nts=%v\nvs=%v", ts, vs))
}
}
})
})
}
}

@ -1,4 +1,4 @@
GO_VERSION ?=1.19.3
GO_VERSION ?=1.19.4
SNAP_BUILDER_IMAGE := local/snap-builder:2.0.0-$(shell echo $(GO_VERSION) | tr :/ __)

40 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/CHANGELOG.md (generated, vendored)
@ -1,5 +1,45 @@
# Release History

## 0.6.1 (2022-12-09)

### Bugs Fixed

* Fix compilation error on Darwin.

## 0.6.0 (2022-12-08)

### Features Added

* Added BlobDeleteType to DeleteOptions to allow access to ['Permanent'](https://learn.microsoft.com/rest/api/storageservices/delete-blob#permanent-delete) DeleteType.
* Added [Set Blob Expiry API](https://learn.microsoft.com/rest/api/storageservices/set-blob-expiry).
* Added method `ServiceClient()` to the `azblob.Client` type, allowing access to the underlying service client.
* Added support for object level immutability policy with versioning (Version Level WORM).
* Added the custom CRC64 polynomial used by storage for transactional hashes, and implemented automatic hashing for transactions.

### Breaking Changes

* Corrected the name for `saoid` and `suoid` SAS parameters in `BlobSignatureValues` struct as per [this](https://learn.microsoft.com/rest/api/storageservices/create-user-delegation-sas#construct-a-user-delegation-sas)
* Updated type of `BlockSize` from int to int64 in `UploadStreamOptions`
* CRC64 transactional hashes are now supplied with a `uint64` rather than a `[]byte` to conform with Golang's `hash/crc64` package
* Field `XMSContentCRC64` has been renamed to `ContentCRC64`
* The `Lease*` constant types and values in the `blob` and `container` packages have been moved to the `lease` package and their names fixed up to avoid stuttering.
* Fields `TransactionalContentCRC64` and `TransactionalContentMD5` have been replaced by `TransactionalValidation`.
* Fields `SourceContentCRC64` and `SourceContentMD5` have been replaced by `SourceContentValidation`.
* Field `TransactionalContentMD5` has been removed from type `AppendBlockFromURLOptions`.

### Bugs Fixed

* Corrected signing of User Delegation SAS. Fixes [#19372](https://github.com/Azure/azure-sdk-for-go/issues/19372) and [#19454](https://github.com/Azure/azure-sdk-for-go/issues/19454)
* Added formatting of start and expiry time in [SetAccessPolicy](https://learn.microsoft.com/rest/api/storageservices/set-container-acl#request-body). Fixes [#18712](https://github.com/Azure/azure-sdk-for-go/issues/18712)
* Uploading block blobs larger than 256MB can fail in some cases with error `net/http: HTTP/1.x transport connection broken`.
* Blob name parameters are URL-encoded before constructing the complete blob URL.

### Other Changes

* Added some missing public surface area in the `container` and `service` packages.
* The `UploadStream()` methods now use anonymous memory mapped files for buffers in order to reduce heap allocations/fragmentation.
  * The anonymous memory mapped files are typically backed by the page/swap file, multiple files are not actually created.

## 0.5.1 (2022-10-11)

### Bugs Fixed

42 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/appendblob/client.go generated vendored
@@ -10,6 +10,7 @@ import (
	"context"
	"io"
	"os"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
@@ -103,6 +104,11 @@ func (ab *Client) generated() *generated.AppendBlobClient {
	return appendBlob
}

func (ab *Client) innerBlobGenerated() *generated.BlobClient {
	b := ab.BlobClient()
	return base.InnerClient((*base.Client[generated.BlobClient])(b))
}

// URL returns the URL endpoint used by the Client object.
func (ab *Client) URL() string {
	return ab.generated().Endpoint()
@@ -153,6 +159,13 @@ func (ab *Client) AppendBlock(ctx context.Context, body io.ReadSeekCloser, o *Ap

	appendOptions, appendPositionAccessConditions, cpkInfo, cpkScope, modifiedAccessConditions, leaseAccessConditions := o.format()

	if o != nil && o.TransactionalValidation != nil {
		body, err = o.TransactionalValidation.Apply(body, appendOptions)
		if err != nil {
			return AppendBlockResponse{}, nil
		}
	}

	resp, err := ab.generated().AppendBlock(ctx, count, body, appendOptions, leaseAccessConditions, appendPositionAccessConditions, cpkInfo, cpkScope, modifiedAccessConditions)

	return resp, err
@@ -190,6 +203,24 @@ func (ab *Client) Undelete(ctx context.Context, o *blob.UndeleteOptions) (blob.U
	return ab.BlobClient().Undelete(ctx, o)
}

// SetImmutabilityPolicy operation enables users to set the immutability policy on a blob.
// https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
func (ab *Client) SetImmutabilityPolicy(ctx context.Context, expiryTime time.Time, options *blob.SetImmutabilityPolicyOptions) (blob.SetImmutabilityPolicyResponse, error) {
	return ab.BlobClient().SetImmutabilityPolicy(ctx, expiryTime, options)
}

// DeleteImmutabilityPolicy operation enables users to delete the immutability policy on a blob.
// https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
func (ab *Client) DeleteImmutabilityPolicy(ctx context.Context, options *blob.DeleteImmutabilityPolicyOptions) (blob.DeleteImmutabilityPolicyResponse, error) {
	return ab.BlobClient().DeleteImmutabilityPolicy(ctx, options)
}

// SetLegalHold operation enables users to set legal hold on a blob.
// https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
func (ab *Client) SetLegalHold(ctx context.Context, legalHold bool, options *blob.SetLegalHoldOptions) (blob.SetLegalHoldResponse, error) {
	return ab.BlobClient().SetLegalHold(ctx, legalHold, options)
}

// SetTier operation sets the tier on a blob. The operation is allowed on a page
// blob in a premium storage account and on a block blob in a blob storage account (locally
// redundant storage only). A premium page blob's tier determines the allowed size, IOPS, and
@@ -200,6 +231,17 @@ func (ab *Client) SetTier(ctx context.Context, tier blob.AccessTier, o *blob.Set
	return ab.BlobClient().SetTier(ctx, tier, o)
}

// SetExpiry operation sets an expiry time on an existing blob. This operation is only allowed on Hierarchical Namespace enabled accounts.
// For more information, see https://learn.microsoft.com/en-us/rest/api/storageservices/set-blob-expiry
func (ab *Client) SetExpiry(ctx context.Context, expiryType ExpiryType, o *SetExpiryOptions) (SetExpiryResponse, error) {
	if expiryType == nil {
		expiryType = ExpiryTypeNever{}
	}
	et, opts := expiryType.Format(o)
	resp, err := ab.innerBlobGenerated().SetExpiry(ctx, et, opts)
	return resp, err
}
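For reference, the new SetExpiry surface can be exercised as below; a minimal sketch, assuming a Hierarchical Namespace enabled account, and assuming ExpiryTypeRelativeToNow wraps a time.Duration as the alias in models.go below suggests (the account and blob names are illustrative, not from this diff):

package main

import (
	"context"
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/appendblob"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical blob URL; SetExpiry is only allowed on HNS-enabled accounts.
	ab, err := appendblob.NewClient("https://myaccount.blob.core.windows.net/mycontainer/myblob", cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Expire the blob 24 hours from now.
	if _, err := ab.SetExpiry(context.Background(), appendblob.ExpiryTypeRelativeToNow(24*time.Hour), nil); err != nil {
		log.Fatal(err)
	}
}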
// GetProperties returns the blob's properties.
// For more information, see https://docs.microsoft.com/rest/api/storageservices/get-blob-properties.
func (ab *Client) GetProperties(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error) {
48 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/appendblob/models.go generated vendored
@@ -73,10 +73,9 @@ func (o *CreateOptions) format() (*generated.AppendBlobClientCreateOptions, *gen

// AppendBlockOptions contains the optional parameters for the Client.AppendBlock method.
type AppendBlockOptions struct {
	// Specify the transactional crc64 for the body, to be validated by the service.
	TransactionalContentCRC64 []byte
	// Specify the transactional md5 for the body, to be validated by the service.
	TransactionalContentMD5 []byte
	// TransactionalValidation specifies the transfer validation type to use.
	// The default is nil (no transfer validation).
	TransactionalValidation blob.TransferValidationType

	AppendPositionAccessConditions *AppendPositionAccessConditions

@@ -93,24 +92,16 @@ func (o *AppendBlockOptions) format() (*generated.AppendBlobClientAppendBlockOpt
		return nil, nil, nil, nil, nil, nil
	}

	options := &generated.AppendBlobClientAppendBlockOptions{
		TransactionalContentCRC64: o.TransactionalContentCRC64,
		TransactionalContentMD5:   o.TransactionalContentMD5,
	}
	leaseAccessConditions, modifiedAccessConditions := exported.FormatBlobAccessConditions(o.AccessConditions)
	return options, o.AppendPositionAccessConditions, o.CpkInfo, o.CpkScopeInfo, modifiedAccessConditions, leaseAccessConditions
	return &generated.AppendBlobClientAppendBlockOptions{}, o.AppendPositionAccessConditions, o.CpkInfo, o.CpkScopeInfo, modifiedAccessConditions, leaseAccessConditions
}

// ---------------------------------------------------------------------------------------------------------------------

// AppendBlockFromURLOptions contains the optional parameters for the Client.AppendBlockFromURL method.
type AppendBlockFromURLOptions struct {
	// Specify the md5 calculated for the range of bytes that must be read from the copy source.
	SourceContentMD5 []byte
	// Specify the crc64 calculated for the range of bytes that must be read from the copy source.
	SourceContentCRC64 []byte
	// Specify the transactional md5 for the body, to be validated by the service.
	TransactionalContentMD5 []byte
	// SourceContentValidation contains the validation mechanism used on the range of bytes read from the source.
	SourceContentValidation blob.SourceContentValidationType

	AppendPositionAccessConditions *AppendPositionAccessConditions

@@ -134,10 +125,11 @@ func (o *AppendBlockFromURLOptions) format() (*generated.AppendBlobClientAppendB
	}

	options := &generated.AppendBlobClientAppendBlockFromURLOptions{
		SourceRange:               exported.FormatHTTPRange(o.Range),
		SourceContentMD5:          o.SourceContentMD5,
		SourceContentcrc64:        o.SourceContentCRC64,
		TransactionalContentMD5:   o.TransactionalContentMD5,
		SourceRange: exported.FormatHTTPRange(o.Range),
	}

	if o.SourceContentValidation != nil {
		o.SourceContentValidation.Apply(options)
	}

	leaseAccessConditions, modifiedAccessConditions := exported.FormatBlobAccessConditions(o.AccessConditions)

@@ -164,3 +156,21 @@ func (o *SealOptions) format() (*generated.LeaseAccessConditions,
}

// ---------------------------------------------------------------------------------------------------------------------

// ExpiryType defines values for ExpiryType
type ExpiryType = exported.ExpiryType

// ExpiryTypeAbsolute defines the absolute time for the blob expiry
type ExpiryTypeAbsolute = exported.ExpiryTypeAbsolute

// ExpiryTypeRelativeToNow defines the duration relative to now for the blob expiry
type ExpiryTypeRelativeToNow = exported.ExpiryTypeRelativeToNow

// ExpiryTypeRelativeToCreation defines the duration relative to creation for the blob expiry
type ExpiryTypeRelativeToCreation = exported.ExpiryTypeRelativeToCreation

// ExpiryTypeNever defines that the blob will be set to never expire
type ExpiryTypeNever = exported.ExpiryTypeNever

// SetExpiryOptions contains the optional parameters for the Client.SetExpiry method.
type SetExpiryOptions = exported.SetExpiryOptions
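The replacement of SourceContentCRC64/SourceContentMD5 by the single SourceContentValidation field changes call sites roughly like this; a sketch only, where srcURL and crc are hypothetical inputs supplied by the caller:

// appendWithSourceCRC64 stages a range copied from srcURL, asking the service
// to validate it against a caller-supplied CRC64. Illustrative sketch.
func appendWithSourceCRC64(ctx context.Context, ab *appendblob.Client, srcURL string, crc []byte) error {
	opts := &appendblob.AppendBlockFromURLOptions{
		// Replaces the removed SourceContentCRC64 []byte field from this diff.
		SourceContentValidation: blob.SourceContentValidationTypeCRC64(crc),
	}
	_, err := ab.AppendBlockFromURL(ctx, srcURL, opts)
	return err
}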
3 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/appendblob/responses.go generated vendored
@@ -21,3 +21,6 @@ type AppendBlockFromURLResponse = generated.AppendBlobClientAppendBlockFromURLRe

// SealResponse contains the response from method Client.Seal.
type SealResponse = generated.AppendBlobClientSealResponse

// SetExpiryResponse contains the response from method BlobClient.SetExpiry.
type SetExpiryResponse = generated.BlobClientSetExpiryResponse
28 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob/client.go generated vendored
@@ -231,6 +231,31 @@ func (b *Client) GetTags(ctx context.Context, options *GetTagsOptions) (GetTagsR

}

// SetImmutabilityPolicy operation enables users to set the immutability policy on a blob. Mode defaults to "Unlocked".
// https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
func (b *Client) SetImmutabilityPolicy(ctx context.Context, expiryTime time.Time, options *SetImmutabilityPolicyOptions) (SetImmutabilityPolicyResponse, error) {
	blobSetImmutabilityPolicyOptions, modifiedAccessConditions := options.format()
	blobSetImmutabilityPolicyOptions.ImmutabilityPolicyExpiry = &expiryTime
	resp, err := b.generated().SetImmutabilityPolicy(ctx, blobSetImmutabilityPolicyOptions, modifiedAccessConditions)
	return resp, err
}

// DeleteImmutabilityPolicy operation enables users to delete the immutability policy on a blob.
// https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
func (b *Client) DeleteImmutabilityPolicy(ctx context.Context, options *DeleteImmutabilityPolicyOptions) (DeleteImmutabilityPolicyResponse, error) {
	deleteImmutabilityOptions := options.format()
	resp, err := b.generated().DeleteImmutabilityPolicy(ctx, deleteImmutabilityOptions)
	return resp, err
}

// SetLegalHold operation enables users to set legal hold on a blob.
// https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
func (b *Client) SetLegalHold(ctx context.Context, legalHold bool, options *SetLegalHoldOptions) (SetLegalHoldResponse, error) {
	setLegalHoldOptions := options.format()
	resp, err := b.generated().SetLegalHold(ctx, legalHold, setLegalHoldOptions)
	return resp, err
}
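Taken together, the three new methods compose like this; a sketch assuming a *blob.Client for an account with version-level WORM enabled, and assuming the ImmutabilityPolicySetting constant names defined elsewhere in this package:

// lockBlobForAWeek applies a 7-day unlocked immutability policy plus a legal hold. Sketch only.
func lockBlobForAWeek(ctx context.Context, b *blob.Client) error {
	mode := blob.ImmutabilityPolicySettingUnlocked
	_, err := b.SetImmutabilityPolicy(ctx, time.Now().Add(7*24*time.Hour), &blob.SetImmutabilityPolicyOptions{
		Mode: &mode, // defaults to "Unlocked" when unset, per the options docs in models.go below
	})
	if err != nil {
		return err
	}
	// A legal hold is independent of the time-based policy.
	_, err = b.SetLegalHold(ctx, true, nil)
	return err
}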
// CopyFromURL synchronously copies the data at the source URL to a block blob, with sizes up to 256 MB.
// For more information, see https://docs.microsoft.com/en-us/rest/api/storageservices/copy-blob-from-url.
func (b *Client) CopyFromURL(ctx context.Context, copySource string, options *CopyFromURLOptions) (CopyFromURLResponse, error) {
@@ -311,8 +336,7 @@ func (b *Client) download(ctx context.Context, writer io.WriterAt, o downloadOpt
		TransferSize: count,
		ChunkSize:    o.BlockSize,
		Concurrency:  o.Concurrency,
		Operation: func(chunkStart int64, count int64, ctx context.Context) error {

		Operation: func(ctx context.Context, chunkStart int64, count int64) error {
			downloadBlobOptions := o.getDownloadBlobOptions(HTTPRange{
				Offset: chunkStart + o.Range.Offset,
				Count:  count,
82 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob/constants.go generated vendored
@@ -168,21 +168,6 @@ func PossibleDeleteTypeValues() []DeleteType {
	return generated.PossibleDeleteTypeValues()
}

// ExpiryOptions defines values for ExpiryOptions
type ExpiryOptions = generated.ExpiryOptions

const (
	ExpiryOptionsAbsolute           ExpiryOptions = generated.ExpiryOptionsAbsolute
	ExpiryOptionsNeverExpire        ExpiryOptions = generated.ExpiryOptionsNeverExpire
	ExpiryOptionsRelativeToCreation ExpiryOptions = generated.ExpiryOptionsRelativeToCreation
	ExpiryOptionsRelativeToNow      ExpiryOptions = generated.ExpiryOptionsRelativeToNow
)

// PossibleExpiryOptionsValues returns the possible values for the ExpiryOptions const type.
func PossibleExpiryOptionsValues() []ExpiryOptions {
	return generated.PossibleExpiryOptionsValues()
}

// QueryFormatType - The quick query format type.
type QueryFormatType = generated.QueryFormatType

@@ -198,44 +183,47 @@ func PossibleQueryFormatTypeValues() []QueryFormatType {
	return generated.PossibleQueryFormatTypeValues()
}

// LeaseDurationType defines values for LeaseDurationType
type LeaseDurationType = generated.LeaseDurationType
// TransferValidationType abstracts the various mechanisms used to verify a transfer.
type TransferValidationType = exported.TransferValidationType

const (
	LeaseDurationTypeInfinite LeaseDurationType = generated.LeaseDurationTypeInfinite
	LeaseDurationTypeFixed    LeaseDurationType = generated.LeaseDurationTypeFixed
)
// TransferValidationTypeCRC64 is a TransferValidationType used to provide a precomputed CRC64.
type TransferValidationTypeCRC64 = exported.TransferValidationTypeCRC64

// PossibleLeaseDurationTypeValues returns the possible values for the LeaseDurationType const type.
func PossibleLeaseDurationTypeValues() []LeaseDurationType {
	return generated.PossibleLeaseDurationTypeValues()
// TransferValidationTypeComputeCRC64 is a TransferValidationType that indicates a CRC64 should be computed during transfer.
func TransferValidationTypeComputeCRC64() TransferValidationType {
	return exported.TransferValidationTypeComputeCRC64()
}

// LeaseStateType defines values for LeaseStateType
type LeaseStateType = generated.LeaseStateType
// TransferValidationTypeMD5 is a TransferValidationType used to provide a precomputed MD5.
type TransferValidationTypeMD5 = exported.TransferValidationTypeMD5

const (
	LeaseStateTypeAvailable LeaseStateType = generated.LeaseStateTypeAvailable
	LeaseStateTypeLeased    LeaseStateType = generated.LeaseStateTypeLeased
	LeaseStateTypeExpired   LeaseStateType = generated.LeaseStateTypeExpired
	LeaseStateTypeBreaking  LeaseStateType = generated.LeaseStateTypeBreaking
	LeaseStateTypeBroken    LeaseStateType = generated.LeaseStateTypeBroken
)

// PossibleLeaseStateTypeValues returns the possible values for the LeaseStateType const type.
func PossibleLeaseStateTypeValues() []LeaseStateType {
	return generated.PossibleLeaseStateTypeValues()
// SourceContentValidationType abstracts the various mechanisms used to validate source content.
// This interface is not publicly implementable.
type SourceContentValidationType interface {
	Apply(generated.SourceContentSetter)
	notPubliclyImplementable()
}

// LeaseStatusType defines values for LeaseStatusType
type LeaseStatusType = generated.LeaseStatusType
// SourceContentValidationTypeCRC64 is a SourceContentValidationType used to provide a precomputed CRC64.
type SourceContentValidationTypeCRC64 []byte

const (
	LeaseStatusTypeLocked   LeaseStatusType = generated.LeaseStatusTypeLocked
	LeaseStatusTypeUnlocked LeaseStatusType = generated.LeaseStatusTypeUnlocked
)

// PossibleLeaseStatusTypeValues returns the possible values for the LeaseStatusType const type.
func PossibleLeaseStatusTypeValues() []LeaseStatusType {
	return generated.PossibleLeaseStatusTypeValues()
// Apply implements the SourceContentValidationType interface for type SourceContentValidationTypeCRC64.
func (s SourceContentValidationTypeCRC64) Apply(src generated.SourceContentSetter) {
	src.SetSourceContentCRC64(s)
}

func (SourceContentValidationTypeCRC64) notPubliclyImplementable() {}

var _ SourceContentValidationType = (SourceContentValidationTypeCRC64)(nil)

// SourceContentValidationTypeMD5 is a SourceContentValidationType used to provide a precomputed MD5.
type SourceContentValidationTypeMD5 []byte

// Apply implements the SourceContentValidationType interface for type SourceContentValidationTypeMD5.
func (s SourceContentValidationTypeMD5) Apply(src generated.SourceContentSetter) {
	src.SetSourceContentMD5(s)
}

func (SourceContentValidationTypeMD5) notPubliclyImplementable() {}

var _ SourceContentValidationType = (SourceContentValidationTypeMD5)(nil)
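A quick sketch of how the transactional side of this is meant to be consumed, using TransferValidationTypeComputeCRC64 from above; ab, ctx and body are assumed inputs, and the helper name is hypothetical:

// appendWithComputedCRC64 asks the SDK to compute the storage-service CRC64
// over the body during transfer, replacing the old precomputed
// TransactionalContentCRC64 bytes removed in this diff. Sketch only.
func appendWithComputedCRC64(ctx context.Context, ab *appendblob.Client, body io.ReadSeekCloser) error {
	opts := &appendblob.AppendBlockOptions{
		TransactionalValidation: blob.TransferValidationTypeComputeCRC64(),
	}
	_, err := ab.AppendBlock(ctx, body, opts)
	return err
}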
54 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob/models.go generated vendored
@@ -194,6 +194,11 @@ type DeleteOptions struct {
	// and all of its snapshots. only: Delete only the blob's snapshots and not the blob itself
	DeleteSnapshots  *DeleteSnapshotsOptionType
	AccessConditions *AccessConditions
	// Setting DeleteType to DeleteTypePermanent will permanently delete soft-delete snapshot and/or version blobs.
	// WARNING: This is a dangerous operation and should not be used unless you know the implications. Please proceed
	// with caution.
	// For more information, see https://docs.microsoft.com/rest/api/storageservices/delete-blob
	BlobDeleteType *DeleteType
}

func (o *DeleteOptions) format() (*generated.BlobClientDeleteOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions) {
@@ -203,6 +208,7 @@ func (o *DeleteOptions) format() (*generated.BlobClientDeleteOptions, *generated
	basics := generated.BlobClientDeleteOptions{
		DeleteSnapshots: o.DeleteSnapshots,
		DeleteType:      o.BlobDeleteType, // None by default
	}

	if o.AccessConditions == nil {
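Wired through like this, the option enables permanent deletion; a sketch only, assuming the storage account has the permanent-delete feature enabled and b is a *blob.Client:

// permanentlyDelete removes the blob's soft-deleted snapshots/versions for good. Sketch only.
func permanentlyDelete(ctx context.Context, b *blob.Client) error {
	deleteType := blob.DeleteTypePermanent
	_, err := b.Delete(ctx, &blob.DeleteOptions{
		// WARNING: irreversible, per the field docs above.
		BlobDeleteType: &deleteType,
	})
	return err
}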
@@ -442,6 +448,54 @@ func (o *GetTagsOptions) format() (*generated.BlobClientGetTagsOptions, *generat

// ---------------------------------------------------------------------------------------------------------------------

// SetImmutabilityPolicyOptions contains the parameter for Client.SetImmutabilityPolicy
type SetImmutabilityPolicyOptions struct {
	// Specifies the immutability policy mode to set on the blob. Possible values to set include: "Locked", "Unlocked".
	// "Mutable" can only be returned by service, don't set to "Mutable". If mode is not set - it will default to Unlocked.
	Mode                     *ImmutabilityPolicySetting
	ModifiedAccessConditions *ModifiedAccessConditions
}

func (o *SetImmutabilityPolicyOptions) format() (*generated.BlobClientSetImmutabilityPolicyOptions, *ModifiedAccessConditions) {
	if o == nil {
		return nil, nil
	}
	ac := &exported.BlobAccessConditions{
		ModifiedAccessConditions: o.ModifiedAccessConditions,
	}
	_, modifiedAccessConditions := exported.FormatBlobAccessConditions(ac)

	options := &generated.BlobClientSetImmutabilityPolicyOptions{
		ImmutabilityPolicyMode: o.Mode,
	}

	return options, modifiedAccessConditions
}

// ---------------------------------------------------------------------------------------------------------------------

// DeleteImmutabilityPolicyOptions contains the optional parameters for the Client.DeleteImmutabilityPolicy method.
type DeleteImmutabilityPolicyOptions struct {
	// placeholder for future options
}

func (o *DeleteImmutabilityPolicyOptions) format() *generated.BlobClientDeleteImmutabilityPolicyOptions {
	return nil
}

// ---------------------------------------------------------------------------------------------------------------------

// SetLegalHoldOptions contains the optional parameters for the Client.SetLegalHold method.
type SetLegalHoldOptions struct {
	// placeholder for future options
}

func (o *SetLegalHoldOptions) format() *generated.BlobClientSetLegalHoldOptions {
	return nil
}

// ---------------------------------------------------------------------------------------------------------------------

// CopyFromURLOptions contains the optional parameters for the Client.CopyFromURL method.
type CopyFromURLOptions struct {
	// Optional. Used to set blob tags in various blob operations.
9 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob/responses.go generated vendored
@@ -85,6 +85,15 @@ type SetTagsResponse = generated.BlobClientSetTagsResponse
// GetTagsResponse contains the response from method BlobClient.GetTags.
type GetTagsResponse = generated.BlobClientGetTagsResponse

// SetImmutabilityPolicyResponse contains the response from method BlobClient.SetImmutabilityPolicy.
type SetImmutabilityPolicyResponse = generated.BlobClientSetImmutabilityPolicyResponse

// DeleteImmutabilityPolicyResponse contains the response from method BlobClient.DeleteImmutabilityPolicy.
type DeleteImmutabilityPolicyResponse = generated.BlobClientDeleteImmutabilityPolicyResponse

// SetLegalHoldResponse contains the response from method BlobClient.SetLegalHold.
type SetLegalHoldResponse = generated.BlobClientSetLegalHoldResponse

// CopyFromURLResponse contains the response from method BlobClient.CopyFromURL.
type CopyFromURLResponse = generated.BlobClientCopyFromURLResponse
403 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob/chunkwriting.go generated vendored
@@ -12,225 +12,302 @@ import (
	"encoding/base64"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
	"sync"
	"sync/atomic"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
	"github.com/Azure/azure-sdk-for-go/sdk/internal/uuid"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/internal/shared"
)

// blockWriter provides methods to upload blocks that represent a file to a server and commit them.
// This allows us to provide a local implementation that fakes the server for hermetic testing.
type blockWriter interface {
	StageBlock(context.Context, string, io.ReadSeekCloser, *StageBlockOptions) (StageBlockResponse, error)
	Upload(context.Context, io.ReadSeekCloser, *UploadOptions) (UploadResponse, error)
	CommitBlockList(context.Context, []string, *CommitBlockListOptions) (CommitBlockListResponse, error)
}

// copyFromReader copies a source io.Reader to blob storage using concurrent uploads.
// TODO(someone): The existing model provides a buffer size and buffer limit as limiting factors. The buffer size is probably
// useless other than needing to be above some number, as the network stack is going to hack up the buffer over some size. The
// max buffers is providing a cap on how much memory we use (by multiplying it times the buffer size) and how many go routines can upload
// at a time. I think having a single max memory dial would be more efficient. We can choose an internal buffer size that works
// well, 4 MiB or 8 MiB, and auto-scale to as many goroutines within the memory limit. This gives a single dial to tweak and we can
// choose a max value for the memory setting based on internal transfers within Azure (which will give us the maximum throughput model).
// We can even provide a utility to dial this number in for customer networks to optimize their copies.
func copyFromReader(ctx context.Context, from io.Reader, to blockWriter, o UploadStreamOptions) (CommitBlockListResponse, error) {
	if err := o.format(); err != nil {
		return CommitBlockListResponse{}, err
	}
// bufferManager provides an abstraction for the management of buffers.
// this is mostly for testing purposes, but does allow for different implementations without changing the algorithm.
type bufferManager[T ~[]byte] interface {
	// Acquire returns the channel that contains the pool of buffers.
	Acquire() <-chan T

	// Release releases the buffer back to the pool for reuse/cleanup.
	Release(T)

	// Grow grows the number of buffers, up to the predefined max.
	// It returns the total number of buffers or an error.
	// No error is returned if the number of buffers has reached max.
	// This is called only from the reading goroutine.
	Grow() (int, error)

	// Free cleans up all buffers.
	Free()
}
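For intuition, a heap-backed implementation of this interface could look like the following; an illustrative sketch only (the real implementation in this file is the memory-mapped mmbPool further down, and this type does not exist in the diff):

// sliceBufferManager is a hypothetical, plain-heap bufferManager.
// It mirrors mmbPool's channel-as-pool design with ordinary slices.
type sliceBufferManager struct {
	buffers chan []byte
	count   int
	max     int
	size    int64
}

func newSliceBufferManager(maxBuffers int, bufferSize int64) bufferManager[[]byte] {
	return &sliceBufferManager{
		buffers: make(chan []byte, maxBuffers),
		max:     maxBuffers,
		size:    bufferSize,
	}
}

func (m *sliceBufferManager) Acquire() <-chan []byte { return m.buffers }

func (m *sliceBufferManager) Release(b []byte) { m.buffers <- b }

func (m *sliceBufferManager) Grow() (int, error) {
	if m.count < m.max {
		m.buffers <- make([]byte, m.size) // allocate one more buffer, up to max
		m.count++
	}
	return m.count, nil
}

func (m *sliceBufferManager) Free() {
	// Assumes all buffers have been Released back; drain and let the GC reclaim them.
	for i := 0; i < m.count; i++ {
		<-m.buffers
	}
	m.count = 0
}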
// copyFromReader copies a source io.Reader to blob storage using concurrent uploads.
func copyFromReader[T ~[]byte](ctx context.Context, src io.Reader, dst blockWriter, options UploadStreamOptions, getBufferManager func(maxBuffers int, bufferSize int64) bufferManager[T]) (CommitBlockListResponse, error) {
	options.setDefaults()

	wg := sync.WaitGroup{}       // Used to know when all outgoing blocks have finished processing
	errCh := make(chan error, 1) // contains the first error encountered during processing

	buffers := getBufferManager(options.Concurrency, options.BlockSize)
	defer buffers.Free()

	// this controls the lifetime of the uploading goroutines.
	// if an error is encountered, cancel() is called which will terminate all uploads.
	// NOTE: the ordering is important here. cancel MUST execute before
	// cleaning up the buffers so that any uploading goroutines exit first,
	// releasing their buffers back to the pool for cleanup.
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	var err error
	generatedUuid, err := uuid.New()
	// all blocks have IDs that start with a random UUID
	blockIDPrefix, err := uuid.New()
	if err != nil {
		return CommitBlockListResponse{}, err
	}

	cp := &copier{
		ctx:    ctx,
		cancel: cancel,
		reader: from,
		to:     to,
		id:     newID(generatedUuid),
		o:      o,
		errCh:  make(chan error, 1),
	tracker := blockTracker{
		blockIDPrefix: blockIDPrefix,
		options:       options,
	}

	// Send all our chunks until we get an error.
	for {
		if err = cp.sendChunk(); err != nil {
	// This goroutine grabs a buffer, reads from the stream into the buffer,
	// then creates a goroutine to upload/stage the block.
	for blockNum := uint32(0); true; blockNum++ {
		var buffer T
		select {
		case buffer = <-buffers.Acquire():
			// got a buffer
		default:
			// no buffer available; allocate a new buffer if possible
			if _, err := buffers.Grow(); err != nil {
				return CommitBlockListResponse{}, err
			}

			// either grab the newly allocated buffer or wait for one to become available
			buffer = <-buffers.Acquire()
		}

		var n int
		n, err = io.ReadFull(src, buffer)

		if n > 0 {
			// some data was read, upload it
			wg.Add(1) // We're posting a buffer to be sent

			// NOTE: we must pass blockNum as an arg to our goroutine else
			// it's captured by reference and can change underneath us!
			go func(blockNum uint32) {
				// Upload the outgoing block, matching the number of bytes read
				err := tracker.uploadBlock(ctx, dst, blockNum, buffer[:n])
				if err != nil {
					select {
					case errCh <- err:
						// error was set
					default:
						// some other error is already set
					}
					cancel()
				}
				buffers.Release(buffer) // The goroutine reading from the stream can reuse this buffer now

				// signal that the block has been staged.
				// we MUST do this after attempting to write to errCh
				// to avoid it racing with the reading goroutine.
				wg.Done()
			}(blockNum)
		} else {
			// nothing was read so the buffer is empty, send it back for reuse/clean-up.
			buffers.Release(buffer)
		}

		if err != nil { // The reader is done, no more outgoing buffers
			if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) {
				// these are expected errors, we don't surface those
				err = nil
			} else {
				// some other error happened, terminate any outstanding uploads
				cancel()
			}
			break
		}
	}
	// If the error is not EOF, then we have a problem.
	if err != nil && !errors.Is(err, io.EOF) {

	wg.Wait() // Wait for all outgoing blocks to complete

	if err != nil {
		// there was an error reading from src, favor this error over any error during staging
		return CommitBlockListResponse{}, err
	}

	// Close out our upload.
	if err := cp.close(); err != nil {
		return CommitBlockListResponse{}, err
	}

	return cp.result, nil
}

// copier streams a file via chunks in parallel from a reader representing a file.
// Do not use directly, instead use copyFromReader().
type copier struct {
	// ctx holds the context of a copier. This is normally a faux pas to store a Context in a struct. In this case,
	// the copier has the lifetime of a function call, so it's fine.
	ctx    context.Context
	cancel context.CancelFunc

	// reader is the source to be written to storage.
	reader io.Reader
	// to is the location we are writing our chunks to.
	to blockWriter

	// o contains our options for uploading.
	o UploadStreamOptions

	// id provides the ids for each chunk.
	id *id

	//// num is the current chunk we are on.
	//num int32
	//// ch is used to pass the next chunk of data from our reader to one of the writers.
	//ch chan copierChunk

	// errCh is used to hold the first error from our concurrent writers.
	errCh chan error
	// wg provides a count of how many writers we are waiting to finish.
	wg sync.WaitGroup

	// result holds the final result from blob storage after we have submitted all chunks.
	result CommitBlockListResponse
}

// copierChunk contains buffer
type copierChunk struct {
	buffer []byte
	id     string
	length int
}

// getErr returns an error by priority. First, if a function set an error, it returns that error. Next, if the Context has an error
// it returns that error. Otherwise, it is nil. getErr supports only returning an error once per copier.
func (c *copier) getErr() error {
	select {
	case err := <-c.errCh:
		return err
	case err = <-errCh:
		// there was an error during staging
		return CommitBlockListResponse{}, err
	default:
		// no error was encountered
	}
	return c.ctx.Err()

	// If no error, after all blocks uploaded, commit them to the blob & return the result
	return tracker.commitBlocks(ctx, dst)
}
// sendChunk reads data from our internal reader, creates a chunk, and sends it to be written via a channel.
// sendChunk returns io.EOF when the reader returns an io.EOF or io.ErrUnexpectedEOF.
func (c *copier) sendChunk() error {
	if err := c.getErr(); err != nil {
		return err
	}
// used to manage the uploading and committing of blocks
type blockTracker struct {
	blockIDPrefix uuid.UUID // UUID used with all blockIDs
	maxBlockNum   uint32    // defaults to 0
	firstBlock    []byte    // Used only if maxBlockNum is 0
	options       UploadStreamOptions
}

	buffer := c.o.transferManager.Get()
	if len(buffer) == 0 {
		return fmt.Errorf("transferManager returned a 0 size buffer, this is a bug in the manager")
	}
func (bt *blockTracker) uploadBlock(ctx context.Context, to blockWriter, num uint32, buffer []byte) error {
	if num == 0 {
		bt.firstBlock = buffer

	n, err := io.ReadFull(c.reader, buffer)
	if n > 0 {
		// Some data was read, schedule the Write.
		id := c.id.next()
		c.wg.Add(1)
		c.o.transferManager.Run(
			func() {
				defer c.wg.Done()
				c.write(copierChunk{buffer: buffer, id: id, length: n})
			},
		)
		// If whole payload fits in 1 block, don't stage it; End will upload it with 1 I/O operation
		// If the payload is exactly the same size as the buffer, there may be more content coming in.
		if len(buffer) < int(bt.options.BlockSize) {
			return nil
		}
	} else {
		// Return the unused buffer to the manager.
		c.o.transferManager.Put(buffer)
	}

	if err == nil {
		return nil
	} else if err == io.EOF || err == io.ErrUnexpectedEOF {
		return io.EOF
	}

	if cerr := c.getErr(); cerr != nil {
		return cerr
	// Else, upload a staged block...
	atomicMorphUint32(&bt.maxBlockNum, func(startVal uint32) (val uint32, morphResult uint32) {
		// Atomically remember (in t.numBlocks) the maximum block num we've ever seen
		if startVal < num {
			return num, 0
		}
		return startVal, 0
	})
	}

	blockID := newUUIDBlockID(bt.blockIDPrefix).WithBlockNumber(num).ToBase64()
	_, err := to.StageBlock(ctx, blockID, streaming.NopCloser(bytes.NewReader(buffer)), bt.options.getStageBlockOptions())
	return err
}
// write uploads a chunk to blob storage.
func (c *copier) write(chunk copierChunk) {
	defer c.o.transferManager.Put(chunk.buffer)
func (bt *blockTracker) commitBlocks(ctx context.Context, to blockWriter) (CommitBlockListResponse, error) {
	// If the first block had the exact same size as the buffer
	// we would have staged it as a block thinking that there might be more data coming
	if bt.maxBlockNum == 0 && len(bt.firstBlock) < int(bt.options.BlockSize) {
		// If whole payload fits in 1 block (block #0), upload it with 1 I/O operation
		up, err := to.Upload(ctx, streaming.NopCloser(bytes.NewReader(bt.firstBlock)), bt.options.getUploadOptions())
		if err != nil {
			return CommitBlockListResponse{}, err
		}

	if err := c.ctx.Err(); err != nil {
		return
		// convert UploadResponse to CommitBlockListResponse
		return CommitBlockListResponse{
			ClientRequestID:     up.ClientRequestID,
			ContentMD5:          up.ContentMD5,
			Date:                up.Date,
			ETag:                up.ETag,
			EncryptionKeySHA256: up.EncryptionKeySHA256,
			EncryptionScope:     up.EncryptionScope,
			IsServerEncrypted:   up.IsServerEncrypted,
			LastModified:        up.LastModified,
			RequestID:           up.RequestID,
			Version:             up.Version,
			VersionID:           up.VersionID,
			//ContentCRC64: up.ContentCRC64, doesn't exist on UploadResponse
		}, nil
	}
	stageBlockOptions := c.o.getStageBlockOptions()
	_, err := c.to.StageBlock(c.ctx, chunk.id, shared.NopCloser(bytes.NewReader(chunk.buffer[:chunk.length])), stageBlockOptions)
	if err != nil {
		select {
		case c.errCh <- err:
			// failed to stage block, cancel the copy
		default:
			// don't block the goroutine if there's a pending error

	// Multiple blocks staged, commit them all now
	blockID := newUUIDBlockID(bt.blockIDPrefix)
	blockIDs := make([]string, bt.maxBlockNum+1)
	for bn := uint32(0); bn < bt.maxBlockNum+1; bn++ {
		blockIDs[bn] = blockID.WithBlockNumber(bn).ToBase64()
	}

	return to.CommitBlockList(ctx, blockIDs, bt.options.getCommitBlockListOptions())
}

// AtomicMorpherUint32 identifies a method passed to and invoked by the AtomicMorph function.
// The AtomicMorpher callback is passed a startValue and based on this value it returns
// what the new value should be and the result that AtomicMorph should return to its caller.
type atomicMorpherUint32 func(startVal uint32) (val uint32, morphResult uint32)

// AtomicMorph atomically morphs target into the new value (and result) as indicated by the AtomicMorpher callback function.
func atomicMorphUint32(target *uint32, morpher atomicMorpherUint32) uint32 {
	for {
		currentVal := atomic.LoadUint32(target)
		desiredVal, morphResult := morpher(currentVal)
		if atomic.CompareAndSwapUint32(target, currentVal, desiredVal) {
			return morphResult
		}
	}
}
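As a standalone illustration of the CAS loop above, here is the running-maximum pattern blockTracker uses, extracted into a sketch (variable names are illustrative):

// Track the maximum block number observed across goroutines, lock-free.
var maxSeen uint32
recordBlock := func(num uint32) {
	atomicMorphUint32(&maxSeen, func(startVal uint32) (val uint32, morphResult uint32) {
		if startVal < num {
			return num, 0 // swap in the new maximum
		}
		return startVal, 0 // keep the current value
	})
}
recordBlock(7)
recordBlock(3)
// maxSeen is now 7: the second call lost the comparison and left the value alone.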
// close commits our blocks to blob storage and closes our writer.
func (c *copier) close() error {
	c.wg.Wait()
type blockID [64]byte

	if err := c.getErr(); err != nil {
		return err
func (blockID blockID) ToBase64() string {
	return base64.StdEncoding.EncodeToString(blockID[:])
}

type uuidBlockID blockID

func newUUIDBlockID(u uuid.UUID) uuidBlockID {
	ubi := uuidBlockID{}     // Create a new uuidBlockID
	copy(ubi[:len(u)], u[:]) // Copy the specified UUID into it
	// Block number defaults to 0
	return ubi
}

func (ubi uuidBlockID) WithBlockNumber(blockNumber uint32) uuidBlockID {
	binary.BigEndian.PutUint32(ubi[len(uuid.UUID{}):], blockNumber) // Put block number after UUID
	return ubi                                                      // Return the passed-in copy
}

func (ubi uuidBlockID) ToBase64() string {
	return blockID(ubi).ToBase64()
}
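The resulting IDs are 64-byte values (16 bytes of UUID, a 4-byte big-endian block number, and zero padding from the blockID array) encoded as base64; a small sketch of the shape:

u, err := uuid.New()
if err != nil {
	panic(err)
}
id0 := newUUIDBlockID(u).WithBlockNumber(0).ToBase64()
id1 := newUUIDBlockID(u).WithBlockNumber(1).ToBase64()
// Same random UUID prefix, different block number, so the strings differ,
// while every ID within one upload shares the same prefix.
fmt.Println(id0 != id1) // true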
// mmbPool implements the bufferManager interface.
// it uses anonymous memory mapped files for buffers.
// don't use this type directly, use newMMBPool() instead.
type mmbPool struct {
	buffers chan mmb
	count   int
	max     int
	size    int64
}

func newMMBPool(maxBuffers int, bufferSize int64) bufferManager[mmb] {
	return &mmbPool{
		buffers: make(chan mmb, maxBuffers),
		max:     maxBuffers,
		size:    bufferSize,
	}

	var err error
	commitBlockListOptions := c.o.getCommitBlockListOptions()
	c.result, err = c.to.CommitBlockList(c.ctx, c.id.issued(), commitBlockListOptions)
	return err
}

// id allows the creation of unique IDs based on UUID4 + an int32. This auto-increments.
type id struct {
	u   [64]byte
	num uint32
	all []string
func (pool *mmbPool) Acquire() <-chan mmb {
	return pool.buffers
}

// newID constructs a new id.
func newID(uu uuid.UUID) *id {
	u := [64]byte{}
	copy(u[:], uu[:])
	return &id{u: u}
func (pool *mmbPool) Grow() (int, error) {
	if pool.count < pool.max {
		buffer, err := newMMB(pool.size)
		if err != nil {
			return 0, err
		}
		pool.buffers <- buffer
		pool.count++
	}
	return pool.count, nil
}

// next returns the next ID.
func (id *id) next() string {
	defer atomic.AddUint32(&id.num, 1)

	binary.BigEndian.PutUint32(id.u[len(uuid.UUID{}):], atomic.LoadUint32(&id.num))
	str := base64.StdEncoding.EncodeToString(id.u[:])
	id.all = append(id.all, str)

	return str
func (pool *mmbPool) Release(buffer mmb) {
	pool.buffers <- buffer
}

// issued returns all ids that have been issued. This returned value shares the internal slice, so it is not safe to modify the return.
// The value is only valid until the next time next() is called.
func (id *id) issued() []string {
	return id.all
func (pool *mmbPool) Free() {
	for i := 0; i < pool.count; i++ {
		buffer := <-pool.buffers
		buffer.delete()
	}
	pool.count = 0
}
69 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob/client.go generated vendored
@@ -14,6 +14,7 @@ import (
	"io"
	"os"
	"sync"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
@@ -104,6 +105,11 @@ func (bb *Client) generated() *generated.BlockBlobClient {
	return blockBlob
}

func (bb *Client) innerBlobGenerated() *generated.BlobClient {
	b := bb.BlobClient()
	return base.InnerClient((*base.Client[generated.BlobClient])(b))
}

// URL returns the URL endpoint used by the Client object.
func (bb *Client) URL() string {
	return bb.generated().Endpoint()
@@ -169,6 +175,13 @@ func (bb *Client) StageBlock(ctx context.Context, base64BlockID string, body io.

	opts, leaseAccessConditions, cpkInfo, cpkScopeInfo := options.format()

	if options != nil && options.TransactionalValidation != nil {
		body, err = options.TransactionalValidation.Apply(body, opts)
		if err != nil {
			return StageBlockResponse{}, nil
		}
	}

	resp, err := bb.generated().StageBlock(ctx, base64BlockID, count, body, opts, leaseAccessConditions, cpkInfo, cpkScopeInfo)
	return resp, err
}
@@ -218,6 +231,9 @@ func (bb *Client) CommitBlockList(ctx context.Context, base64BlockIDs []string,
			Timeout:                   options.Timeout,
			TransactionalContentCRC64: options.TransactionalContentCRC64,
			TransactionalContentMD5:   options.TransactionalContentMD5,
			LegalHold:                 options.LegalHold,
			ImmutabilityPolicyMode:    options.ImmutabilityPolicyMode,
			ImmutabilityPolicyExpiry:  options.ImmutabilityPolicyExpiryTime,
		}

		headers = options.HTTPHeaders
@@ -255,6 +271,24 @@ func (bb *Client) Undelete(ctx context.Context, o *blob.UndeleteOptions) (blob.U
	return bb.BlobClient().Undelete(ctx, o)
}

// SetImmutabilityPolicy operation enables users to set the immutability policy on a blob.
// https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
func (bb *Client) SetImmutabilityPolicy(ctx context.Context, expiryTime time.Time, options *blob.SetImmutabilityPolicyOptions) (blob.SetImmutabilityPolicyResponse, error) {
	return bb.BlobClient().SetImmutabilityPolicy(ctx, expiryTime, options)
}

// DeleteImmutabilityPolicy operation enables users to delete the immutability policy on a blob.
// https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
func (bb *Client) DeleteImmutabilityPolicy(ctx context.Context, options *blob.DeleteImmutabilityPolicyOptions) (blob.DeleteImmutabilityPolicyResponse, error) {
	return bb.BlobClient().DeleteImmutabilityPolicy(ctx, options)
}

// SetLegalHold operation enables users to set legal hold on a blob.
// https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview
func (bb *Client) SetLegalHold(ctx context.Context, legalHold bool, options *blob.SetLegalHoldOptions) (blob.SetLegalHoldResponse, error) {
	return bb.BlobClient().SetLegalHold(ctx, legalHold, options)
}

// SetTier operation sets the tier on a blob. The operation is allowed on a page
// blob in a premium storage account and on a block blob in a blob storage account (locally
// redundant storage only). A premium page blob's tier determines the allowed size, IOPS, and
@@ -265,6 +299,17 @@ func (bb *Client) SetTier(ctx context.Context, tier blob.AccessTier, o *blob.Set
	return bb.BlobClient().SetTier(ctx, tier, o)
}

// SetExpiry operation sets an expiry time on an existing blob. This operation is only allowed on Hierarchical Namespace enabled accounts.
// For more information, see https://learn.microsoft.com/en-us/rest/api/storageservices/set-blob-expiry
func (bb *Client) SetExpiry(ctx context.Context, expiryType ExpiryType, o *SetExpiryOptions) (SetExpiryResponse, error) {
	if expiryType == nil {
		expiryType = ExpiryTypeNever{}
	}
	et, opts := expiryType.Format(o)
	resp, err := bb.innerBlobGenerated().SetExpiry(ctx, et, opts)
	return resp, err
}

// GetProperties returns the blob's properties.
// For more information, see https://docs.microsoft.com/rest/api/storageservices/get-blob-properties.
func (bb *Client) GetProperties(ctx context.Context, o *blob.GetPropertiesOptions) (blob.GetPropertiesResponse, error) {
@@ -324,7 +369,8 @@ func (bb *Client) CopyFromURL(ctx context.Context, copySource string, o *blob.Co
// Concurrent Upload Functions -----------------------------------------------------------------------------------------

// uploadFromReader uploads a buffer in blocks to a block blob.
func (bb *Client) uploadFromReader(ctx context.Context, reader io.ReaderAt, readerSize int64, o *uploadFromReaderOptions) (uploadFromReaderResponse, error) {
func (bb *Client) uploadFromReader(ctx context.Context, reader io.ReaderAt, actualSize int64, o *uploadFromReaderOptions) (uploadFromReaderResponse, error) {
	readerSize := actualSize
	if o.BlockSize == 0 {
		// If bufferSize > (MaxStageBlockBytes * MaxBlocks), then error
		if readerSize > MaxStageBlockBytes*MaxBlocks {
@@ -374,11 +420,17 @@ func (bb *Client) uploadFromReader(ctx context.Context, reader io.ReaderAt, read
		TransferSize: readerSize,
		ChunkSize:    o.BlockSize,
		Concurrency:  o.Concurrency,
		Operation: func(offset int64, count int64, ctx context.Context) error {
		Operation: func(ctx context.Context, offset int64, chunkSize int64) error {
			// This function is called once per block.
			// It is passed this block's offset within the buffer and its count of bytes
			// Prepare to read the proper block/section of the buffer
			var body io.ReadSeeker = io.NewSectionReader(reader, offset, count)
			if chunkSize < o.BlockSize {
				// this is the last block. its actual size might be less
				// than the calculated size due to rounding up of the payload
				// size to fit in a whole number of blocks.
				chunkSize = (actualSize - offset)
			}
			var body io.ReadSeeker = io.NewSectionReader(reader, offset, chunkSize)
			blockNum := offset / o.BlockSize
			if o.Progress != nil {
				blockProgress := int64(0)
@@ -440,20 +492,11 @@ func (bb *Client) UploadFile(ctx context.Context, file *os.File, o *UploadFileOp
// UploadStream copies the file held in io.Reader to the Blob at blockBlobClient.
// A Context deadline or cancellation will cause this to error.
func (bb *Client) UploadStream(ctx context.Context, body io.Reader, o *UploadStreamOptions) (UploadStreamResponse, error) {
	if err := o.format(); err != nil {
		return CommitBlockListResponse{}, err
	}

	if o == nil {
		o = &UploadStreamOptions{}
	}

	// If we used the default manager, we need to close it.
	if o.transferMangerNotSet {
		defer o.transferManager.Close()
	}

	result, err := copyFromReader(ctx, body, bb, *o)
	result, err := copyFromReader(ctx, body, bb, *o, newMMBPool)
	if err != nil {
		return CommitBlockListResponse{}, err
	}
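With the transferManager gone, a caller only picks a block size and concurrency; a minimal usage sketch (client construction elided, the helper name is hypothetical, field names per UploadStreamOptions):

// streamUpload pushes an arbitrary reader into a block blob using the
// mmap-backed buffer pool introduced in this diff. Sketch only.
func streamUpload(ctx context.Context, bb *blockblob.Client, src io.Reader) error {
	_, err := bb.UploadStream(ctx, src, &blockblob.UploadStreamOptions{
		BlockSize:   4 * 1024 * 1024, // 4 MiB blocks, buffered in anonymous mapped memory
		Concurrency: 4,               // up to 4 blocks staged in parallel
	})
	return err
}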
1 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob/constants.go generated vendored
@@ -8,7 +8,6 @@ package blockblob

import "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/internal/generated"

// nolint
const (
	// CountToEnd specifies the end of the file
	CountToEnd = 0
38 vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob/mmf_unix.go generated vendored Normal file
@@ -0,0 +1,38 @@
//go:build go1.18 && (linux || darwin || freebsd || openbsd || netbsd || solaris)
// +build go1.18
// +build linux darwin freebsd openbsd netbsd solaris

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.

package blockblob

import (
	"fmt"
	"os"
	"syscall"
)

// mmb is a memory mapped buffer
type mmb []byte

// newMMB creates a new memory mapped buffer with the specified size
func newMMB(size int64) (mmb, error) {
	prot, flags := syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_ANON|syscall.MAP_PRIVATE
	addr, err := syscall.Mmap(-1, 0, int(size), prot, flags)
	if err != nil {
		return nil, os.NewSyscallError("Mmap", err)
	}
	return mmb(addr), nil
}

// delete cleans up the memory mapped buffer
func (m *mmb) delete() {
	err := syscall.Munmap(*m)
	*m = nil
	if err != nil {
		// if we get here, there is likely memory corruption.
		// please open an issue https://github.com/Azure/azure-sdk-for-go/issues
		panic(fmt.Sprintf("Munmap error: %v", err))
	}
}
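Within this package only mmbPool touches these helpers, and their lifecycle is simply map, use, unmap; a sketch (the function and payload are hypothetical):

// mmbRoundTrip demonstrates the mmb lifecycle. Sketch only.
func mmbRoundTrip(payload []byte) error {
	buf, err := newMMB(4 * 1024 * 1024) // one anonymous 4 MiB mapping, not a file on disk
	if err != nil {
		return err
	}
	copy(buf, payload) // behaves like an ordinary []byte
	buf.delete()       // unmap; buf must not be used afterwards
	return nil
}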
Some files were not shown because too many files have changed in this diff.