Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

Aliaksandr Valialkin 2022-04-20 22:55:51 +03:00
commit 6a5d6244d4
GPG key ID: A72BEC6CD3D0DED1
65 changed files with 1922 additions and 779 deletions


@@ -33,7 +33,7 @@ of Grafana dashboards if possible:
 See how to setup monitoring here:
 * [monitoring for single-node VictoriaMetrics](https://docs.victoriametrics.com/#monitoring)
-* [montioring for VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#monitoring)
+* [monitoring for VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#monitoring)

 **Version**

 The line returned when passing `--version` command line flag to the binary. For example:


@@ -6,6 +6,9 @@ on:
   pull_request:
     paths:
       - 'vendor'
+
+permissions:
+  contents: read
 jobs:
   build:
     name: Build

.github/workflows/codeql-analysis.yml vendored Normal file

@@ -0,0 +1,70 @@
+# For most projects, this workflow file will not need changing; you simply need
+# to commit it to your repository.
+#
+# You may wish to alter this file to override the set of languages analyzed,
+# or to provide custom queries or build logic.
+#
+# ******** NOTE ********
+# We have attempted to detect the languages in your repository. Please check
+# the `language` matrix defined below to confirm you have the correct set of
+# supported CodeQL languages.
+#
+name: "CodeQL"
+
+on:
+  push:
+    branches: [ master, cluster ]
+  pull_request:
+    # The branches below must be a subset of the branches above
+    branches: [ master, cluster ]
+  schedule:
+    - cron: '30 18 * * 2'
+
+jobs:
+  analyze:
+    name: Analyze
+    runs-on: ubuntu-latest
+    permissions:
+      actions: read
+      contents: read
+      security-events: write
+
+    strategy:
+      fail-fast: false
+      matrix:
+        language: [ 'go', 'javascript' ]
+        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
+        # Learn more about CodeQL language support at https://git.io/codeql-language-support
+
+    steps:
+    - name: Checkout repository
+      uses: actions/checkout@v3
+
+    # Initializes the CodeQL tools for scanning.
+    - name: Initialize CodeQL
+      uses: github/codeql-action/init@v2
+      with:
+        languages: ${{ matrix.language }}
+        # If you wish to specify custom queries, you can do so here or in a config file.
+        # By default, queries listed here will override any specified in a config file.
+        # Prefix the list here with "+" to use these queries and those in the config file.
+        # queries: ./path/to/local/query, your-org/your-repo/queries@main
+
+    # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
+    # If this step fails, then you should remove it and run the build manually (see below)
+    - name: Autobuild
+      uses: github/codeql-action/autobuild@v2
+
+    # Command-line programs to run using the OS shell.
+    # 📚 https://git.io/JvXDl
+
+    # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
+    #    and modify them (or add more) to build your code if your project
+    #    uses a compiled language
+
+    #- run: |
+    #   make bootstrap
+    #   make release
+
+    - name: Perform CodeQL Analysis
+      uses: github/codeql-action/analyze@v2


@@ -8,6 +8,9 @@ on:
   paths-ignore:
     - 'docs/**'
     - '**.md'
+
+permissions:
+  contents: read
 jobs:
   build:
     name: Build


@@ -5,8 +5,13 @@ on:
     - 'docs/*'
   branches:
     - master
+
+permissions:
+  contents: read
 jobs:
   build:
+    permissions:
+      contents: write  # for Git to git push
     runs-on: ubuntu-latest
     steps:
     - uses: actions/checkout@master


@@ -1738,8 +1738,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
     Auth key for /debug/pprof. It must be passed via authKey query arg. It overrides httpAuth.* settings
   -precisionBits int
     The number of precision bits to store per each value. Lower precision bits improves data compression at the cost of precision loss (default 64)
-  -promscrape.cluster.memberNum int
-    The number of number in the cluster of scrapers. It must be an unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster
+  -promscrape.cluster.memberNum string
+    The number of number in the cluster of scrapers. It must be an unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster. Can be specified as pod name of Kubernetes StatefulSet - pod-name-Num, where Num is a numeric part of pod name (default "0")
   -promscrape.cluster.membersCount int
     The number of members in a cluster of scrapers. Each member must have an unique -promscrape.cluster.memberNum in the range 0 ... promscrape.cluster.membersCount-1 . Each member then scrapes roughly 1/N of all the targets. By default cluster scraping is disabled, i.e. a single scraper scrapes all the targets
   -promscrape.cluster.replicationFactor int
@@ -1785,7 +1785,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
   -promscrape.httpSDCheckInterval duration
     Interval for checking for changes in http endpoint service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
   -promscrape.kubernetes.apiServerTimeout duration
-    How frequently to reload the full state from Kuberntes API server (default 30m0s)
+    How frequently to reload the full state from Kubernetes API server (default 30m0s)
   -promscrape.kubernetesSDCheckInterval duration
     Interval for checking for changes in Kubernetes API server. This works only if kubernetes_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config for details (default 30s)
   -promscrape.maxDroppedTargets int
@@ -1918,11 +1918,14 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
   -storageDataPath string
     Path to storage data (default "victoria-metrics-data")
   -tls
-    Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
+    Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
   -tlsCertFile string
-    Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+    Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+  -tlsCipherSuites array
+    Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
+    Supports an array of values separated by comma or specified via multiple flags.
   -tlsKeyFile string
-    Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
+    Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
   -version
     Show VictoriaMetrics version
 ```
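The diff above changes `-promscrape.cluster.memberNum` from an int to a string so that it can be set directly to a Kubernetes StatefulSet pod name such as `vmagent-2`. A minimal sketch of how such a value could be mapped to a numeric member index; the helper name is hypothetical and this is not the actual VictoriaMetrics implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// memberNumFromPodName extracts the numeric suffix from a
// StatefulSet-style pod name like "vmagent-2". Plain numbers such
// as "5" are accepted as-is. Illustrative helper only.
func memberNumFromPodName(s string) (int, error) {
	if n, err := strconv.Atoi(s); err == nil {
		return n, nil
	}
	idx := strings.LastIndexByte(s, '-')
	if idx < 0 {
		return 0, fmt.Errorf("cannot extract member number from %q", s)
	}
	return strconv.Atoi(s[idx+1:])
}

func main() {
	for _, s := range []string{"0", "vmagent-2", "my-agent-11"} {
		n, err := memberNumFromPodName(s)
		fmt.Println(s, n, err)
	}
}
```

With this scheme each StatefulSet replica derives its shard number from its own hostname, so all replicas can share one command line.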


@@ -463,7 +463,7 @@ It may be useful to perform `vmagent` rolling update without any scrape loss.
 * Disabling staleness tracking with `-promscrape.noStaleMarkers` option. See [these docs](#prometheus-staleness-markers).
 * Enabling stream parsing mode if `vmagent` scrapes targets with millions of metrics per target. See [these docs](#stream-parsing-mode).
 * Reducing the number of output queues with `-remoteWrite.queues` command-line option.
-* Reducing the amounts of RAM vmagent can use for in-memory buffering with `-memory.allowedPercent` or `-memory.allowedBytes` command-line option. Another option is to reduce memory limits in Docker and/or Kuberntes if `vmagent` runs under these systems.
+* Reducing the amounts of RAM vmagent can use for in-memory buffering with `-memory.allowedPercent` or `-memory.allowedBytes` command-line option. Another option is to reduce memory limits in Docker and/or Kubernetes if `vmagent` runs under these systems.
 * Reducing the number of CPU cores vmagent can use by passing `GOMAXPROCS=N` environment variable to `vmagent`, where `N` is the desired limit on CPU cores. Another option is to reduce CPU limits in Docker or Kubernetes if `vmagent` runs under these systems.
 * Passing `-promscrape.dropOriginalLabels` command-line option to `vmagent`, so it drops `"discoveredLabels"` and `"droppedTargets"` lists at `/api/v1/targets` page. This reduces memory usage when scraping big number of targets at the cost of reduced debuggability for improperly configured per-target relabeling.
@@ -841,8 +841,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
     Trim timestamps for OpenTSDB HTTP data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
   -pprofAuthKey string
     Auth key for /debug/pprof. It must be passed via authKey query arg. It overrides httpAuth.* settings
-  -promscrape.cluster.memberNum int
-    The number of number in the cluster of scrapers. It must be an unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster
+  -promscrape.cluster.memberNum string
+    The number of number in the cluster of scrapers. It must be an unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster. Can be specified as pod name of Kubernetes StatefulSet - pod-name-Num, where Num is a numeric part of pod name (default "0")
   -promscrape.cluster.membersCount int
     The number of members in a cluster of scrapers. Each member must have an unique -promscrape.cluster.memberNum in the range 0 ... promscrape.cluster.membersCount-1 . Each member then scrapes roughly 1/N of all the targets. By default cluster scraping is disabled, i.e. a single scraper scrapes all the targets
   -promscrape.cluster.replicationFactor int
@@ -888,7 +888,7 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
   -promscrape.httpSDCheckInterval duration
     Interval for checking for changes in http endpoint service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
   -promscrape.kubernetes.apiServerTimeout duration
-    How frequently to reload the full state from Kuberntes API server (default 30m0s)
+    How frequently to reload the full state from Kubernetes API server (default 30m0s)
   -promscrape.kubernetesSDCheckInterval duration
     Interval for checking for changes in Kubernetes API server. This works only if kubernetes_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config for details (default 30s)
   -promscrape.maxDroppedTargets int
@@ -1016,11 +1016,14 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
   -sortLabels
     Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}Enabled sorting for labels can slow down ingestion performance a bit
   -tls
-    Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
+    Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
   -tlsCertFile string
-    Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+    Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+  -tlsCipherSuites array
+    Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
+    Supports an array of values separated by comma or specified via multiple flags.
   -tlsKeyFile string
-    Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
+    Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
   -version
     Show VictoriaMetrics version
 ```


@@ -83,7 +83,7 @@ run-vmalert-sd: vmalert
	./bin/vmalert -rule=app/vmalert/config/testdata/rules2-good.rules \
		-datasource.url=http://localhost:8428 \
		-remoteWrite.url=http://localhost:8428 \
-		-notifier.config=app/vmalert/notifier/testdata/consul.good.yaml \
+		-notifier.config=app/vmalert/notifier/testdata/mixed.good.yaml \
		-configCheckInterval=10s

 replay-vmalert: vmalert


@@ -48,7 +48,7 @@ To start using `vmalert` you will need the following things:
 * list of rules - PromQL/MetricsQL expressions to execute;
 * datasource address - reachable MetricsQL endpoint to run queries against;
 * notifier address [optional] - reachable [Alert Manager](https://github.com/prometheus/alertmanager) instance for processing,
-aggregating alerts, and sending notifications. Please note, notifier address also supports Consul Service Discovery via
+aggregating alerts, and sending notifications. Please note, notifier address also supports Consul and DNS Service Discovery via
 [config file](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmalert/notifier/config.go).
 * remote write address [optional] - [remote write](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations)
 compatible storage to persist rules and alerts state info;
@@ -688,6 +688,8 @@ The shortlist of configuration flags is the following:
     The maximum number of concurrent requests to Prometheus autodiscovery API (Consul, Kubernetes, etc.) (default 100)
   -promscrape.discovery.concurrentWaitTime duration
     The maximum duration for waiting to perform API requests if more than -promscrape.discovery.concurrency requests are simultaneously performed (default 1m0s)
+  -promscrape.dnsSDCheckInterval duration
+    Interval for checking for changes in dns. This works only if dns_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config for details (default 30s)
   -remoteRead.basicAuth.password string
     Optional basic auth password for -remoteRead.url
   -remoteRead.basicAuth.passwordFile string
@@ -798,11 +800,14 @@ The shortlist of configuration flags is the following:
   -rule.validateTemplates
     Whether to validate annotation and label templates (default true)
   -tls
-    Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
+    Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
   -tlsCertFile string
-    Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+    Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+  -tlsCipherSuites array
+    Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
+    Supports an array of values separated by comma or specified via multiple flags.
   -tlsKeyFile string
-    Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
+    Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
   -version
     Show VictoriaMetrics version
 ```
@@ -846,8 +851,9 @@ Notifier also supports configuration via file specified with flag `notifier.conf
   -notifier.config=app/vmalert/notifier/testdata/consul.good.yaml
 ```

-The configuration file allows to configure static notifiers or discover notifiers via
-[Consul](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config).
+The configuration file allows to configure static notifiers, discover notifiers via
+[Consul](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config)
+and [DNS](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config):

 For example:
 ```
@@ -860,6 +866,12 @@ consul_sd_configs:
   - server: localhost:8500
     services:
       - alertmanager
+
+dns_sd_configs:
+  - names:
+      - my.domain.com
+    type: 'A'
+    port: 9093
 ```

 The list of configured or discovered Notifiers can be explored via [UI](#Web).
@@ -911,6 +923,11 @@ static_configs:
 consul_sd_configs:
   [ - <consul_sd_config> ... ]

+# List of DNS service discovery configurations.
+# See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config
+dns_sd_configs:
+  [ - <dns_sd_config> ... ]
+
 # List of relabel configurations for entities discovered via service discovery.
 # Supports the same relabeling features as the rest of VictoriaMetrics components.
 # See https://docs.victoriametrics.com/vmagent.html#relabeling


@@ -25,10 +25,10 @@ import (
 type Group struct {
 	Type datasource.Type `yaml:"type,omitempty"`
 	File string
 	Name        string              `yaml:"name"`
-	Interval    promutils.Duration  `yaml:"interval"`
+	Interval    *promutils.Duration `yaml:"interval,omitempty"`
 	Rules       []Rule              `yaml:"rules"`
 	Concurrency int                 `yaml:"concurrency"`
 	// ExtraFilterLabels is a list label filters applied to every rule
 	// request withing a group. Is compatible only with VM datasources.
 	// See https://docs.victoriametrics.com#prometheus-querying-api-enhancements
@@ -127,12 +127,12 @@ func (g *Group) Validate(validateAnnotations, validateExpressions bool) error {
 // recording rule or alerting rule.
 type Rule struct {
 	ID     uint64
 	Record string `yaml:"record,omitempty"`
 	Alert  string `yaml:"alert,omitempty"`
 	Expr   string `yaml:"expr"`
-	For    promutils.Duration  `yaml:"for"`
+	For    *promutils.Duration `yaml:"for,omitempty"`
 	Labels      map[string]string `yaml:"labels,omitempty"`
 	Annotations map[string]string `yaml:"annotations,omitempty"`
 	// Catches all undefined fields and must be empty after parsing.
 	XXX map[string]interface{} `yaml:",inline"`


@@ -15,6 +15,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/consul"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/dns"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
 )

@@ -29,6 +30,10 @@ type Config struct {
 	// ConsulSDConfigs contains list of settings for service discovery via Consul
 	// see https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config
 	ConsulSDConfigs []consul.SDConfig `yaml:"consul_sd_configs,omitempty"`
+	// DNSSDConfigs ontains list of settings for service discovery via DNS.
+	// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config
+	DNSSDConfigs []dns.SDConfig `yaml:"dns_sd_configs,omitempty"`
+
 	// StaticConfigs contains list of static targets
 	StaticConfigs []StaticConfig `yaml:"static_configs,omitempty"`

@@ -39,7 +44,7 @@ type Config struct {
 	// AlertRelabelConfigs contains list of relabeling rules alert labels
 	AlertRelabelConfigs []promrelabel.RelabelConfig `yaml:"alert_relabel_configs,omitempty"`
 	// The timeout used when sending alerts.
-	Timeout promutils.Duration `yaml:"timeout,omitempty"`
+	Timeout *promutils.Duration `yaml:"timeout,omitempty"`

 	// Checksum stores the hash of yaml definition for the config.
 	// May be used to detect any changes to the config file.


@@ -12,6 +12,7 @@ func TestConfigParseGood(t *testing.T) {
 	}
 	f("testdata/mixed.good.yaml")
 	f("testdata/consul.good.yaml")
+	f("testdata/dns.good.yaml")
 	f("testdata/static.good.yaml")
 }


@@ -7,6 +7,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/consul"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/dns"
 )

 // configWatcher supports dynamic reload of Notifier objects
@@ -195,6 +196,24 @@ func (cw *configWatcher) start() error {
 			return fmt.Errorf("failed to start consulSD discovery: %s", err)
 		}
 	}
+	if len(cw.cfg.DNSSDConfigs) > 0 {
+		err := cw.add(TargetDNS, *dns.SDCheckInterval, func() ([]map[string]string, error) {
+			var labels []map[string]string
+			for i := range cw.cfg.DNSSDConfigs {
+				sdc := &cw.cfg.DNSSDConfigs[i]
+				targetLabels, err := sdc.GetLabels(cw.cfg.baseDir)
+				if err != nil {
+					return nil, fmt.Errorf("got labels err: %s", err)
+				}
+				labels = append(labels, targetLabels...)
+			}
+			return labels, nil
+		})
+		if err != nil {
+			return fmt.Errorf("failed to start DNSSD discovery: %s", err)
+		}
+	}
 	return nil
 }


@@ -162,6 +162,8 @@ const (
 	TargetStatic TargetType = "static"
 	// TargetConsul is for targets discovered via Consul
 	TargetConsul TargetType = "consulSD"
+	// TargetDNS is for targets discovered via DNS
+	TargetDNS TargetType = "DNSSD"
 )

 // GetTargets returns list of static or discovered targets


@@ -0,0 +1,12 @@
+dns_sd_configs:
+- names:
+  - cloudflare.com
+  type: 'A'
+  port: 9093
+relabel_configs:
+- source_labels: [__meta_dns_name]
+  replacement: '${1}'
+  target_label: dns_name
+alert_relabel_configs:
+- target_label: "foo"
+  replacement: "aaa"


@@ -11,8 +11,18 @@ consul_sd_configs:
 - server: localhost:8500
   services:
   - consul
+
+dns_sd_configs:
+- names:
+  - cloudflare.com
+  type: 'A'
+  port: 9093
+
 relabel_configs:
 - source_labels: [__meta_consul_tags]
   regex: .*,__scheme__=([^,]+),.*
   replacement: '${1}'
   target_label: __scheme__
+- source_labels: [__meta_dns_name]
+  replacement: '${1}'
+  target_label: dns_name


@@ -284,11 +284,14 @@ See the docs at https://docs.victoriametrics.com/vmauth.html .
   -reloadAuthKey string
     Auth key for /-/reload http endpoint. It must be passed as authKey=...
   -tls
-    Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
+    Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
   -tlsCertFile string
-    Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+    Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
+  -tlsCipherSuites array
+    Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
+    Supports an array of values separated by comma or specified via multiple flags.
   -tlsKeyFile string
-    Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
+    Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
   -version
     Show VictoriaMetrics version
 ```


@ -215,7 +215,7 @@ func main() {
	err = app.Run(os.Args)
	if err != nil {
-		log.Println(err)
+		log.Fatalln(err)
	}
	log.Printf("Total time: %v", time.Since(start))
}


@ -619,7 +619,7 @@ func newAggrFuncTopK(isReverse bool) aggrFunc {
		})
		fillNaNsAtIdx(n, ks[n], tss)
	}
-	tss = removeNaNs(tss)
+	tss = removeEmptySeries(tss)
	reverseSeries(tss)
	return tss
}
@ -686,7 +686,7 @@ func getRangeTopKTimeseries(tss []*timeseries, modifier *metricsql.ModifierExpr,
	if remainingSumTS != nil {
		tss = append(tss, remainingSumTS)
	}
-	tss = removeNaNs(tss)
+	tss = removeEmptySeries(tss)
	reverseSeries(tss)
	return tss
}


@ -82,14 +82,35 @@ func newBinaryOpArithFunc(af func(left, right float64) float64) binaryOpFunc {
func newBinaryOpFunc(bf func(left, right float64, isBool bool) float64) binaryOpFunc {
	return func(bfa *binaryOpFuncArg) ([]*timeseries, error) {
-		isBool := bfa.be.Bool
-		left, right, dst, err := adjustBinaryOpTags(bfa.be, bfa.left, bfa.right)
+		left := bfa.left
+		right := bfa.right
+		switch bfa.be.Op {
+		case "ifnot":
+			left = removeEmptySeries(left)
+			// Do not remove empty series on the right side,
+			// so the left-side series could be matched against them.
+		case "default":
+			// Do not remove empty series on the left side,
+			// so they could be replaced with the corresponding series on the right side.
+			right = removeEmptySeries(right)
+			if len(right) == 0 {
+				return left, nil
+			}
+		default:
+			left = removeEmptySeries(left)
+			right = removeEmptySeries(right)
+		}
+		if len(left) == 0 || len(right) == 0 {
+			return nil, nil
+		}
+		left, right, dst, err := adjustBinaryOpTags(bfa.be, left, right)
		if err != nil {
			return nil, err
		}
		if len(left) != len(right) || len(left) != len(dst) {
			logger.Panicf("BUG: len(left) must match len(right) and len(dst); got %d vs %d vs %d", len(left), len(right), len(dst))
		}
+		isBool := bfa.be.Bool
		for i, tsLeft := range left {
			leftValues := tsLeft.Values
			rightValues := right[i].Values
@ -206,7 +227,11 @@ func ensureSingleTimeseries(side string, be *metricsql.BinaryOpExpr, tss []*time
func groupJoin(singleTimeseriesSide string, be *metricsql.BinaryOpExpr, rvsLeft, rvsRight, tssLeft, tssRight []*timeseries) ([]*timeseries, []*timeseries, error) {
	joinTags := be.JoinModifier.Args
-	var m map[string]*timeseries
+	type tsPair struct {
+		left  *timeseries
+		right *timeseries
+	}
+	m := make(map[string]*tsPair)
	for _, tsLeft := range tssLeft {
		resetMetricGroupIfRequired(be, tsLeft)
		if len(tssRight) == 1 {
@ -219,12 +244,8 @@ func groupJoin(singleTimeseriesSide string, be *metricsql.BinaryOpExpr, rvsLeft,
		// Hard case - right part contains multiple matching time series.
		// Verify it doesn't result in duplicate MetricName values after adding missing tags.
-		if m == nil {
-			m = make(map[string]*timeseries, len(tssRight))
-		} else {
-			for k := range m {
-				delete(m, k)
-			}
+		for k := range m {
+			delete(m, k)
		}
		bb := bbPool.Get()
		for _, tsRight := range tssRight {
@ -232,20 +253,29 @@ func groupJoin(singleTimeseriesSide string, be *metricsql.BinaryOpExpr, rvsLeft,
			tsCopy.CopyFromShallowTimestamps(tsLeft)
			tsCopy.MetricName.SetTags(joinTags, &tsRight.MetricName)
			bb.B = marshalMetricTagsSorted(bb.B[:0], &tsCopy.MetricName)
-			if tsExisting := m[string(bb.B)]; tsExisting != nil {
-				// Try merging tsExisting with tsRight if they don't overlap.
-				if mergeNonOverlappingTimeseries(tsExisting, tsRight) {
-					continue
+			pair, ok := m[string(bb.B)]
+			if !ok {
+				m[string(bb.B)] = &tsPair{
+					left:  &tsCopy,
+					right: tsRight,
				}
+				continue
+			}
+			// Try merging pair.right with tsRight if they don't overlap.
+			var tmp timeseries
+			tmp.CopyFromShallowTimestamps(pair.right)
+			if !mergeNonOverlappingTimeseries(&tmp, tsRight) {
				return nil, nil, fmt.Errorf("duplicate time series on the %s side of `%s %s %s`: %s and %s",
					singleTimeseriesSide, be.Op, be.GroupModifier.AppendString(nil), be.JoinModifier.AppendString(nil),
-					stringMetricTags(&tsExisting.MetricName), stringMetricTags(&tsRight.MetricName))
+					stringMetricTags(&tmp.MetricName), stringMetricTags(&tsRight.MetricName))
			}
-			m[string(bb.B)] = tsRight
-			rvsLeft = append(rvsLeft, &tsCopy)
-			rvsRight = append(rvsRight, tsRight)
+			pair.right = &tmp
		}
		bbPool.Put(bb)
+		for _, pair := range m {
+			rvsLeft = append(rvsLeft, pair.left)
+			rvsRight = append(rvsRight, pair.right)
+		}
	}
	return rvsLeft, rvsRight, nil
}
@ -322,7 +352,7 @@ func binaryOpAnd(bfa *binaryOpFuncArg) ([]*timeseries, error) {
			}
		}
	}
-		tssLeft = removeNaNs(tssLeft)
+		tssLeft = removeEmptySeries(tssLeft)
		rvs = append(rvs, tssLeft...)
	}
	return rvs, nil
@ -382,7 +412,7 @@ func binaryOpUnless(bfa *binaryOpFuncArg) ([]*timeseries, error) {
			}
		}
	}
-		tssLeft = removeNaNs(tssLeft)
+		tssLeft = removeEmptySeries(tssLeft)
		rvs = append(rvs, tssLeft...)
	}
	return rvs, nil


@ -90,7 +90,7 @@ func maySortResults(e metricsql.Expr, tss []*timeseries) bool {
}
func timeseriesToResult(tss []*timeseries, maySort bool) ([]netstorage.Result, error) {
-	tss = removeNaNs(tss)
+	tss = removeEmptySeries(tss)
	result := make([]netstorage.Result, len(tss))
	m := make(map[string]struct{}, len(tss))
	bb := bbPool.Get()
@ -143,7 +143,7 @@ func metricNameLess(a, b *storage.MetricName) bool {
	return len(ats) < len(bts)
}
-func removeNaNs(tss []*timeseries) []*timeseries {
+func removeEmptySeries(tss []*timeseries) []*timeseries {
	rvs := tss[:0]
	for _, ts := range tss {
		allNans := true

@ -2675,6 +2675,17 @@ func TestExecSuccess(t *testing.T) {
		resultExpected := []netstorage.Result{r}
		f(q, resultExpected)
	})
t.Run(`scalar default NaN`, func(t *testing.T) {
t.Parallel()
q := `time() > 1400 default (time() < -100)`
r := netstorage.Result{
MetricName: metricNameExpected,
Values: []float64{nan, nan, nan, 1600, 1800, 2000},
Timestamps: timestampsExpected,
}
resultExpected := []netstorage.Result{r}
f(q, resultExpected)
})
	t.Run(`vector default scalar`, func(t *testing.T) {
		t.Parallel()
		q := `sort_desc(union(


@ -1,7 +1,7 @@
{
  "files": {
    "main.css": "./static/css/main.d8362c27.css",
-    "main.js": "./static/js/main.d940c8c2.js",
+    "main.js": "./static/js/main.1754e6b5.js",
    "static/js/362.1a2113d4.chunk.js": "./static/js/362.1a2113d4.chunk.js",
    "static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js",
    "static/media/README.md": "./static/media/README.5e5724daf3ee333540a3.md",
@ -9,6 +9,6 @@
  },
  "entrypoints": [
    "static/css/main.d8362c27.css",
-    "static/js/main.d940c8c2.js"
+    "static/js/main.1754e6b5.js"
  ]
}


@ -1 +1 @@
-<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script defer="defer" src="./static/js/main.d940c8c2.js"></script><link href="./static/css/main.d8362c27.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
+<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script defer="defer" src="./static/js/main.1754e6b5.js"></script><link href="./static/css/main.d8362c27.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@ -8,7 +8,6 @@ import {getLimitsYAxis, getTimeSeries} from "../../../utils/uplot/axes";
import {LegendItem} from "../../../utils/uplot/types";
import {TimeParams} from "../../../types";
import {AxisRange, CustomStep, YaxisState} from "../../../state/graph/reducer";
-import Alert from "@mui/material/Alert";
export interface GraphViewProps {
  data?: MetricResult[];
@ -129,14 +128,12 @@ const GraphView: FC<GraphViewProps> = ({
  const containerRef = useRef<HTMLDivElement>(null);
  return <>
-    {(data.length > 0) ?
-      <div style={{width: "100%"}} ref={containerRef}>
-        {containerRef?.current &&
+    <div style={{width: "100%"}} ref={containerRef}>
+      {containerRef?.current &&
        <LineChart data={dataChart} series={series} metrics={data} period={period} yaxis={yaxis} unit={unit}
                   setPeriod={setPeriod} container={containerRef?.current}/>}
      {showLegend && <Legend labels={legend} query={query} onChange={onChangeLegend}/>}
    </div>
-      : <Alert color="warning" severity="warning" sx={{mt: 2}}>No data to show</Alert>}
  </>;
};


@ -96,7 +96,8 @@ const LineChart: FC<LineChartProps> = ({data, series, metrics = [],
  const getScales = (): Scales => {
    const scales: { [key: string]: { range: Scale.Range } } = {x: {range: getRangeX}};
-    Object.keys(yaxis.limits.range).forEach(axis => {
+    const ranges = Object.keys(yaxis.limits.range);
+    (ranges.length ? ranges : ["1"]).forEach(axis => {
      scales[axis] = {range: (u: uPlot, min = 0, max = 1) => getRangeY(u, min, max, axis)};
    });
    return scales;
@ -105,7 +106,7 @@ const LineChart: FC<LineChartProps> = ({data, series, metrics = [],
  const options: uPlotOptions = {
    ...defaultOptions,
    series,
-    axes: getAxes(series, unit),
+    axes: getAxes(series.length > 1 ? series : [{}, {scale: "1"}], unit),
    scales: {...getScales()},
    width: layoutSize.width || 400,
    plugins: [{hooks: {ready: onReadyChart, setCursor, setSeries: seriesFocus}}],


@ -4,7 +4,7 @@ DOCKER_NAMESPACE := victoriametrics
ROOT_IMAGE ?= alpine:3.15.4
CERTS_IMAGE := alpine:3.15.4
-GO_BUILDER_IMAGE := golang:1.18.0-alpine
+GO_BUILDER_IMAGE := golang:1.18.1-alpine
BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr :/ __)-1
BASE_IMAGE := local/base:1.1.3-$(shell echo $(ROOT_IMAGE) | tr :/ __)-$(shell echo $(CERTS_IMAGE) | tr :/ __)


@ -15,6 +15,18 @@ The following tip changes can be tested by building VictoriaMetrics components f
## tip
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): allow filtering targets by target url and by target labels with [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) on `http://vmagent:8429/targets` page. This may be useful when `vmagent` scrapes big number of targets. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1796).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): reduce `-promscrape.config` reload duration when the config contains big number of jobs (aka [scrape_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config) sections) and only a few of them are changed. Previously all the jobs were restarted. Now only the jobs with changed configs are restarted. This should reduce the probability of data miss because of slow config reload. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2270).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): improve service discovery speed for big number of scrape targets. This should help when `vmagent` discovers big number of targets (e.g. thousands) in Kubernetes cluster. The service discovery speed now should scale with the number of CPU cores available to `vmagent`.
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): add support for DNS-based discovery for notifiers in the same way as Prometheus does. See [these docs](https://docs.victoriametrics.com/vmalert.html#notifier-configuration-file) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2460).
* FEATURE: allow specifying TLS cipher suites for incoming https requests via `-tlsCipherSuites` command-line flag. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2404).
* FEATURE: allow specifying TLS cipher suites for mTLS connections between cluster components via `-cluster.tlsCipherSuites` command-line flag. See [these docs](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): show an empty graph on the selected time range when there is no data on it. Previously the `No data to show` placeholder was shown instead of the graph in this case. This prevented zooming and scrolling of such a graph.
* BUGFIX: [vmctl](https://docs.victoriametrics.com/vmctl.html): return non-zero exit code on error. This allows handling `vmctl` errors in shell scripts. Previously `vmctl` was returning 0 exit code on error. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2322).
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent.html): properly show `scrape_timeout` and `scrape_interval` options at `http://vmagent:8429/config` page. Previously these options weren't displayed even if they were set in `-promscrape.config`.
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly handle joins on time series filtered by values. For example, `kube_pod_container_resource_requests{resource="cpu"} * on (namespace,pod) group_left() (kube_pod_status_phase{phase=~"Pending|Running"}==1)`. This query could result in a `duplicate time series on the right side` error even if the `==1` filter leaves only a single time series per `(namespace,pod)` labels. Now such queries are executed properly.
## [v1.76.1](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.76.1)
@ -39,7 +51,7 @@ Released at 07-04-2022
**Update notes:** this release introduces backwards-incompatible changes to communication protocol between `vmselect` and `vmstorage` nodes in cluster version of VictoriaMetrics, so `vmselect` and `vmstorage` nodes may log communication errors during the upgrade. These errors should stop after all the `vmselect` and `vmstorage` nodes are updated to new release.
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): add ability to verify files obtained via [native export](https://docs.victoriametrics.com/#how-to-export-data-in-native-format). See [these docs](https://docs.victoriametrics.com/vmctl.html#verifying-exported-blocks-from-victoriametrics) and [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2362).
-* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): add pre-defined dasbhoards for per-job CPU usage, memory usage and disk IO usage. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2243) for details.
+* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): add pre-defined dashboards for per-job CPU usage, memory usage and disk IO usage. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2243) for details.
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): improve compatibility with [Prometheus Alert Generator specification](https://github.com/prometheus/compliance/blob/main/alert_generator/specification.md). See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2340).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): add `-datasource.disableKeepAlive` command-line flag, which can be used for disabling [HTTP keep-alive connections](https://en.wikipedia.org/wiki/HTTP_persistent_connection) to datasources. This option can be useful for distributing load among multiple datasources behind TCP proxy such as [HAProxy](http://www.haproxy.org/).
* FEATURE: [Cluster version of VictoriaMetrics](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): reduce memory usage by up to 50% for `vminsert` and `vmstorage` under high ingestion rate.
@ -706,7 +718,7 @@ Released at 02-03-2021
* `process_io_storage_read_bytes_total` - the number of bytes read from storage layer
* `process_io_storage_written_bytes_total` - the number of bytes written to storage layer
* FEATURE: vmagent: add ability to spread scrape targets among multiple `vmagent` instances. See [these docs](https://docs.victoriametrics.com/vmagent.html#scraping-big-number-of-targets) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1084) for details.
-* FEATURE: vmagent: use watch API for Kuberntes service discovery. This should reduce load on Kuberntes API server when it tracks big number of objects (for example, 10K pods). This should also reduce the time needed for k8s targets discovery. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1057) for details.
+* FEATURE: vmagent: use watch API for Kubernetes service discovery. This should reduce load on Kubernetes API server when it tracks big number of objects (for example, 10K pods). This should also reduce the time needed for k8s targets discovery. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1057) for details.
* FEATURE: vmagent: export `vm_promscrape_target_relabel_duration_seconds` metric, which can be used for monitoring the time spend on relabeling for discovered targets.
* FEATURE: vmagent: optimize [relabeling](https://docs.victoriametrics.com/vmagent.html#relabeling) performance for common cases.
* FEATURE: add `increase_pure(m[d])` function to MetricsQL. It works the same as `increase(m[d])` except of various edge cases. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/962) for details.


@ -129,8 +129,9 @@ A minimal cluster must contain the following nodes:
- a single `vminsert` node with `-storageNode=<vmstorage_host>`
- a single `vmselect` node with `-storageNode=<vmstorage_host>`
-It is recommended to run at least two nodes for each service
-for high availability purposes.
+It is recommended to run at least two nodes for each service for high availability purposes. In this case the cluster continues working when a single node is temporarily unavailable and the remaining nodes can handle the increased workload. The node may be temporarily unavailable when the underlying hardware breaks, during software upgrades, migration or other maintenance tasks.
+It is preferred to run many small `vmstorage` nodes over a few big `vmstorage` nodes, since this reduces the workload increase on the remaining `vmstorage` nodes when some of `vmstorage` nodes become temporarily unavailable.
An http load balancer such as [vmauth](https://docs.victoriametrics.com/vmauth.html) or `nginx` must be put in front of `vminsert` and `vmselect` nodes. It must contain the following routing configs according to [the url format](#url-format):
@ -153,7 +154,12 @@ It is possible manualy setting up a toy cluster on a single host. In this case e
## mTLS protection
-By default `vminsert` and `vmselect` nodes use unencrypted connections to `vmstorage` nodes, since it is assumed that all the cluster components run in a protected environment. [Enterprise version of VictoriaMetrics](https://victoriametrics.com/products/enterprise/) provides optional support for [mTLS connections](https://en.wikipedia.org/wiki/Mutual_authentication#mTLS) between cluster components. Pass `-cluster.tls=true` command-line flag to `vminsert`, `vmselect` and `vmstorage` nodes in order to enable mTLS protection. Additionally, `vminsert`, `vmselect` and `vmstorage` must be configured with mTLS certificates via `-cluster.tlsCertFile`, `-cluster.tlsKeyFile` command-line options. These certificates are mutually verified when `vminsert` and `vmselect` dial `vmstorage`. An optional `-cluster.tlsCAFile` command-line flag can be set at `vminsert`, `vmselect` and `vmstorage` for verifying peer certificates issued with custom [certificate authority](https://en.wikipedia.org/wiki/Certificate_authority).
+By default `vminsert` and `vmselect` nodes use unencrypted connections to `vmstorage` nodes, since it is assumed that all the cluster components run in a protected environment. [Enterprise version of VictoriaMetrics](https://victoriametrics.com/products/enterprise/) provides optional support for [mTLS connections](https://en.wikipedia.org/wiki/Mutual_authentication#mTLS) between cluster components. Pass `-cluster.tls=true` command-line flag to `vminsert`, `vmselect` and `vmstorage` nodes in order to enable mTLS protection. Additionally, `vminsert`, `vmselect` and `vmstorage` must be configured with mTLS certificates via `-cluster.tlsCertFile`, `-cluster.tlsKeyFile` command-line options. These certificates are mutually verified when `vminsert` and `vmselect` dial `vmstorage`.
Additionally the following optional command-line flags related to mTLS are supported:
- `-cluster.tlsCAFile` can be set at `vminsert`, `vmselect` and `vmstorage` for verifying peer certificates issued with custom [certificate authority](https://en.wikipedia.org/wiki/Certificate_authority). By default system-wide certificate authority is used for peer certificate verification.
- `-cluster.tlsCipherSuites` can be set to the list of supported TLS cipher suites at `vmstorage`. See [the list of supported TLS cipher suites](https://pkg.go.dev/crypto/tls#pkg-constants).
### Environment variables
@ -255,15 +261,18 @@ It is recommended setting up alerts in [vmalert](https://docs.victoriametrics.co
## Cluster resizing and scalability
-Cluster performance and capacity scales with adding new nodes.
-- `vminsert` and `vmselect` nodes are stateless and may be added / removed at any time.
-  Do not forget updating the list of these nodes on http load balancer.
-  Adding more `vminsert` nodes scales data ingestion rate. See [this comment](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/175#issuecomment-536925841)
-  about ingestion rate scalability.
-  Adding more `vmselect` nodes scales select queries rate.
-- `vmstorage` nodes own the ingested data, so they cannot be removed without data loss.
-  Adding more `vmstorage` nodes scales cluster capacity.
+Cluster performance and capacity can be scaled up in two ways:
+- By adding more resources (CPU, RAM, disk IO, disk space, network bandwidth) to existing nodes in the cluster (aka vertical scalability).
+- By adding more nodes to the cluster (aka horizontal scalability).
+General recommendations for cluster scalability:
+- Adding more CPU and RAM to existing `vmselect` nodes improves the performance for heavy queries, which process big number of time series with big number of raw samples.
+- Adding more `vmstorage` nodes increases the number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series) the cluster can handle. This also increases query performance over time series with [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). The cluster stability is also improved with the number of `vmstorage` nodes, since active `vmstorage` nodes need to handle lower additional workload when some of `vmstorage` nodes become unavailable.
+- Adding more CPU and RAM to existing `vmstorage` nodes increases the number of [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series) the cluster can handle. It is preferred to add more `vmstorage` nodes over adding more CPU and RAM to existing `vmstorage` nodes, since higher number of `vmstorage` nodes increases cluster stability and improves query performance over time series with [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate).
+- Adding more `vminsert` nodes increases the maximum possible data ingestion speed, since the ingested data may be split among bigger number of `vminsert` nodes.
+- Adding more `vmselect` nodes increases the maximum possible queries rate, since the incoming concurrent requests may be split among bigger number of `vmselect` nodes.
Steps to add `vmstorage` node:
@@ -336,7 +345,12 @@ Then [promxy](https://github.com/jacksontj/promxy) could be used for querying th

`vminsert` nodes can accept data from other `vminsert` nodes starting from [v1.60.0](https://docs.victoriametrics.com/CHANGELOG.html#v1600) if the `-clusternativeListenAddr` command-line flag is set. For example, if `vminsert` is started with the `-clusternativeListenAddr=:8400` command-line flag, then it can accept data from other `vminsert` nodes at TCP port 8400 in the same way as `vmstorage` nodes do. This allows chaining `vminsert` nodes and building multi-level cluster topologies with flexible configs.

For example, the top level of `vminsert` nodes can replicate data among the second level of `vminsert` nodes located in distinct availability zones (AZ), while the second-level `vminsert` nodes can spread the data among `vmstorage` nodes located in the same AZ. Such a setup guarantees cluster availability if some AZ becomes unavailable. The data from all the `vmstorage` nodes in all the AZs can be read via `vmselect` nodes, which are configured to query all the `vmstorage` nodes in all the availability zones (e.g. all the `vmstorage` addresses are passed via the `-storageNode` command-line flag to `vmselect` nodes). Additionally, `-replicationFactor=k+1` must be passed to `vmselect` nodes, where `k` is the lowest number of `vmstorage` nodes in a single AZ. See [replication docs](#replication-and-data-safety) for more details.
The multi-level cluster setup for `vminsert` nodes has the following shortcomings because of synchronous replication and data sharding:

* Data ingestion speed is limited by the slowest link to an AZ.
* `vminsert` nodes at the top level re-route incoming data to the remaining AZs when some AZs are temporarily unavailable. This results in data gaps at the AZs which were temporarily unavailable.

These issues are addressed by [vmagent](https://docs.victoriametrics.com/vmagent.html) when it runs in [multitenancy mode](https://docs.victoriametrics.com/vmagent.html#multitenancy). `vmagent` buffers the data which must be sent to a particular AZ when this AZ is temporarily unavailable. The buffer is stored on disk. The buffered data is sent to the AZ as soon as it becomes available.
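The multi-level topology described above can be sketched as a set of startup commands. Hostnames, ports and replication factors below are illustrative assumptions for a two-AZ setup with two `vmstorage` nodes per AZ, not a prescribed configuration:

```
# Top-level vminsert replicates every sample to both AZs
# (-replicationFactor equals the number of AZs):
/path/to/vminsert -replicationFactor=2 \
  -storageNode=vminsert-az1:8400,vminsert-az2:8400

# Second-level vminsert in AZ1 accepts cluster-native data on port 8400
# and shards it among vmstorage nodes in the same AZ:
/path/to/vminsert -clusternativeListenAddr=:8400 \
  -storageNode=vmstorage-az1-a:8400,vmstorage-az1-b:8400

# vmselect reads from all vmstorage nodes in all AZs;
# -replicationFactor=k+1 with k=2 vmstorage nodes per AZ:
/path/to/vmselect -replicationFactor=3 \
  -storageNode=vmstorage-az1-a:8401,vmstorage-az1-b:8401,vmstorage-az2-a:8401,vmstorage-az2-b:8401
```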
## Helm
@@ -467,11 +481,11 @@ Below is the output for `/path/to/vminsert -help`:
-cluster.tls
    Whether to use TLS for connections to -storageNode. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-cluster.tlsCAFile string
    Path to TLS CA file to use for verifying certificates provided by -storageNode if -cluster.tls flag is set. By default system CA is used. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-cluster.tlsCertFile string
    Path to client-side TLS certificate file to use when connecting to -storageNode if -cluster.tls flag is set. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-cluster.tlsKeyFile string
    Path to client-side TLS key file to use when connecting to -storageNode if -cluster.tls flag is set. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-clusternativeListenAddr string
    TCP address to listen for data from other vminsert nodes in multi-level cluster setup. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multi-level-cluster-setup . Usually :8400 must be set. Doesn't work if empty
-csvTrimTimestamp duration
@@ -587,11 +601,14 @@ Below is the output for `/path/to/vminsert -help`:
    Comma-separated addresses of vmstorage nodes; usage: -storageNode=vmstorage-host1,...,vmstorage-hostN
    Supports an array of values separated by comma or specified via multiple flags.
-tls
    Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
    Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
-tlsCipherSuites array
    Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
    Supports an array of values separated by comma or specified via multiple flags.
-tlsKeyFile string
    Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
-version
    Show VictoriaMetrics version
```
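For illustration, the TLS flags above might be combined as follows when starting `vminsert` (the file paths and cipher suite choices are assumptions, not requirements):

```
/path/to/vminsert -storageNode=vmstorage-1:8400 \
  -tls \
  -tlsCertFile=/path/to/cert.pem \
  -tlsKeyFile=/path/to/key.pem \
  -tlsCipherSuites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
```

Since both files are re-read every second, certificates can be rotated without restarting the process.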
@@ -606,11 +623,11 @@ Below is the output for `/path/to/vmselect -help`:
-cluster.tls
    Whether to use TLS for connections to -storageNode. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-cluster.tlsCAFile string
    Path to TLS CA file to use for verifying certificates provided by -storageNode if -cluster.tls flag is set. By default system CA is used. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-cluster.tlsCertFile string
    Path to client-side TLS certificate file to use when connecting to -storageNode if -cluster.tls flag is set. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-cluster.tlsKeyFile string
    Path to client-side TLS key file to use when connecting to -storageNode if -cluster.tls flag is set. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-dedup.minScrapeInterval duration
    Leave only the first sample in every time series per each discrete interval equal to -dedup.minScrapeInterval > 0. See https://docs.victoriametrics.com/#deduplication for details
-downsampling.period array
@@ -733,11 +750,14 @@ Below is the output for `/path/to/vmselect -help`:
    Comma-separated addresses of vmstorage nodes; usage: -storageNode=vmstorage-host1,...,vmstorage-hostN
    Supports an array of values separated by comma or specified via multiple flags.
-tls
    Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
    Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
-tlsCipherSuites array
    Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
    Supports an array of values separated by comma or specified via multiple flags.
-tlsKeyFile string
    Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
-version
    Show VictoriaMetrics version
```
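Similarly, the `-cluster.tls*` flags described above might be used to enable mTLS between `vmselect` and `vmstorage`; the file paths in this sketch are illustrative:

```
/path/to/vmselect -storageNode=vmstorage-1:8401,vmstorage-2:8401 \
  -cluster.tls \
  -cluster.tlsCAFile=/path/to/ca.pem \
  -cluster.tlsCertFile=/path/to/vmselect-cert.pem \
  -cluster.tlsKeyFile=/path/to/vmselect-key.pem
```

The corresponding `vmstorage` nodes must then be started with `-cluster.tls` and the matching server-side certificate flags.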
@@ -752,11 +772,14 @@ Below is the output for `/path/to/vmstorage -help`:
-cluster.tls
    Whether to use TLS when accepting connections from vminsert and vmselect. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-cluster.tlsCAFile string
    Path to TLS CA file to use for verifying certificates provided by vminsert and vmselect if -cluster.tls flag is set. By default system CA is used. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-cluster.tlsCertFile string
    Path to server-side TLS certificate file to use when accepting connections from vminsert and vmselect if -cluster.tls flag is set. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-cluster.tlsCipherSuites array
    Optional list of TLS cipher suites used for connections from vminsert and vmselect if -cluster.tls flag is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
    Supports an array of values separated by comma or specified via multiple flags.
-cluster.tlsKeyFile string
    Path to server-side TLS key file to use when accepting connections from vminsert and vmselect if -cluster.tls flag is set. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#mtls-protection
-dedup.minScrapeInterval duration
    Leave only the first sample in every time series per each discrete interval equal to -dedup.minScrapeInterval > 0. See https://docs.victoriametrics.com/#deduplication for details
-denyQueriesOutsideRetention
@@ -853,11 +876,14 @@ Below is the output for `/path/to/vmstorage -help`:
-storageDataPath string
    Path to storage data (default "vmstorage-data")
-tls
    Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
    Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
-tlsCipherSuites array
    Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
    Supports an array of values separated by comma or specified via multiple flags.
-tlsKeyFile string
    Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
-version
    Show VictoriaMetrics version
-vminsertAddr string


@@ -1738,8 +1738,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
    Auth key for /debug/pprof. It must be passed via authKey query arg. It overrides httpAuth.* settings
-precisionBits int
    The number of precision bits to store per each value. Lower precision bits improves data compression at the cost of precision loss (default 64)
-promscrape.cluster.memberNum string
    The number of the member in the cluster of scrapers. It must be a unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster. Can be specified as a pod name of a Kubernetes StatefulSet - pod-name-Num, where Num is the numeric part of the pod name (default "0")
-promscrape.cluster.membersCount int
    The number of members in a cluster of scrapers. Each member must have a unique -promscrape.cluster.memberNum in the range 0 ... promscrape.cluster.membersCount-1 . Each member then scrapes roughly 1/N of all the targets. By default cluster scraping is disabled, i.e. a single scraper scrapes all the targets
-promscrape.cluster.replicationFactor int
@@ -1785,7 +1785,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-promscrape.httpSDCheckInterval duration
    Interval for checking for changes in http endpoint service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
-promscrape.kubernetes.apiServerTimeout duration
    How frequently to reload the full state from Kubernetes API server (default 30m0s)
-promscrape.kubernetesSDCheckInterval duration
    Interval for checking for changes in Kubernetes API server. This works only if kubernetes_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config for details (default 30s)
-promscrape.maxDroppedTargets int
@@ -1918,11 +1918,14 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-storageDataPath string
    Path to storage data (default "victoria-metrics-data")
-tls
    Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
    Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
-tlsCipherSuites array
    Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
    Supports an array of values separated by comma or specified via multiple flags.
-tlsKeyFile string
    Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
-version
    Show VictoriaMetrics version
```
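Since `-promscrape.cluster.memberNum` accepts a StatefulSet pod name, the member index is effectively the ordinal suffix of the pod name. The derivation can be sketched in a small shell snippet (the pod name here is a hypothetical example):

```shell
# Derive the numeric member index from a StatefulSet pod name such as "vmagent-3".
POD_NAME="vmagent-3"          # hypothetical; in Kubernetes this usually comes from the Downward API
MEMBER_NUM="${POD_NAME##*-}"  # strip everything up to the last dash, leaving the ordinal
echo "$MEMBER_NUM"            # prints "3"
```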


@@ -1742,8 +1742,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
    Auth key for /debug/pprof. It must be passed via authKey query arg. It overrides httpAuth.* settings
-precisionBits int
    The number of precision bits to store per each value. Lower precision bits improves data compression at the cost of precision loss (default 64)
-promscrape.cluster.memberNum string
    The number of the member in the cluster of scrapers. It must be a unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster. Can be specified as a pod name of a Kubernetes StatefulSet - pod-name-Num, where Num is the numeric part of the pod name (default "0")
-promscrape.cluster.membersCount int
    The number of members in a cluster of scrapers. Each member must have a unique -promscrape.cluster.memberNum in the range 0 ... promscrape.cluster.membersCount-1 . Each member then scrapes roughly 1/N of all the targets. By default cluster scraping is disabled, i.e. a single scraper scrapes all the targets
-promscrape.cluster.replicationFactor int
@@ -1789,7 +1789,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-promscrape.httpSDCheckInterval duration
    Interval for checking for changes in http endpoint service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
-promscrape.kubernetes.apiServerTimeout duration
    How frequently to reload the full state from Kubernetes API server (default 30m0s)
-promscrape.kubernetesSDCheckInterval duration
    Interval for checking for changes in Kubernetes API server. This works only if kubernetes_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config for details (default 30s)
-promscrape.maxDroppedTargets int
@@ -1922,11 +1922,14 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-storageDataPath string
    Path to storage data (default "victoria-metrics-data")
-tls
    Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string
    Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
-tlsCipherSuites array
    Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
    Supports an array of values separated by comma or specified via multiple flags.
-tlsKeyFile string
    Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
-version
    Show VictoriaMetrics version
```


@@ -467,7 +467,7 @@ It may be useful to perform `vmagent` rolling update without any scrape loss.
* Disabling staleness tracking with the `-promscrape.noStaleMarkers` option. See [these docs](#prometheus-staleness-markers).
* Enabling stream parsing mode if `vmagent` scrapes targets with millions of metrics per target. See [these docs](#stream-parsing-mode).
* Reducing the number of output queues with the `-remoteWrite.queues` command-line option.
* Reducing the amount of RAM vmagent can use for in-memory buffering with the `-memory.allowedPercent` or `-memory.allowedBytes` command-line option. Another option is to reduce memory limits in Docker and/or Kubernetes if `vmagent` runs under these systems.
* Reducing the number of CPU cores vmagent can use by passing the `GOMAXPROCS=N` environment variable to `vmagent`, where `N` is the desired limit on CPU cores. Another option is to reduce CPU limits in Docker or Kubernetes if `vmagent` runs under these systems.
* Passing the `-promscrape.dropOriginalLabels` command-line option to `vmagent`, so it drops the `"discoveredLabels"` and `"droppedTargets"` lists at the `/api/v1/targets` page. This reduces memory usage when scraping a big number of targets at the cost of reduced debuggability for improperly configured per-target relabeling.
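Several of the tuning options above can be combined on a single command line. The values below are illustrative assumptions, not recommendations:

```
GOMAXPROCS=2 /path/to/vmagent \
  -promscrape.config=/path/to/prometheus.yml \
  -remoteWrite.url=http://victoria-metrics:8428/api/v1/write \
  -remoteWrite.queues=2 \
  -memory.allowedPercent=30 \
  -promscrape.noStaleMarkers \
  -promscrape.dropOriginalLabels
```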
@@ -845,8 +845,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
    Trim timestamps for OpenTSDB HTTP data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
-pprofAuthKey string
    Auth key for /debug/pprof. It must be passed via authKey query arg. It overrides httpAuth.* settings
-promscrape.cluster.memberNum string
    The number of the member in the cluster of scrapers. It must be a unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster. Can be specified as a pod name of a Kubernetes StatefulSet - pod-name-Num, where Num is the numeric part of the pod name (default "0")
-promscrape.cluster.membersCount int
    The number of members in a cluster of scrapers. Each member must have a unique -promscrape.cluster.memberNum in the range 0 ... promscrape.cluster.membersCount-1 . Each member then scrapes roughly 1/N of all the targets. By default cluster scraping is disabled, i.e. a single scraper scrapes all the targets
-promscrape.cluster.replicationFactor int
@@ -892,7 +892,7 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
-promscrape.httpSDCheckInterval duration
    Interval for checking for changes in http endpoint service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
-promscrape.kubernetes.apiServerTimeout duration
    How frequently to reload the full state from Kubernetes API server (default 30m0s)
-promscrape.kubernetesSDCheckInterval duration
    Interval for checking for changes in Kubernetes API server. This works only if kubernetes_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config for details (default 30s)
-promscrape.maxDroppedTargets int
@ -1020,11 +1020,14 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
-sortLabels -sortLabels
Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}Enabled sorting for labels can slow down ingestion performance a bit Whether to sort labels for incoming samples before writing them to all the configured remote storage systems. This may be needed for reducing memory usage at remote storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}Enabled sorting for labels can slow down ingestion performance a bit
-tls -tls
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
-tlsCertFile string -tlsCertFile string
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
-tlsCipherSuites array
Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
Supports an array of values separated by comma or specified via multiple flags.
-tlsKeyFile string -tlsKeyFile string
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
-version -version
Show VictoriaMetrics version Show VictoriaMetrics version
``` ```
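The updated `-promscrape.cluster.memberNum` flag accepts either a plain number or a Kubernetes StatefulSet pod name ending in `-Num`. A minimal sketch of that parsing logic; `memberNumFromPodName` is a hypothetical helper for illustration, not vmagent's actual code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// memberNumFromPodName returns the member number for a value passed to
// -promscrape.cluster.memberNum: either a plain number ("2") or a
// StatefulSet pod name with a numeric suffix ("vmagent-2").
// Illustrative sketch only, not the vmagent implementation.
func memberNumFromPodName(s string) (int, error) {
	if n, err := strconv.Atoi(s); err == nil {
		return n, nil
	}
	idx := strings.LastIndexByte(s, '-')
	if idx < 0 {
		return 0, fmt.Errorf("cannot extract member number from %q", s)
	}
	return strconv.Atoi(s[idx+1:])
}

func main() {
	n, err := memberNumFromPodName("vmagent-2")
	fmt.Println(n, err)
}
```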
@@ -52,7 +52,7 @@ To start using `vmalert` you will need the following things:
* list of rules - PromQL/MetricsQL expressions to execute;
* datasource address - reachable MetricsQL endpoint to run queries against;
* notifier address [optional] - reachable [Alert Manager](https://github.com/prometheus/alertmanager) instance for processing,
  aggregating alerts, and sending notifications. Please note, notifier address also supports Consul and DNS Service Discovery via
  [config file](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmalert/notifier/config.go).
* remote write address [optional] - [remote write](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations)
  compatible storage to persist rules and alerts state info;
@@ -692,6 +692,8 @@ The shortlist of configuration flags is the following:
     The maximum number of concurrent requests to Prometheus autodiscovery API (Consul, Kubernetes, etc.) (default 100)
  -promscrape.discovery.concurrentWaitTime duration
     The maximum duration for waiting to perform API requests if more than -promscrape.discovery.concurrency requests are simultaneously performed (default 1m0s)
  -promscrape.dnsSDCheckInterval duration
     Interval for checking for changes in dns. This works only if dns_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config for details (default 30s)
  -remoteRead.basicAuth.password string
     Optional basic auth password for -remoteRead.url
  -remoteRead.basicAuth.passwordFile string
@@ -802,11 +804,14 @@ The shortlist of configuration flags is the following:
  -rule.validateTemplates
     Whether to validate annotation and label templates (default true)
  -tls
     Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
  -tlsCertFile string
     Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
  -tlsCipherSuites array
     Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
     Supports an array of values separated by comma or specified via multiple flags.
  -tlsKeyFile string
     Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
  -version
     Show VictoriaMetrics version
```
@@ -850,8 +855,9 @@ Notifier also supports configuration via file specified with flag `notifier.config`:
-notifier.config=app/vmalert/notifier/testdata/consul.good.yaml
```

The configuration file allows configuring static notifiers or discovering notifiers via
[Consul](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config)
and [DNS](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config).
For example:

```
@@ -864,6 +870,12 @@ consul_sd_configs:
  - server: localhost:8500
    services:
      - alertmanager
dns_sd_configs:
  - names:
      - my.domain.com
    type: 'A'
    port: 9093
```

The list of configured or discovered Notifiers can be explored via [UI](#Web).
@@ -915,6 +927,11 @@ static_configs:
consul_sd_configs:
  [ - <consul_sd_config> ... ]

# List of DNS service discovery configurations.
# See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# List of relabel configurations for entities discovered via service discovery.
# Supports the same relabeling features as the rest of VictoriaMetrics components.
# See https://docs.victoriametrics.com/vmagent.html#relabeling
@@ -288,11 +288,14 @@ See the docs at https://docs.victoriametrics.com/vmauth.html .
  -reloadAuthKey string
     Auth key for /-/reload http endpoint. It must be passed as authKey=...
  -tls
     Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set
  -tlsCertFile string
     Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
  -tlsCipherSuites array
     Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants
     Supports an array of values separated by comma or specified via multiple flags.
  -tlsKeyFile string
     Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
  -version
     Show VictoriaMetrics version
```
go.mod
@@ -11,7 +11,7 @@ require (
	github.com/VictoriaMetrics/fasthttp v1.1.0
	github.com/VictoriaMetrics/metrics v1.18.1
	github.com/VictoriaMetrics/metricsql v0.41.0
	github.com/aws/aws-sdk-go v1.43.41
	github.com/cespare/xxhash/v2 v2.1.2
	github.com/cheggaaa/pb/v3 v3.0.8
	github.com/cpuguy83/go-md2man/v2 v2.0.1 // indirect
@@ -31,16 +31,16 @@ require (
	github.com/valyala/fasttemplate v1.2.1
	github.com/valyala/gozstd v1.16.0
	github.com/valyala/quicktemplate v1.7.0
	golang.org/x/net v0.0.0-20220418201149-a630d4f3e7a2
	golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5
	golang.org/x/sys v0.0.0-20220412211240-33da011f77ad
	google.golang.org/api v0.74.0
	gopkg.in/yaml.v2 v2.4.0
)

require (
	cloud.google.com/go v0.100.2 // indirect
	cloud.google.com/go/compute v1.6.0 // indirect
	cloud.google.com/go/iam v0.3.0 // indirect
	github.com/VividCortex/ewma v1.2.0 // indirect
	github.com/beorn7/perks v1.0.1 // indirect
@@ -69,7 +69,7 @@ require (
	golang.org/x/text v0.3.7 // indirect
	golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f // indirect
	google.golang.org/appengine v1.6.7 // indirect
	google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4 // indirect
	google.golang.org/grpc v1.45.0 // indirect
	google.golang.org/protobuf v1.28.0 // indirect
	gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
go.sum
@@ -41,8 +41,9 @@ cloud.google.com/go/bigtable v1.2.0/go.mod h1:JcVAOl45lrTmQfLj7T6TxyMzIN/3FGGcFm
cloud.google.com/go/bigtable v1.10.1/go.mod h1:cyHeKlx6dcZCO0oSQucYdauseD8kIENGuDOJPKMCVg8=
cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow=
cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM=
cloud.google.com/go/compute v1.5.0/go.mod h1:9SMHyhJlzhlkJqrPAc839t2BZFTSk6Jdj6mkzQJeu0M=
cloud.google.com/go/compute v1.6.0 h1:XdQIN5mdPTSBVwSIVDuY5e8ZzVAccsHvD3qTEz4zIps=
cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz/FMzPu0s=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/iam v0.3.0 h1:exkAomrVUuzx9kWFI1wm3KI0uoDeUFPB4kKGzx6x+Gc=
@@ -162,8 +163,8 @@ github.com/aws/aws-sdk-go v1.30.12/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZve
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.40.45/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
github.com/aws/aws-sdk-go v1.43.41 h1:HaazVplP8/t6SOfybQlNUmjAxLWDKdLdX8BSEHFlJdY=
github.com/aws/aws-sdk-go v1.43.41/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/aws/aws-sdk-go-v2 v1.9.1/go.mod h1:cK/D0BBs0b/oWPIcX/Z/obahJK1TT7IPVjy53i/mX/4=
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.8.1/go.mod h1:CM+19rL1+4dFWnOQKwDc7H1KwXTz+h61oUSHyhV0b3o=
@@ -1181,8 +1182,8 @@ golang.org/x/net v0.0.0-20210917221730-978cfadd31cf/go.mod h1:9nx3DQGgdP8bBQD5qx
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220418201149-a630d4f3e7a2 h1:6mzvA99KwZxbOrxww4EvWVQUnN1+xEu9tafK5ZxkYeA=
golang.org/x/net v0.0.0-20220418201149-a630d4f3e7a2/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1320,8 +1321,8 @@ golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220328115105-d36c6a25d886/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220405052023-b1e9470b6e64/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad h1:ntjMns5wyP/fN65tdBD4g8J5w8n015+iIIs9rtjXkY0=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -1565,8 +1566,10 @@ google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2
google.golang.org/genproto v0.0.0-20220310185008-1973136f34c6/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb/go.mod h1:hAL49I2IFola2sVEjAn7MEwsja0xp51I0tlGAf9hz4E=
google.golang.org/genproto v0.0.0-20220405205423-9d709892a2bf/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220413183235-5e96e2839df9/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4 h1:myaecH64R0bIEDjNORIel4iXubqzaHU1K2z8ajBwWcM=
google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
@@ -30,9 +30,10 @@ import (
)

var (
	tlsEnable   = flag.Bool("tls", false, "Whether to enable TLS for incoming HTTP requests at -httpListenAddr (aka https). -tlsCertFile and -tlsKeyFile must be set if -tls is set")
	tlsCertFile = flag.String("tlsCertFile", "", "Path to file with TLS certificate if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated")
	tlsKeyFile  = flag.String("tlsKeyFile", "", "Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated")
	tlsCipherSuites = flagutil.NewArray("tlsCipherSuites", "Optional list of TLS cipher suites for incoming requests over HTTPS if -tls is set. See the list of supported cipher suites at https://pkg.go.dev/crypto/tls#pkg-constants")

	pathPrefix = flag.String("http.pathPrefix", "", "An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, "+
		"then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. "+
@@ -90,40 +91,18 @@ func Serve(addr string, rh RequestHandler) {
	}
	logger.Infof("starting http server at %s://%s/", scheme, hostAddr)
	logger.Infof("pprof handlers are exposed at %s://%s/debug/pprof/", scheme, hostAddr)
	var tlsConfig *tls.Config
	if *tlsEnable {
		tc, err := netutil.GetServerTLSConfig(*tlsCertFile, *tlsKeyFile, *tlsCipherSuites)
		if err != nil {
			logger.Fatalf("cannot load TLS cert from -tlsCertFile=%q, -tlsKeyFile=%q: %s", *tlsCertFile, *tlsKeyFile, err)
		}
		tlsConfig = tc
	}
	ln, err := netutil.NewTCPListener(scheme, addr, tlsConfig)
	if err != nil {
		logger.Fatalf("cannot start http server at %s: %s", addr, err)
	}
	serveWithListener(addr, ln, rh)
}
@@ -40,7 +40,7 @@ type Server struct {
// MustStop must be called on the returned server when it is no longer needed.
func MustStart(addr string, insertHandler func(r io.Reader) error) *Server {
	logger.Infof("starting TCP Graphite server at %q", addr)
	lnTCP, err := netutil.NewTCPListener("graphite", addr, nil)
	if err != nil {
		logger.Fatalf("cannot start TCP Graphite server at %q: %s", addr, err)
	}
@@ -40,7 +40,7 @@ type Server struct {
// MustStop must be called on the returned server when it is no longer needed.
func MustStart(addr string, insertHandler func(r io.Reader) error) *Server {
	logger.Infof("starting TCP InfluxDB server at %q", addr)
	lnTCP, err := netutil.NewTCPListener("influx", addr, nil)
	if err != nil {
		logger.Fatalf("cannot start TCP InfluxDB server at %q: %s", addr, err)
	}
@@ -43,7 +43,7 @@ type Server struct {
// MustStop must be called on the returned server when it is no longer needed.
func MustStart(addr string, telnetInsertHandler func(r io.Reader) error, httpInsertHandler func(req *http.Request) error) *Server {
	logger.Infof("starting TCP OpenTSDB collector at %q", addr)
	lnTCP, err := netutil.NewTCPListener("opentsdb", addr, nil)
	if err != nil {
		logger.Fatalf("cannot start TCP OpenTSDB collector at %q: %s", addr, err)
	}
@@ -30,7 +30,7 @@ type Server struct {
// MustStop must be called on the returned server when it is no longer needed.
func MustStart(addr string, insertHandler func(r *http.Request) error) *Server {
	logger.Infof("starting HTTP OpenTSDB server at %q", addr)
	lnTCP, err := netutil.NewTCPListener("opentsdbhttp", addr, nil)
	if err != nil {
		logger.Fatalf("cannot start HTTP OpenTSDB collector at %q: %s", addr, err)
	}
@@ -1,6 +1,7 @@
package netutil

import (
	"crypto/tls"
	"errors"
	"flag"
	"fmt"
@@ -13,16 +14,19 @@ import (
var enableTCP6 = flag.Bool("enableTCP6", false, "Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP and UDP is used")

// NewTCPListener returns new TCP listener for the given addr and optional tlsConfig.
//
// name is used for exported metrics. Each listener in the program must have
// distinct name.
func NewTCPListener(name, addr string, tlsConfig *tls.Config) (*TCPListener, error) {
	network := GetTCPNetwork()
	ln, err := net.Listen(network, addr)
	if err != nil {
		return nil, err
	}
	if tlsConfig != nil {
		ln = tls.NewListener(ln, tlsConfig)
	}
	tln := &TCPListener{
		Listener: ln,
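The updated `NewTCPListener` signature boils down to wrapping the raw listener with `tls.NewListener` when a TLS config is supplied. A minimal standalone sketch of that pattern, using a plain `net.Listener` without the `TCPListener` metrics wrapper (`newTCPListener` here is an illustrative name, not the library function):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
)

// newTCPListener sketches the new signature: a nil tlsConfig yields a plain
// TCP listener, a non-nil one wraps the listener with tls.NewListener.
func newTCPListener(addr string, tlsConfig *tls.Config) (net.Listener, error) {
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		return nil, err
	}
	if tlsConfig != nil {
		ln = tls.NewListener(ln, tlsConfig)
	}
	return ln, nil
}

func main() {
	// Bind to an ephemeral port without TLS.
	ln, err := newTCPListener("127.0.0.1:0", nil)
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("listening on", ln.Addr().Network())
}
```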
lib/netutil/tls.go (new file)
@@ -0,0 +1,65 @@
package netutil

import (
	"crypto/tls"
	"fmt"
	"strings"
	"sync"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
)

// GetServerTLSConfig returns TLS config for the server.
func GetServerTLSConfig(tlsCertFile, tlsKeyFile string, tlsCipherSuites []string) (*tls.Config, error) {
	var certLock sync.Mutex
	var certDeadline uint64
	var cert *tls.Certificate
	c, err := tls.LoadX509KeyPair(tlsCertFile, tlsKeyFile)
	if err != nil {
		return nil, fmt.Errorf("cannot load TLS cert from certFile=%q, keyFile=%q: %w", tlsCertFile, tlsKeyFile, err)
	}
	cipherSuites, err := cipherSuitesFromNames(tlsCipherSuites)
	if err != nil {
		return nil, fmt.Errorf("cannot use TLS cipher suites from tlsCipherSuites=%q: %w", tlsCipherSuites, err)
	}
	cert = &c
	cfg := &tls.Config{
		MinVersion:               tls.VersionTLS12,
		PreferServerCipherSuites: true,
		GetCertificate: func(info *tls.ClientHelloInfo) (*tls.Certificate, error) {
			certLock.Lock()
			defer certLock.Unlock()
			if fasttime.UnixTimestamp() > certDeadline {
				c, err = tls.LoadX509KeyPair(tlsCertFile, tlsKeyFile)
				if err != nil {
					return nil, fmt.Errorf("cannot load TLS cert from certFile=%q, keyFile=%q: %w", tlsCertFile, tlsKeyFile, err)
				}
				certDeadline = fasttime.UnixTimestamp() + 1
				cert = &c
			}
			return cert, nil
		},
		CipherSuites: cipherSuites,
	}
	return cfg, nil
}

func cipherSuitesFromNames(cipherSuiteNames []string) ([]uint16, error) {
	if len(cipherSuiteNames) == 0 {
		return nil, nil
	}
	css := tls.CipherSuites()
	cssMap := make(map[string]uint16, len(css))
	for _, cs := range css {
		cssMap[strings.ToLower(cs.Name)] = cs.ID
	}
	cipherSuites := make([]uint16, 0, len(cipherSuiteNames))
	for _, name := range cipherSuiteNames {
		id, ok := cssMap[strings.ToLower(name)]
		if !ok {
			return nil, fmt.Errorf("unsupported TLS cipher suite name: %s", name)
		}
		cipherSuites = append(cipherSuites, id)
	}
	return cipherSuites, nil
}

lib/netutil/tls_test.go (new file)
@ -0,0 +1,78 @@
package netutil
import (
"reflect"
"testing"
)
func TestCipherSuitesFromNames(t *testing.T) {
type args struct {
definedCipherSuites []string
}
tests := []struct {
name string
args args
want []uint16
wantErr bool
}{
{
name: "empty cipher suites",
args: args{definedCipherSuites: []string{}},
want: nil,
},
{
name: "got wrong string",
args: args{definedCipherSuites: []string{"word"}},
want: nil,
wantErr: true,
},
{
name: "got wrong number",
args: args{definedCipherSuites: []string{"123"}},
want: nil,
wantErr: true,
},
{
name: "got correct string cipher suite",
args: args{definedCipherSuites: []string{"TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA"}},
want: []uint16{0x2f, 0x35},
wantErr: false,
},
{
name: "got correct string with different cases (upper and lower) cipher suite",
args: args{definedCipherSuites: []string{"tls_rsa_with_aes_128_cbc_sha", "TLS_RSA_WITH_AES_256_CBC_SHA"}},
want: []uint16{0x2f, 0x35},
wantErr: false,
},
{
name: "got correct number cipher suite",
args: args{definedCipherSuites: []string{"0x2f", "0x35"}},
want: nil,
wantErr: true,
},
{
name: "got insecure number cipher suite",
args: args{definedCipherSuites: []string{"0x0005", "0x000a"}},
want: nil,
wantErr: true,
},
{
name: "got insecure string cipher suite",
args: args{definedCipherSuites: []string{"TLS_ECDHE_ECDSA_WITH_RC4_128_SHA", "TLS_ECDHE_RSA_WITH_RC4_128_SHA"}},
want: nil,
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := cipherSuitesFromNames(tt.args.definedCipherSuites)
if (err != nil) != tt.wantErr {
t.Errorf("cipherSuitesFromNames() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("cipherSuitesFromNames() got = %v, want %v", got, tt.want)
}
})
}
}


@ -17,26 +17,34 @@ type IfExpression struct {
lfs []*labelFilter lfs []*labelFilter
} }
// Parse parses `if` expression from s and stores it to ie.
func (ie *IfExpression) Parse(s string) error {
expr, err := metricsql.Parse(s)
if err != nil {
return err
}
me, ok := expr.(*metricsql.MetricExpr)
if !ok {
return fmt.Errorf("expecting series selector; got %q", expr.AppendString(nil))
}
lfs, err := metricExprToLabelFilters(me)
if err != nil {
return fmt.Errorf("cannot parse series selector: %w", err)
}
ie.s = s
ie.lfs = lfs
return nil
}
// UnmarshalYAML unmarshals ie from YAML passed to f. // UnmarshalYAML unmarshals ie from YAML passed to f.
func (ie *IfExpression) UnmarshalYAML(f func(interface{}) error) error { func (ie *IfExpression) UnmarshalYAML(f func(interface{}) error) error {
var s string var s string
if err := f(&s); err != nil { if err := f(&s); err != nil {
return fmt.Errorf("cannot unmarshal `if` option: %w", err) return fmt.Errorf("cannot unmarshal `if` option: %w", err)
} }
expr, err := metricsql.Parse(s) if err := ie.Parse(s); err != nil {
if err != nil {
return fmt.Errorf("cannot parse `if` series selector: %w", err) return fmt.Errorf("cannot parse `if` series selector: %w", err)
} }
me, ok := expr.(*metricsql.MetricExpr)
if !ok {
return fmt.Errorf("expecting `if` series selector; got %q", expr.AppendString(nil))
}
lfs, err := metricExprToLabelFilters(me)
if err != nil {
return fmt.Errorf("cannot parse `if` filters: %w", err)
}
ie.s = s
ie.lfs = lfs
return nil return nil
} }


@ -10,6 +10,32 @@ import (
"gopkg.in/yaml.v2" "gopkg.in/yaml.v2"
) )
func TestIfExpressionParseFailure(t *testing.T) {
f := func(s string) {
t.Helper()
var ie IfExpression
if err := ie.Parse(s); err == nil {
t.Fatalf("expecting non-nil error when parsing %q", s)
}
}
f(`{`)
f(`{foo`)
f(`foo{`)
}
func TestIfExpressionParseSuccess(t *testing.T) {
f := func(s string) {
t.Helper()
var ie IfExpression
if err := ie.Parse(s); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
f(`foo`)
f(`{foo="bar"}`)
f(`foo{bar=~"baz", x!="y"}`)
}
func TestIfExpressionUnmarshalFailure(t *testing.T) { func TestIfExpressionUnmarshalFailure(t *testing.T) {
f := func(s string) { f := func(s string) {
t.Helper() t.Helper()


@ -18,6 +18,17 @@ func SortLabels(labels []prompbmarshal.Label) {
labelsSorterPool.Put(ls) labelsSorterPool.Put(ls)
} }
// SortLabelsStable sorts labels using stable sort.
func SortLabelsStable(labels []prompbmarshal.Label) {
ls := labelsSorterPool.Get().(*labelsSorter)
*ls = labels
if !sort.IsSorted(ls) {
sort.Stable(ls)
}
*ls = nil
labelsSorterPool.Put(ls)
}
var labelsSorterPool = &sync.Pool{ var labelsSorterPool = &sync.Pool{
New: func() interface{} { New: func() interface{} {
return &labelsSorter{} return &labelsSorter{}


@ -9,6 +9,7 @@ import (
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
"sync/atomic"
"time" "time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
@ -75,14 +76,27 @@ func mustInitClusterMemberID() {
// Config represents essential parts from Prometheus config defined at https://prometheus.io/docs/prometheus/latest/configuration/configuration/ // Config represents essential parts from Prometheus config defined at https://prometheus.io/docs/prometheus/latest/configuration/configuration/
type Config struct { type Config struct {
Global GlobalConfig `yaml:"global,omitempty"` Global GlobalConfig `yaml:"global,omitempty"`
ScrapeConfigs []ScrapeConfig `yaml:"scrape_configs,omitempty"` ScrapeConfigs []*ScrapeConfig `yaml:"scrape_configs,omitempty"`
ScrapeConfigFiles []string `yaml:"scrape_config_files,omitempty"` ScrapeConfigFiles []string `yaml:"scrape_config_files,omitempty"`
// This is set to the directory from where the config has been loaded. // This is set to the directory from where the config has been loaded.
baseDir string baseDir string
} }
func (cfg *Config) unmarshal(data []byte, isStrict bool) error {
data = envtemplate.Replace(data)
var err error
if isStrict {
if err = yaml.UnmarshalStrict(data, cfg); err != nil {
err = fmt.Errorf("%w; pass -promscrape.config.strictParse=false command-line flag for ignoring unknown fields in yaml config", err)
}
} else {
err = yaml.Unmarshal(data, cfg)
}
return err
}
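The extracted `unmarshal` method keeps one behavior switch: strict mode fails on unknown YAML fields (with a hint about `-promscrape.config.strictParse=false`), lenient mode silently ignores them. Since `gopkg.in/yaml.v2` is not in the standard library, the same strict/lenient split can be illustrated with `encoding/json`'s analogous `DisallowUnknownFields`; the helper name and error text below are illustrative:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type config struct {
	Global map[string]string `json:"global"`
}

// unmarshalMaybeStrict mirrors Config.unmarshal: strict mode fails on
// unknown fields, lenient mode silently ignores them. encoding/json's
// DisallowUnknownFields stands in for yaml.UnmarshalStrict here.
func unmarshalMaybeStrict(data []byte, dst *config, strict bool) error {
	if strict {
		dec := json.NewDecoder(bytes.NewReader(data))
		dec.DisallowUnknownFields()
		if err := dec.Decode(dst); err != nil {
			return fmt.Errorf("%w; disable strict parsing to ignore unknown fields", err)
		}
		return nil
	}
	return json.Unmarshal(data, dst)
}

func main() {
	data := []byte(`{"global": {"a": "b"}, "unknown_field": 1}`)
	var cfg config
	fmt.Println(unmarshalMaybeStrict(data, &cfg, false))        // lenient: ignores unknown_field
	fmt.Println(unmarshalMaybeStrict(data, &cfg, true) != nil)  // strict: rejects it
}
```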
func (cfg *Config) marshal() []byte { func (cfg *Config) marshal() []byte {
data, err := yaml.Marshal(cfg) data, err := yaml.Marshal(cfg)
if err != nil { if err != nil {
@ -94,19 +108,77 @@ func (cfg *Config) marshal() []byte {
func (cfg *Config) mustStart() { func (cfg *Config) mustStart() {
startTime := time.Now() startTime := time.Now()
logger.Infof("starting service discovery routines...") logger.Infof("starting service discovery routines...")
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
cfg.ScrapeConfigs[i].mustStart(cfg.baseDir) sc.mustStart(cfg.baseDir)
} }
jobNames := cfg.getJobNames() jobNames := cfg.getJobNames()
tsmGlobal.registerJobNames(jobNames) tsmGlobal.registerJobNames(jobNames)
logger.Infof("started service discovery routines in %.3f seconds", time.Since(startTime).Seconds()) logger.Infof("started service discovery routines in %.3f seconds", time.Since(startTime).Seconds())
} }
func (cfg *Config) mustRestart(prevCfg *Config) {
startTime := time.Now()
logger.Infof("restarting service discovery routines...")
prevScrapeCfgByName := make(map[string]*ScrapeConfig, len(prevCfg.ScrapeConfigs))
for _, scPrev := range prevCfg.ScrapeConfigs {
prevScrapeCfgByName[scPrev.JobName] = scPrev
}
// Loop over the new jobs: start new ones and restart updated ones.
var started, stopped, restarted int
currentJobNames := make(map[string]struct{}, len(cfg.ScrapeConfigs))
for i, sc := range cfg.ScrapeConfigs {
currentJobNames[sc.JobName] = struct{}{}
scPrev := prevScrapeCfgByName[sc.JobName]
if scPrev == nil {
// A new scrape config has appeared. Start it.
sc.mustStart(cfg.baseDir)
started++
continue
}
if areEqualScrapeConfigs(scPrev, sc) {
// The scrape config didn't change, so no need to restart it.
// Use the reference to the previous job, so it could be stopped properly later.
cfg.ScrapeConfigs[i] = scPrev
} else {
// The scrape config has changed. Stop the previous scrape config and start the new one.
scPrev.mustStop()
sc.mustStart(cfg.baseDir)
restarted++
}
}
// Stop previous jobs that are missing in the current configuration.
for _, scPrev := range prevCfg.ScrapeConfigs {
if _, ok := currentJobNames[scPrev.JobName]; !ok {
scPrev.mustStop()
stopped++
}
}
jobNames := cfg.getJobNames()
tsmGlobal.registerJobNames(jobNames)
logger.Infof("restarted service discovery routines in %.3f seconds, stopped=%d, started=%d, restarted=%d", time.Since(startTime).Seconds(), stopped, started, restarted)
}
func areEqualScrapeConfigs(a, b *ScrapeConfig) bool {
sa := a.marshal()
sb := b.marshal()
return string(sa) == string(sb)
}
func (sc *ScrapeConfig) marshal() []byte {
data, err := yaml.Marshal(sc)
if err != nil {
logger.Panicf("BUG: cannot marshal ScrapeConfig: %s", err)
}
return data
}
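`mustRestart` reconciles the previous and current configs by keying jobs on `job_name` and comparing their marshaled YAML (`areEqualScrapeConfigs`), so only changed jobs are restarted. The diff logic can be sketched generically; `reconcile` and the string-keyed maps below are a simplification, not the VictoriaMetrics API:

```go
package main

import "fmt"

// reconcile compares previous and current job configs (keyed by job name,
// compared by their serialized form, like areEqualScrapeConfigs) and
// reports which jobs must be started, stopped or restarted.
func reconcile(prev, curr map[string]string) (started, stopped, restarted []string) {
	for name, cfg := range curr {
		prevCfg, ok := prev[name]
		switch {
		case !ok:
			started = append(started, name) // new job appeared
		case prevCfg != cfg:
			restarted = append(restarted, name) // config changed
		}
		// equal configs: keep the running job untouched
	}
	for name := range prev {
		if _, ok := curr[name]; !ok {
			stopped = append(stopped, name) // job removed from config
		}
	}
	return started, stopped, restarted
}

func main() {
	prev := map[string]string{"a": "x", "b": "y", "c": "z"}
	curr := map[string]string{"a": "x", "b": "y2", "d": "w"}
	started, stopped, restarted := reconcile(prev, curr)
	fmt.Println(started, stopped, restarted) // [d] [c] [b]
}
```

Unchanged jobs keep running across a config reload, which avoids losing in-flight scrape state for the common case where only one job was edited.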
func (cfg *Config) mustStop() { func (cfg *Config) mustStop() {
startTime := time.Now() startTime := time.Now()
logger.Infof("stopping service discovery routines...") logger.Infof("stopping service discovery routines...")
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
cfg.ScrapeConfigs[i].mustStop() sc.mustStop()
} }
logger.Infof("stopped service discovery routines in %.3f seconds", time.Since(startTime).Seconds()) logger.Infof("stopped service discovery routines in %.3f seconds", time.Since(startTime).Seconds())
} }
@ -114,8 +186,8 @@ func (cfg *Config) mustStop() {
// getJobNames returns all the scrape job names from the cfg. // getJobNames returns all the scrape job names from the cfg.
func (cfg *Config) getJobNames() []string { func (cfg *Config) getJobNames() []string {
a := make([]string, 0, len(cfg.ScrapeConfigs)) a := make([]string, 0, len(cfg.ScrapeConfigs))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
a = append(a, cfg.ScrapeConfigs[i].JobName) a = append(a, sc.JobName)
} }
return a return a
} }
@ -124,9 +196,9 @@ func (cfg *Config) getJobNames() []string {
// //
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/ // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/
type GlobalConfig struct { type GlobalConfig struct {
ScrapeInterval promutils.Duration `yaml:"scrape_interval,omitempty"` ScrapeInterval *promutils.Duration `yaml:"scrape_interval,omitempty"`
ScrapeTimeout promutils.Duration `yaml:"scrape_timeout,omitempty"` ScrapeTimeout *promutils.Duration `yaml:"scrape_timeout,omitempty"`
ExternalLabels map[string]string `yaml:"external_labels,omitempty"` ExternalLabels map[string]string `yaml:"external_labels,omitempty"`
} }
// ScrapeConfig represents essential parts for `scrape_config` section of Prometheus config. // ScrapeConfig represents essential parts for `scrape_config` section of Prometheus config.
@ -134,8 +206,8 @@ type GlobalConfig struct {
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
type ScrapeConfig struct { type ScrapeConfig struct {
JobName string `yaml:"job_name"` JobName string `yaml:"job_name"`
ScrapeInterval promutils.Duration `yaml:"scrape_interval,omitempty"` ScrapeInterval *promutils.Duration `yaml:"scrape_interval,omitempty"`
ScrapeTimeout promutils.Duration `yaml:"scrape_timeout,omitempty"` ScrapeTimeout *promutils.Duration `yaml:"scrape_timeout,omitempty"`
MetricsPath string `yaml:"metrics_path,omitempty"` MetricsPath string `yaml:"metrics_path,omitempty"`
HonorLabels bool `yaml:"honor_labels,omitempty"` HonorLabels bool `yaml:"honor_labels,omitempty"`
HonorTimestamps *bool `yaml:"honor_timestamps,omitempty"` HonorTimestamps *bool `yaml:"honor_timestamps,omitempty"`
@ -168,8 +240,8 @@ type ScrapeConfig struct {
DisableCompression bool `yaml:"disable_compression,omitempty"` DisableCompression bool `yaml:"disable_compression,omitempty"`
DisableKeepAlive bool `yaml:"disable_keepalive,omitempty"` DisableKeepAlive bool `yaml:"disable_keepalive,omitempty"`
StreamParse bool `yaml:"stream_parse,omitempty"` StreamParse bool `yaml:"stream_parse,omitempty"`
ScrapeAlignInterval promutils.Duration `yaml:"scrape_align_interval,omitempty"` ScrapeAlignInterval *promutils.Duration `yaml:"scrape_align_interval,omitempty"`
ScrapeOffset promutils.Duration `yaml:"scrape_offset,omitempty"` ScrapeOffset *promutils.Duration `yaml:"scrape_offset,omitempty"`
SeriesLimit int `yaml:"series_limit,omitempty"` SeriesLimit int `yaml:"series_limit,omitempty"`
ProxyClientConfig promauth.ProxyClientConfig `yaml:",inline"` ProxyClientConfig promauth.ProxyClientConfig `yaml:",inline"`
@ -271,8 +343,8 @@ func loadConfig(path string) (*Config, []byte, error) {
return &c, dataNew, nil return &c, dataNew, nil
} }
func loadScrapeConfigFiles(baseDir string, scrapeConfigFiles []string) ([]ScrapeConfig, []byte, error) { func loadScrapeConfigFiles(baseDir string, scrapeConfigFiles []string) ([]*ScrapeConfig, []byte, error) {
var scrapeConfigs []ScrapeConfig var scrapeConfigs []*ScrapeConfig
var scsData []byte var scsData []byte
for _, filePath := range scrapeConfigFiles { for _, filePath := range scrapeConfigFiles {
filePath := fs.GetFilepath(baseDir, filePath) filePath := fs.GetFilepath(baseDir, filePath)
@ -291,7 +363,7 @@ func loadScrapeConfigFiles(baseDir string, scrapeConfigFiles []string) ([]Scrape
return nil, nil, fmt.Errorf("cannot load %q: %w", path, err) return nil, nil, fmt.Errorf("cannot load %q: %w", path, err)
} }
data = envtemplate.Replace(data) data = envtemplate.Replace(data)
var scs []ScrapeConfig var scs []*ScrapeConfig
if err = yaml.UnmarshalStrict(data, &scs); err != nil { if err = yaml.UnmarshalStrict(data, &scs); err != nil {
return nil, nil, fmt.Errorf("cannot parse %q: %w", path, err) return nil, nil, fmt.Errorf("cannot parse %q: %w", path, err)
} }
@ -309,7 +381,7 @@ func IsDryRun() bool {
} }
func (cfg *Config) parseData(data []byte, path string) ([]byte, error) { func (cfg *Config) parseData(data []byte, path string) ([]byte, error) {
if err := unmarshalMaybeStrict(data, cfg); err != nil { if err := cfg.unmarshal(data, *strictParse); err != nil {
return nil, fmt.Errorf("cannot unmarshal data: %w", err) return nil, fmt.Errorf("cannot unmarshal data: %w", err)
} }
absPath, err := filepath.Abs(path) absPath, err := filepath.Abs(path)
@ -329,8 +401,8 @@ func (cfg *Config) parseData(data []byte, path string) ([]byte, error) {
// Check that all the scrape configs have unique JobName // Check that all the scrape configs have unique JobName
m := make(map[string]struct{}, len(cfg.ScrapeConfigs)) m := make(map[string]struct{}, len(cfg.ScrapeConfigs))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
jobName := cfg.ScrapeConfigs[i].JobName jobName := sc.JobName
if _, ok := m[jobName]; ok { if _, ok := m[jobName]; ok {
return nil, fmt.Errorf("duplicate `job_name` in `scrape_configs` loaded from %q: %q", path, jobName) return nil, fmt.Errorf("duplicate `job_name` in `scrape_configs` loaded from %q: %q", path, jobName)
} }
@ -338,8 +410,7 @@ func (cfg *Config) parseData(data []byte, path string) ([]byte, error) {
} }
// Initialize cfg.ScrapeConfigs // Initialize cfg.ScrapeConfigs
for i := range cfg.ScrapeConfigs { for i, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
swc, err := getScrapeWorkConfig(sc, cfg.baseDir, &cfg.Global) swc, err := getScrapeWorkConfig(sc, cfg.baseDir, &cfg.Global)
if err != nil { if err != nil {
return nil, fmt.Errorf("cannot parse `scrape_config` #%d: %w", i+1, err) return nil, fmt.Errorf("cannot parse `scrape_config` #%d: %w", i+1, err)
@ -349,19 +420,6 @@ func (cfg *Config) parseData(data []byte, path string) ([]byte, error) {
return dataNew, nil return dataNew, nil
} }
func unmarshalMaybeStrict(data []byte, dst interface{}) error {
data = envtemplate.Replace(data)
var err error
if *strictParse {
if err = yaml.UnmarshalStrict(data, dst); err != nil {
err = fmt.Errorf("%w; pass -promscrape.config.strictParse=false command-line flag for ignoring unknown fields in yaml config", err)
}
} else {
err = yaml.Unmarshal(data, dst)
}
return err
}
func getSWSByJob(sws []*ScrapeWork) map[string][]*ScrapeWork { func getSWSByJob(sws []*ScrapeWork) map[string][]*ScrapeWork {
m := make(map[string][]*ScrapeWork) m := make(map[string][]*ScrapeWork)
for _, sw := range sws { for _, sw := range sws {
@ -374,8 +432,7 @@ func getSWSByJob(sws []*ScrapeWork) map[string][]*ScrapeWork {
func (cfg *Config) getConsulSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getConsulSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.ConsulSDConfigs { for j := range sc.ConsulSDConfigs {
@ -402,8 +459,7 @@ func (cfg *Config) getConsulSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
func (cfg *Config) getDigitalOceanDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getDigitalOceanDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.DigitaloceanSDConfigs { for j := range sc.DigitaloceanSDConfigs {
@ -430,8 +486,7 @@ func (cfg *Config) getDigitalOceanDScrapeWork(prev []*ScrapeWork) []*ScrapeWork
func (cfg *Config) getDNSSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getDNSSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.DNSSDConfigs { for j := range sc.DNSSDConfigs {
@ -458,8 +513,7 @@ func (cfg *Config) getDNSSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
func (cfg *Config) getDockerSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getDockerSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.DockerSDConfigs { for j := range sc.DockerSDConfigs {
@ -486,8 +540,7 @@ func (cfg *Config) getDockerSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
func (cfg *Config) getDockerSwarmSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getDockerSwarmSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.DockerSwarmSDConfigs { for j := range sc.DockerSwarmSDConfigs {
@ -514,8 +567,7 @@ func (cfg *Config) getDockerSwarmSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork
func (cfg *Config) getEC2SDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getEC2SDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.EC2SDConfigs { for j := range sc.EC2SDConfigs {
@ -542,8 +594,7 @@ func (cfg *Config) getEC2SDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
func (cfg *Config) getEurekaSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getEurekaSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.EurekaSDConfigs { for j := range sc.EurekaSDConfigs {
@ -579,8 +630,7 @@ func (cfg *Config) getFileSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
} }
} }
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
for j := range sc.FileSDConfigs { for j := range sc.FileSDConfigs {
sdc := &sc.FileSDConfigs[j] sdc := &sc.FileSDConfigs[j]
dst = sdc.appendScrapeWork(dst, swsMapPrev, cfg.baseDir, sc.swc) dst = sdc.appendScrapeWork(dst, swsMapPrev, cfg.baseDir, sc.swc)
@ -593,8 +643,7 @@ func (cfg *Config) getFileSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
func (cfg *Config) getGCESDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getGCESDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.GCESDConfigs { for j := range sc.GCESDConfigs {
@ -621,8 +670,7 @@ func (cfg *Config) getGCESDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
func (cfg *Config) getHTTPDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getHTTPDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.HTTPSDConfigs { for j := range sc.HTTPSDConfigs {
@ -649,8 +697,7 @@ func (cfg *Config) getHTTPDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
func (cfg *Config) getKubernetesSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getKubernetesSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.KubernetesSDConfigs { for j := range sc.KubernetesSDConfigs {
@ -682,8 +729,7 @@ func (cfg *Config) getKubernetesSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
func (cfg *Config) getOpenStackSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork { func (cfg *Config) getOpenStackSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
dst := make([]*ScrapeWork, 0, len(prev)) dst := make([]*ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
ok := true ok := true
for j := range sc.OpenStackSDConfigs { for j := range sc.OpenStackSDConfigs {
@ -709,8 +755,7 @@ func (cfg *Config) getOpenStackSDScrapeWork(prev []*ScrapeWork) []*ScrapeWork {
// getStaticScrapeWork returns `static_configs` ScrapeWork from cfg. // getStaticScrapeWork returns `static_configs` ScrapeWork from cfg.
func (cfg *Config) getStaticScrapeWork() []*ScrapeWork { func (cfg *Config) getStaticScrapeWork() []*ScrapeWork {
var dst []*ScrapeWork var dst []*ScrapeWork
for i := range cfg.ScrapeConfigs { for _, sc := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i]
for j := range sc.StaticConfigs { for j := range sc.StaticConfigs {
stc := &sc.StaticConfigs[j] stc := &sc.StaticConfigs[j]
dst = stc.appendScrapeWork(dst, sc.swc, nil) dst = stc.appendScrapeWork(dst, sc.swc, nil)
@ -789,7 +834,9 @@ func getScrapeWorkConfig(sc *ScrapeConfig, baseDir string, globalCfg *GlobalConf
} }
swc := &scrapeWorkConfig{ swc := &scrapeWorkConfig{
scrapeInterval: scrapeInterval, scrapeInterval: scrapeInterval,
scrapeIntervalString: scrapeInterval.String(),
scrapeTimeout: scrapeTimeout, scrapeTimeout: scrapeTimeout,
scrapeTimeoutString: scrapeTimeout.String(),
jobName: jobName, jobName: jobName,
metricsPath: metricsPath, metricsPath: metricsPath,
scheme: scheme, scheme: scheme,
@ -816,7 +863,9 @@ func getScrapeWorkConfig(sc *ScrapeConfig, baseDir string, globalCfg *GlobalConf
type scrapeWorkConfig struct { type scrapeWorkConfig struct {
scrapeInterval time.Duration scrapeInterval time.Duration
scrapeIntervalString string
scrapeTimeout time.Duration scrapeTimeout time.Duration
scrapeTimeoutString string
jobName string jobName string
metricsPath string metricsPath string
scheme string scheme string
@ -992,20 +1041,46 @@ func needSkipScrapeWork(key string, membersCount, replicasCount, memberNum int)
return true return true
} }
type labelsContext struct {
labels []prompbmarshal.Label
}
func getLabelsContext() *labelsContext {
v := labelsContextPool.Get()
if v == nil {
return &labelsContext{}
}
return v.(*labelsContext)
}
func putLabelsContext(lctx *labelsContext) {
labels := lctx.labels
for i := range labels {
labels[i].Name = ""
labels[i].Value = ""
}
lctx.labels = lctx.labels[:0]
labelsContextPool.Put(lctx)
}
var labelsContextPool sync.Pool
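`putLabelsContext` zeroes every label's `Name` and `Value` before pooling so the GC can reclaim the underlying strings while the backing array itself is reused. A reduced, self-contained sketch of the pattern (the `label` type stands in for `prompbmarshal.Label`):

```go
package main

import (
	"fmt"
	"sync"
)

type label struct{ Name, Value string }

type labelsContext struct{ labels []label }

var labelsContextPool sync.Pool

func getLabelsContext() *labelsContext {
	if v := labelsContextPool.Get(); v != nil {
		return v.(*labelsContext)
	}
	return &labelsContext{}
}

// putLabelsContext clears string references before pooling so the GC
// can collect the label strings while the backing array is reused.
func putLabelsContext(lctx *labelsContext) {
	for i := range lctx.labels {
		lctx.labels[i] = label{}
	}
	lctx.labels = lctx.labels[:0]
	labelsContextPool.Put(lctx)
}

func main() {
	lctx := getLabelsContext()
	lctx.labels = append(lctx.labels, label{"job", "vm"})
	putLabelsContext(lctx)
	fmt.Println(len(getLabelsContext().labels)) // prints 0: the slice comes back empty
}
```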
var scrapeWorkKeyBufPool bytesutil.ByteBufferPool var scrapeWorkKeyBufPool bytesutil.ByteBufferPool
func (swc *scrapeWorkConfig) getScrapeWork(target string, extraLabels, metaLabels map[string]string) (*ScrapeWork, error) { func (swc *scrapeWorkConfig) getScrapeWork(target string, extraLabels, metaLabels map[string]string) (*ScrapeWork, error) {
labels := mergeLabels(swc, target, extraLabels, metaLabels) lctx := getLabelsContext()
lctx.labels = mergeLabels(lctx.labels[:0], swc, target, extraLabels, metaLabels)
var originalLabels []prompbmarshal.Label var originalLabels []prompbmarshal.Label
if !*dropOriginalLabels { if !*dropOriginalLabels {
originalLabels = append([]prompbmarshal.Label{}, labels...) originalLabels = append([]prompbmarshal.Label{}, lctx.labels...)
} }
labels = swc.relabelConfigs.Apply(labels, 0, false) lctx.labels = swc.relabelConfigs.Apply(lctx.labels, 0, false)
labels = promrelabel.RemoveMetaLabels(labels[:0], labels) lctx.labels = promrelabel.RemoveMetaLabels(lctx.labels[:0], lctx.labels)
// Remove references to already deleted labels, so GC could clean strings for label name and label value past len(labels). // Remove references to already deleted labels, so GC could clean strings for label name and label value past len(labels).
// This should reduce memory usage when relabeling creates big number of temporary labels with long names and/or values. // This should reduce memory usage when relabeling creates big number of temporary labels with long names and/or values.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825 for details. // See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825 for details.
labels = append([]prompbmarshal.Label{}, labels...) labels := append([]prompbmarshal.Label{}, lctx.labels...)
putLabelsContext(lctx)
// Verify whether the scrape work must be skipped because of `-promscrape.cluster.*` configs. // Verify whether the scrape work must be skipped because of `-promscrape.cluster.*` configs.
// Perform the verification on labels after the relabeling in order to guarantee that targets with the same set of labels // Perform the verification on labels after the relabeling in order to guarantee that targets with the same set of labels
@ -1147,26 +1222,31 @@ func internLabelStrings(labels []prompbmarshal.Label) {
} }
func internString(s string) string { func internString(s string) string {
internStringsMapLock.Lock() m := internStringsMap.Load().(*sync.Map)
defer internStringsMapLock.Unlock() if v, ok := m.Load(s); ok {
sp := v.(*string)
if sInterned, ok := internStringsMap[s]; ok { return *sp
return sInterned
} }
// Make a new copy for s in order to remove references from possible bigger string s refers to. // Make a new copy for s in order to remove references from possible bigger string s refers to.
sCopy := string(append([]byte{}, s...)) sCopy := string(append([]byte{}, s...))
internStringsMap[sCopy] = sCopy m.Store(sCopy, &sCopy)
if len(internStringsMap) > 100e3 { n := atomic.AddUint64(&internStringsMapLen, 1)
internStringsMap = make(map[string]string, 100e3) if n > 100e3 {
atomic.StoreUint64(&internStringsMapLen, 0)
internStringsMap.Store(&sync.Map{})
} }
return sCopy return sCopy
} }
var ( var (
internStringsMapLock sync.Mutex internStringsMap atomic.Value
internStringsMap = make(map[string]string, 100e3) internStringsMapLen uint64
) )
func init() {
internStringsMap.Store(&sync.Map{})
}
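The new interning scheme replaces the mutex-guarded map with a lock-free `sync.Map` held in an `atomic.Value`, plus an atomic counter; once the counter passes 100e3 the whole map is dropped and replaced, which bounds memory without per-entry eviction. A reduced sketch with a tiny cap so the reset is easy to exercise:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const maxInterned = 3 // tiny cap for demonstration; the real code uses 100e3

var (
	internMap    atomic.Value // holds *sync.Map
	internMapLen uint64
)

func init() { internMap.Store(&sync.Map{}) }

// intern returns a canonical copy of s so duplicate label strings share
// memory; the map is dropped wholesale once it grows past maxInterned.
func intern(s string) string {
	m := internMap.Load().(*sync.Map)
	if v, ok := m.Load(s); ok {
		return *(v.(*string))
	}
	// Copy s to detach it from any larger backing string it may point into.
	sCopy := string(append([]byte{}, s...))
	m.Store(sCopy, &sCopy)
	if atomic.AddUint64(&internMapLen, 1) > maxInterned {
		atomic.StoreUint64(&internMapLen, 0)
		internMap.Store(&sync.Map{})
	}
	return sCopy
}

func main() {
	fmt.Println(intern("job") == intern("job")) // prints true: second call hits the cache
}
```

Note the counter only approximates the map size under concurrency, which is acceptable here: the cap exists to bound memory, not to be exact.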
func getParamsFromLabels(labels []prompbmarshal.Label, paramsOrig map[string][]string) map[string][]string { func getParamsFromLabels(labels []prompbmarshal.Label, paramsOrig map[string][]string) map[string][]string {
// See https://www.robustperception.io/life-of-a-label // See https://www.robustperception.io/life-of-a-label
m := make(map[string][]string) m := make(map[string][]string)
@ -1185,40 +1265,77 @@ func getParamsFromLabels(labels []prompbmarshal.Label, paramsOrig map[string][]s
return m return m
} }
func mergeLabels(dst []prompbmarshal.Label, swc *scrapeWorkConfig, target string, extraLabels, metaLabels map[string]string) []prompbmarshal.Label {
	if len(dst) > 0 {
		logger.Panicf("BUG: len(dst) must be 0; got %d", len(dst))
	}
	// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
	for k, v := range swc.externalLabels {
		dst = appendLabel(dst, k, v)
	}
	dst = appendLabel(dst, "job", swc.jobName)
	dst = appendLabel(dst, "__address__", target)
	dst = appendLabel(dst, "__scheme__", swc.scheme)
	dst = appendLabel(dst, "__metrics_path__", swc.metricsPath)
	dst = appendLabel(dst, "__scrape_interval__", swc.scrapeIntervalString)
	dst = appendLabel(dst, "__scrape_timeout__", swc.scrapeTimeoutString)
	for k, args := range swc.params {
		if len(args) == 0 {
			continue
		}
		k = "__param_" + k
		v := args[0]
		dst = appendLabel(dst, k, v)
	}
	for k, v := range extraLabels {
		dst = appendLabel(dst, k, v)
	}
	for k, v := range metaLabels {
		dst = appendLabel(dst, k, v)
	}
	if len(dst) < 2 {
		return dst
	}
	// Remove duplicate labels if any.
// Stable sorting is needed in order to preserve the order for labels with identical names.
// This is needed in order to remove labels with duplicate names other than the last one.
promrelabel.SortLabelsStable(dst)
prevName := dst[0].Name
hasDuplicateLabels := false
for _, label := range dst[1:] {
if label.Name == prevName {
hasDuplicateLabels = true
break
}
prevName = label.Name
}
if !hasDuplicateLabels {
return dst
}
prevName = dst[0].Name
tmp := dst[:1]
for _, label := range dst[1:] {
if label.Name == prevName {
tmp[len(tmp)-1] = label
} else {
tmp = append(tmp, label)
prevName = label.Name
}
}
tail := dst[len(tmp):]
for i := range tail {
label := &tail[i]
label.Name = ""
label.Value = ""
}
return tmp
}
func appendLabel(dst []prompbmarshal.Label, name, value string) []prompbmarshal.Label {
return append(dst, prompbmarshal.Label{
Name: name,
Value: value,
})
}
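The duplicate-removal pass above (stable sort, then in-place compaction so the last value for each name wins) can be sketched standalone. `Label` is a stand-in for `prompbmarshal.Label`, and `sort.SliceStable` stands in for `promrelabel.SortLabelsStable`:

```go
package main

import (
	"fmt"
	"sort"
)

type Label struct{ Name, Value string }

// removeDuplicateLabels keeps only the last label written for each name,
// mirroring the stable-sort + compaction above.
func removeDuplicateLabels(labels []Label) []Label {
	if len(labels) < 2 {
		return labels
	}
	// Stable sort preserves the relative order of labels with equal names,
	// so the last-written value for a name stays last within its group.
	sort.SliceStable(labels, func(i, j int) bool { return labels[i].Name < labels[j].Name })
	tmp := labels[:1]
	prevName := labels[0].Name
	for _, label := range labels[1:] {
		if label.Name == prevName {
			tmp[len(tmp)-1] = label // a later duplicate overwrites the earlier one
		} else {
			tmp = append(tmp, label)
			prevName = label.Name
		}
	}
	return tmp
}

func main() {
	labels := []Label{{"job", "a"}, {"env", "prod"}, {"job", "b"}}
	fmt.Println(removeDuplicateLabels(labels))
}
```

Compacting in place reuses the sorted slice's backing array, so no extra allocation is needed on the common no-duplicates path.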
func addMissingPort(scheme, target string) string {

View file

@ -5,6 +5,7 @@ import (
	"fmt"
	"reflect"
	"strconv"
"strings"
	"testing"
	"time"
@ -13,6 +14,142 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/proxy"
)
func TestInternStringSerial(t *testing.T) {
if err := testInternString(t); err != nil {
t.Fatalf("unexpected error: %s", err)
}
}
func TestInternStringConcurrent(t *testing.T) {
concurrency := 5
resultCh := make(chan error, concurrency)
for i := 0; i < concurrency; i++ {
go func() {
resultCh <- testInternString(t)
}()
}
timer := time.NewTimer(5 * time.Second)
for i := 0; i < concurrency; i++ {
select {
case err := <-resultCh:
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
case <-timer.C:
t.Fatalf("timeout")
}
}
}
func testInternString(t *testing.T) error {
for i := 0; i < 1000; i++ {
s := fmt.Sprintf("foo_%d", i)
s1 := internString(s)
if s != s1 {
return fmt.Errorf("unexpected string returned from internString; got %q; want %q", s1, s)
}
}
return nil
}
func TestMergeLabels(t *testing.T) {
f := func(swc *scrapeWorkConfig, target string, extraLabels, metaLabels map[string]string, resultExpected string) {
t.Helper()
var labels []prompbmarshal.Label
labels = mergeLabels(labels[:0], swc, target, extraLabels, metaLabels)
result := promLabelsString(labels)
if result != resultExpected {
t.Fatalf("unexpected result;\ngot\n%s\nwant\n%s", result, resultExpected)
}
}
f(&scrapeWorkConfig{}, "foo", nil, nil, `{__address__="foo",__metrics_path__="",__scheme__="",__scrape_interval__="",__scrape_timeout__="",job=""}`)
f(&scrapeWorkConfig{}, "foo", map[string]string{"foo": "bar"}, nil, `{__address__="foo",__metrics_path__="",__scheme__="",__scrape_interval__="",__scrape_timeout__="",foo="bar",job=""}`)
f(&scrapeWorkConfig{}, "foo", map[string]string{"job": "bar"}, nil, `{__address__="foo",__metrics_path__="",__scheme__="",__scrape_interval__="",__scrape_timeout__="",job="bar"}`)
f(&scrapeWorkConfig{
jobName: "xyz",
scheme: "https",
metricsPath: "/foo/bar",
scrapeIntervalString: "15s",
scrapeTimeoutString: "10s",
externalLabels: map[string]string{
"job": "bar",
"a": "b",
},
}, "foo", nil, nil, `{__address__="foo",__metrics_path__="/foo/bar",__scheme__="https",__scrape_interval__="15s",__scrape_timeout__="10s",a="b",job="xyz"}`)
f(&scrapeWorkConfig{
jobName: "xyz",
scheme: "https",
metricsPath: "/foo/bar",
externalLabels: map[string]string{
"job": "bar",
"a": "b",
},
}, "foo", map[string]string{
"job": "extra_job",
"foo": "extra_foo",
"a": "xyz",
}, map[string]string{
"__meta_x": "y",
}, `{__address__="foo",__meta_x="y",__metrics_path__="/foo/bar",__scheme__="https",__scrape_interval__="",__scrape_timeout__="",a="xyz",foo="extra_foo",job="extra_job"}`)
}
func TestScrapeConfigUnmarshalMarshal(t *testing.T) {
f := func(data string) {
t.Helper()
var cfg Config
data = strings.TrimSpace(data)
if err := cfg.unmarshal([]byte(data), true); err != nil {
t.Fatalf("parse error: %s\ndata:\n%s", err, data)
}
resultData := string(cfg.marshal())
result := strings.TrimSpace(resultData)
if result != data {
t.Fatalf("unexpected marshaled config:\ngot\n%s\nwant\n%s", result, data)
}
}
f(`
global:
scrape_interval: 10s
`)
f(`
scrape_config_files:
- foo
- bar
`)
f(`
scrape_configs:
- job_name: foo
scrape_timeout: 1.5s
static_configs:
- targets:
- foo
- bar
labels:
foo: bar
`)
f(`
scrape_configs:
- job_name: foo
honor_labels: true
honor_timestamps: false
scheme: https
params:
foo:
- x
authorization:
type: foobar
relabel_configs:
- source_labels: [abc]
static_configs:
- targets:
- foo
relabel_debug: true
scrape_align_interval: 1h30m0s
proxy_bearer_token_file: file.txt
`)
}
func TestNeedSkipScrapeWork(t *testing.T) {
	f := func(key string, membersCount, replicationFactor, memberNum int, needSkipExpected bool) {
		t.Helper()

View file

@ -0,0 +1,25 @@
package promscrape
import (
"fmt"
"testing"
)
func BenchmarkInternString(b *testing.B) {
a := make([]string, 10000)
for i := range a {
a[i] = fmt.Sprintf("string_%d", i)
}
b.ReportAllocs()
b.SetBytes(int64(len(a)))
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
for _, s := range a {
sResult := internString(s)
if sResult != s {
panic(fmt.Sprintf("unexpected string obtained; got %q; want %q", sResult, s))
}
}
}
})
}

View file

@ -16,12 +16,13 @@ import (
	"sync/atomic"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
	"github.com/VictoriaMetrics/metrics"
)

var apiServerTimeout = flag.Duration("promscrape.kubernetes.apiServerTimeout", 30*time.Minute, "How frequently to reload the full state from Kubernetes API server")
// WatchEvent is a watch event returned from API server endpoints if `watch=1` query arg is set.
//
@ -45,7 +46,7 @@ type parseObjectFunc func(data []byte) (object, error)
// parseObjectListFunc must parse objectList from the given r.
type parseObjectListFunc func(r io.Reader) (map[string]object, ListMeta, error)

// apiWatcher is used for watching for Kubernetes object changes and caching their latest states.
type apiWatcher struct {
	role string
@ -107,7 +108,7 @@ func (aw *apiWatcher) reloadScrapeWorks(uw *urlWatcher, swosByKey map[string][]i
}

func (aw *apiWatcher) setScrapeWorks(uw *urlWatcher, key string, labels []map[string]string) {
	swos := getScrapeWorkObjectsForLabels(aw.swcFunc, labels)
	aw.swosByURLWatcherLock.Lock()
	swosByKey := aw.swosByURLWatcher[uw]
	if swosByKey == nil {
@ -133,10 +134,11 @@ func (aw *apiWatcher) removeScrapeWorks(uw *urlWatcher, key string) {
	aw.swosByURLWatcherLock.Unlock()
}

func getScrapeWorkObjectsForLabels(swcFunc ScrapeWorkConstructorFunc, labelss []map[string]string) []interface{} {
	// Do not pre-allocate swos, since it is likely the swos will be empty because of relabeling
	var swos []interface{}
	for _, labels := range labelss {
		swo := swcFunc(labels)
		// The reflect check is needed because of https://mangatmodi.medium.com/go-check-nil-interface-the-right-way-d142776edef1
		if swo != nil && !reflect.ValueOf(swo).IsNil() {
			swos = append(swos, swo)
@ -149,21 +151,14 @@ func (aw *apiWatcher) getScrapeWorkObjectsForLabels(labelss []map[string]string)
func (aw *apiWatcher) getScrapeWorkObjects() []interface{} {
	aw.gw.registerPendingAPIWatchers()

	aw.swosByURLWatcherLock.Lock()
	defer aw.swosByURLWatcherLock.Unlock()

	size := 0
	for _, swosByKey := range aw.swosByURLWatcher {
		for _, swosLocal := range swosByKey {
			size += len(swosLocal)
		}
	}
	swos := make([]interface{}, 0, size)
	for _, swosByKey := range aw.swosByURLWatcher {
		for _, swosLocal := range swosByKey {
			swos = append(swos, swosLocal...)
		}
	}
	return swos
}
@ -295,18 +290,6 @@ func (gw *groupWatcher) startWatchersForRole(role string, aw *apiWatcher) {
	if needStart {
		uw.reloadObjects()
		go uw.watchForUpdates()
if role == "endpoints" || role == "endpointslice" {
// Refresh endpoints and endpointslices targets in background, since they depend on other object types such as pod and service.
// This should fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1240 .
go func() {
for {
time.Sleep(5 * time.Second)
gw.mu.Lock()
uw.reloadScrapeWorksForAPIWatchersLocked(uw.aws)
gw.mu.Unlock()
}
}()
}
		}
	}
}
@ -377,7 +360,7 @@ type urlWatcher struct {
	// Batch registering saves CPU time needed for registering big number of Kubernetes objects
	// shared among big number of scrape jobs, since per-object labels are generated only once
	// for all the scrape jobs (each scrape job is associated with a single apiWatcher).
	// See registerPendingAPIWatchersLocked for details.
	awsPending map[*apiWatcher]struct{}
	// aws contains registered apiWatcher objects
@ -433,15 +416,45 @@ func (uw *urlWatcher) registerPendingAPIWatchersLocked() {
	if len(uw.awsPending) == 0 {
		return
	}
	aws := make([]*apiWatcher, 0, len(uw.awsPending))
	for aw := range uw.awsPending {
		uw.aws[aw] = struct{}{}
		aws = append(aws, aw)
	}
swosByKey := make([]map[string][]interface{}, len(aws))
for i := range aws {
swosByKey[i] = make(map[string][]interface{})
}
// Generate ScrapeWork objects in parallel on available CPU cores.
// This should reduce the time needed for their generation on systems with many CPU cores.
var swosByKeyLock sync.Mutex
var wg sync.WaitGroup
limiterCh := make(chan struct{}, cgroup.AvailableCPUs())
for key, o := range uw.objectsByKey {
labels := o.getTargetLabels(uw.gw)
wg.Add(1)
limiterCh <- struct{}{}
go func(key string, labels []map[string]string) {
for i, aw := range aws {
swos := getScrapeWorkObjectsForLabels(aw.swcFunc, labels)
if len(swos) > 0 {
swosByKeyLock.Lock()
swosByKey[i][key] = swos
swosByKeyLock.Unlock()
}
}
wg.Done()
<-limiterCh
}(key, labels)
}
wg.Wait()
for i, aw := range aws {
aw.reloadScrapeWorks(uw, swosByKey[i])
	}
	uw.awsPending = make(map[*apiWatcher]struct{})
	metrics.GetOrCreateCounter(fmt.Sprintf(`vm_promscrape_discovery_kubernetes_subscribers{role=%q,status="working"}`, uw.role)).Add(len(aws))
	metrics.GetOrCreateCounter(fmt.Sprintf(`vm_promscrape_discovery_kubernetes_subscribers{role=%q,status="pending"}`, uw.role)).Add(-len(aws))
}
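The parallel generation above uses a buffered channel as a semaphore sized to the available CPUs, so at most that many goroutines build scrape works at once. A minimal sketch of the pattern, with `runtime.NumCPU()` standing in for `cgroup.AvailableCPUs()` and a trivial workload:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// processAll runs fn over each item with at most NumCPU goroutines in flight.
func processAll(items []int, fn func(int) int) []int {
	results := make([]int, len(items))
	var wg sync.WaitGroup
	limiterCh := make(chan struct{}, runtime.NumCPU()) // concurrency limiter
	for i, item := range items {
		wg.Add(1)
		limiterCh <- struct{}{} // acquire a slot; blocks while all slots are busy
		go func(i, item int) {
			defer func() {
				wg.Done()
				<-limiterCh // release the slot for the next goroutine
			}()
			results[i] = fn(item) // each goroutine writes a distinct index, so no lock is needed here
		}(i, item)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(processAll([]int{1, 2, 3}, func(x int) int { return x * x }))
}
```

Acquiring the slot before spawning the goroutine keeps the number of live goroutines bounded, rather than merely the number of running ones.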
func (uw *urlWatcher) unsubscribeAPIWatcherLocked(aw *apiWatcher) {
@ -484,19 +497,21 @@ func (uw *urlWatcher) reloadObjects() string {
	uw.gw.mu.Lock()
	var updated, removed, added int
	for key := range uw.objectsByKey {
		o, ok := objectsByKey[key]
		if ok {
			uw.updateObjectLocked(key, o)
			updated++
		} else {
			uw.removeObjectLocked(key)
			removed++
		}
	}
	for key, o := range objectsByKey {
		if _, ok := uw.objectsByKey[key]; !ok {
			uw.updateObjectLocked(key, o)
			added++
		}
	}
	uw.gw.mu.Unlock()

	uw.objectsUpdated.Add(updated)
@ -510,32 +525,6 @@ func (uw *urlWatcher) reloadObjects() string {
	return uw.resourceVersion
}
func (uw *urlWatcher) reloadScrapeWorksForAPIWatchersLocked(awsMap map[*apiWatcher]struct{}) {
if len(awsMap) == 0 {
return
}
aws := make([]*apiWatcher, 0, len(awsMap))
for aw := range awsMap {
aws = append(aws, aw)
}
swosByKey := make([]map[string][]interface{}, len(aws))
for i := range aws {
swosByKey[i] = make(map[string][]interface{})
}
for key, o := range uw.objectsByKey {
labels := o.getTargetLabels(uw.gw)
for i, aw := range aws {
swos := aw.getScrapeWorkObjectsForLabels(labels)
if len(swos) > 0 {
swosByKey[i][key] = swos
}
}
}
for i, aw := range aws {
aw.reloadScrapeWorks(uw, swosByKey[i])
}
}
// watchForUpdates watches for object updates starting from uw.resourceVersion and updates the corresponding objects to the latest state.
//
// See https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes
@ -597,7 +586,7 @@ func (uw *urlWatcher) watchForUpdates() {
	}
}

// readObjectUpdateStream reads Kubernetes watch events from r and updates locally cached objects according to the received events.
func (uw *urlWatcher) readObjectUpdateStream(r io.Reader) error {
	d := json.NewDecoder(r)
	var we WatchEvent
@ -614,20 +603,12 @@ func (uw *urlWatcher) readObjectUpdateStream(r io.Reader) error {
			key := o.key()
			uw.gw.mu.Lock()
			if _, ok := uw.objectsByKey[key]; !ok {
				// if we.Type == "MODIFIED" is expected condition after recovering from the bookmarked resourceVersion.
				uw.objectsCount.Inc()
				uw.objectsAdded.Inc()
			} else {
				// if we.Type == "ADDED" is expected condition after recovering from the bookmarked resourceVersion.
				uw.objectsUpdated.Inc()
			}
			uw.updateObjectLocked(key, o)
			uw.gw.mu.Unlock()
		case "DELETED":
			o, err := uw.parseObject(we.Object)
@ -639,11 +620,8 @@ func (uw *urlWatcher) readObjectUpdateStream(r io.Reader) error {
			if _, ok := uw.objectsByKey[key]; ok {
				uw.objectsCount.Dec()
				uw.objectsRemoved.Inc()
			}
			uw.removeObjectLocked(key)
			uw.gw.mu.Unlock()
		case "BOOKMARK":
			// See https://kubernetes.io/docs/reference/using-api/api-concepts/#watch-bookmarks
@ -670,6 +648,29 @@ func (uw *urlWatcher) readObjectUpdateStream(r io.Reader) error {
	}
}
func (uw *urlWatcher) updateObjectLocked(key string, o object) {
oPrev, ok := uw.objectsByKey[key]
if ok && reflect.DeepEqual(oPrev, o) {
// Nothing to do, since the new object is equal to the previous one.
return
}
uw.objectsByKey[key] = o
if len(uw.aws) == 0 {
return
}
labels := o.getTargetLabels(uw.gw)
for aw := range uw.aws {
aw.setScrapeWorks(uw, key, labels)
}
}
func (uw *urlWatcher) removeObjectLocked(key string) {
delete(uw.objectsByKey, key)
for aw := range uw.aws {
aw.removeScrapeWorks(uw, key)
}
}
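The key idea in `updateObjectLocked` above is to skip the expensive label regeneration when a watch event carries an object identical to the cached one, detected via `reflect.DeepEqual`. A simplified sketch of that change-detection pattern (the map and callback are stand-ins for the watcher state):

```go
package main

import (
	"fmt"
	"reflect"
)

// updateObject stores o under key only when it actually changed,
// and fires onChange only for real changes.
func updateObject(objects map[string]map[string]string, key string, o map[string]string, onChange func(string)) {
	if prev, ok := objects[key]; ok && reflect.DeepEqual(prev, o) {
		// Nothing to do: the new object equals the previous one,
		// so the expensive downstream work can be skipped.
		return
	}
	objects[key] = o
	onChange(key)
}

func main() {
	objects := map[string]map[string]string{}
	changes := 0
	onChange := func(string) { changes++ }
	updateObject(objects, "pod/a", map[string]string{"phase": "Running"}, onChange)
	updateObject(objects, "pod/a", map[string]string{"phase": "Running"}, onChange) // duplicate event: no-op
	fmt.Println(changes)
}
```

This matters for Kubernetes watches, where reconnects can replay events for objects that did not actually change.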
// Bookmark is a bookmark message from Kubernetes Watch API.
// See https://kubernetes.io/docs/reference/using-api/api-concepts/#watch-bookmarks
type Bookmark struct {

View file

@ -145,8 +145,7 @@ func runScraper(configFile string, pushData func(wr *prompbmarshal.WriteRequest)
			logger.Infof("nothing changed in %q", configFile)
			goto waitForChans
		}
		cfgNew.mustRestart(cfg)
		cfg = cfgNew
		data = dataNew
		marshaledData = cfgNew.marshal()
@ -161,8 +160,7 @@ func runScraper(configFile string, pushData func(wr *prompbmarshal.WriteRequest)
			// Nothing changed since the previous loadConfig
			goto waitForChans
		}
		cfgNew.mustRestart(cfg)
		cfg = cfgNew
		data = dataNew
		configData.Store(&marshaledData)

View file

@ -5,6 +5,7 @@ import (
	"fmt"
	"io"
	"net/http"
"regexp"
	"sort"
	"strconv"
	"strings"
@ -15,6 +16,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
	xxhash "github.com/cespare/xxhash/v2"
)
var maxDroppedTargets = flag.Int("promscrape.maxDroppedTargets", 1000, "The maximum number of droppedTargets to show at /api/v1/targets page. "+
@ -45,12 +47,14 @@ func WriteTargetResponse(w http.ResponseWriter, r *http.Request) error {
func WriteHumanReadableTargetsStatus(w http.ResponseWriter, r *http.Request) {
	showOriginalLabels, _ := strconv.ParseBool(r.FormValue("show_original_labels"))
	showOnlyUnhealthy, _ := strconv.ParseBool(r.FormValue("show_only_unhealthy"))
	endpointSearch := strings.TrimSpace(r.FormValue("endpoint_search"))
	labelSearch := strings.TrimSpace(r.FormValue("label_search"))
	if accept := r.Header.Get("Accept"); strings.Contains(accept, "text/html") {
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		tsmGlobal.WriteTargetsHTML(w, showOnlyUnhealthy, endpointSearch, labelSearch)
	} else {
		w.Header().Set("Content-Type", "text/plain; charset=utf-8")
		tsmGlobal.WriteTargetsPlain(w, showOriginalLabels, showOnlyUnhealthy, endpointSearch, labelSearch)
	}
}
@ -242,7 +246,7 @@ func (st *targetStatus) getDurationFromLastScrape() time.Duration {
type droppedTargets struct {
	mu              sync.Mutex
	m               map[uint64]droppedTarget
	lastCleanupTime uint64
}
@ -252,7 +256,8 @@ type droppedTarget struct {
}
func (dt *droppedTargets) Register(originalLabels []prompbmarshal.Label) {
	// It is better to have hash collisions instead of spending additional CPU on promLabelsString() call.
	key := labelsHash(originalLabels)
	currentTime := fasttime.UnixTimestamp()
	dt.mu.Lock()
	if k, ok := dt.m[key]; ok {
@ -275,6 +280,24 @@ func (dt *droppedTargets) Register(originalLabels []prompbmarshal.Label) {
	dt.mu.Unlock()
}
func labelsHash(labels []prompbmarshal.Label) uint64 {
d := xxhashPool.Get().(*xxhash.Digest)
for _, label := range labels {
_, _ = d.WriteString(label.Name)
_, _ = d.WriteString(label.Value)
}
h := d.Sum64()
d.Reset()
xxhashPool.Put(d)
return h
}
var xxhashPool = &sync.Pool{
New: func() interface{} {
return xxhash.New()
},
}
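The `labelsHash`/`xxhashPool` pair above trades exact string keys for fast 64-bit hashes and recycles digest objects through a `sync.Pool` to avoid per-call allocations. A self-contained sketch of the same pattern, using the standard library's `hash/fnv` in place of the third-party xxhash (an assumption made purely to keep the example dependency-free):

```go
package main

import (
	"fmt"
	"hash"
	"hash/fnv"
	"sync"
)

type Label struct{ Name, Value string }

// hasherPool recycles digest objects so hot paths avoid allocating
// a new hasher on every call.
var hasherPool = &sync.Pool{
	New: func() interface{} { return fnv.New64a() },
}

// labelsHash folds all label names and values into a single uint64 key.
// Rare collisions are acceptable here in exchange for skipping a full
// string serialization of the label set.
func labelsHash(labels []Label) uint64 {
	d := hasherPool.Get().(hash.Hash64)
	d.Reset()
	for _, label := range labels {
		_, _ = d.Write([]byte(label.Name))
		_, _ = d.Write([]byte(label.Value))
	}
	h := d.Sum64()
	hasherPool.Put(d)
	return h
}

func main() {
	a := labelsHash([]Label{{"job", "node"}})
	b := labelsHash([]Label{{"job", "node"}})
	fmt.Println(a == b) // identical label sets hash identically
}
```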
// WriteDroppedTargetsJSON writes `droppedTargets` contents to w according to https://prometheus.io/docs/prometheus/latest/querying/api/#targets
func (dt *droppedTargets) WriteDroppedTargetsJSON(w io.Writer) {
	dt.mu.Lock()
@ -308,7 +331,7 @@ func (dt *droppedTargets) WriteDroppedTargetsJSON(w io.Writer) {
}

var droppedTargetsMap = &droppedTargets{
	m: make(map[uint64]droppedTarget),
}
type jobTargetsStatuses struct { type jobTargetsStatuses struct {
@ -318,7 +341,7 @@ type jobTargetsStatuses struct {
	targetsStatus []targetStatus
}

func (tsm *targetStatusMap) getTargetsStatusByJob(endpointSearch, labelSearch string) ([]jobTargetsStatuses, []string, error) {
	byJob := make(map[string][]targetStatus)
	tsm.mu.Lock()
	for _, st := range tsm.m {
@ -352,7 +375,78 @@ func (tsm *targetStatusMap) getTargetsStatusByJob() ([]jobTargetsStatuses, []str
		return jts[i].job < jts[j].job
	})
	emptyJobs := getEmptyJobs(jts, jobNames)
	var err error
jts, err = filterTargets(jts, endpointSearch, labelSearch)
if len(endpointSearch) > 0 || len(labelSearch) > 0 {
// Do not show empty jobs if target filters are set.
emptyJobs = nil
}
return jts, emptyJobs, err
}
func filterTargetsByEndpoint(jts []jobTargetsStatuses, searchQuery string) ([]jobTargetsStatuses, error) {
if searchQuery == "" {
return jts, nil
}
finder, err := regexp.Compile(searchQuery)
if err != nil {
return nil, fmt.Errorf("cannot parse %s: %w", searchQuery, err)
}
var jtsFiltered []jobTargetsStatuses
for _, job := range jts {
var tss []targetStatus
for _, ts := range job.targetsStatus {
if finder.MatchString(ts.sw.Config.ScrapeURL) {
tss = append(tss, ts)
}
}
if len(tss) == 0 {
// Skip jobs with zero targets after filtering, so users could see only the requested targets
continue
}
job.targetsStatus = tss
jtsFiltered = append(jtsFiltered, job)
}
return jtsFiltered, nil
}
func filterTargetsByLabels(jts []jobTargetsStatuses, searchQuery string) ([]jobTargetsStatuses, error) {
if searchQuery == "" {
return jts, nil
}
var ie promrelabel.IfExpression
if err := ie.Parse(searchQuery); err != nil {
return nil, fmt.Errorf("cannot parse %s: %w", searchQuery, err)
}
var jtsFiltered []jobTargetsStatuses
for _, job := range jts {
var tss []targetStatus
for _, ts := range job.targetsStatus {
if ie.Match(ts.sw.Config.Labels) {
tss = append(tss, ts)
}
}
if len(tss) == 0 {
// Skip jobs with zero targets after filtering, so users could see only the requested targets
continue
}
job.targetsStatus = tss
jtsFiltered = append(jtsFiltered, job)
}
return jtsFiltered, nil
}
func filterTargets(jts []jobTargetsStatuses, endpointQuery, labelQuery string) ([]jobTargetsStatuses, error) {
var err error
jts, err = filterTargetsByEndpoint(jts, endpointQuery)
if err != nil {
return nil, err
}
jts, err = filterTargetsByLabels(jts, labelQuery)
if err != nil {
return nil, err
}
return jts, nil
}
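`filterTargetsByEndpoint` above compiles the user-supplied query with `regexp.Compile` and drops jobs whose target list becomes empty. The same shape can be shown with plain strings standing in for `jobTargetsStatuses`:

```go
package main

import (
	"fmt"
	"regexp"
)

// filterByEndpoint keeps only the URLs matching the user-supplied regexp.
func filterByEndpoint(urls []string, searchQuery string) ([]string, error) {
	if searchQuery == "" {
		return urls, nil // an empty query means no filtering
	}
	finder, err := regexp.Compile(searchQuery)
	if err != nil {
		return nil, fmt.Errorf("cannot parse %s: %w", searchQuery, err)
	}
	var filtered []string
	for _, u := range urls {
		if finder.MatchString(u) {
			filtered = append(filtered, u)
		}
	}
	return filtered, nil
}

func main() {
	urls := []string{"http://node:9100/metrics", "http://app:8080/metrics"}
	got, err := filterByEndpoint(urls, "node")
	fmt.Println(got, err)
}
```

Returning the parse error instead of panicking lets the HTTP handler surface a bad regexp back to the user, which is exactly what the template's `err` parameter is for.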
func getEmptyJobs(jts []jobTargetsStatuses, jobNames []string) []string {
@ -373,14 +467,14 @@ func getEmptyJobs(jts []jobTargetsStatuses, jobNames []string) []string {
// WriteTargetsHTML writes targets status grouped by job into writer w in html table,
// accepts filter to show only unhealthy targets.
func (tsm *targetStatusMap) WriteTargetsHTML(w io.Writer, showOnlyUnhealthy bool, endpointSearch, labelSearch string) {
	jss, emptyJobs, err := tsm.getTargetsStatusByJob(endpointSearch, labelSearch)
	WriteTargetsResponseHTML(w, jss, emptyJobs, showOnlyUnhealthy, endpointSearch, labelSearch, err)
}

// WriteTargetsPlain writes targets grouped by job into writer w in plain text,
// accept filter to show original labels.
func (tsm *targetStatusMap) WriteTargetsPlain(w io.Writer, showOriginalLabels, showOnlyUnhealthy bool, endpointSearch, labelSearch string) {
	jss, emptyJobs, err := tsm.getTargetsStatusByJob(endpointSearch, labelSearch)
	WriteTargetsResponsePlain(w, jss, emptyJobs, showOriginalLabels, showOnlyUnhealthy, err)
}

View file

@ -1,4 +1,5 @@
{% import (
	"net/url"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
@ -6,25 +7,32 @@
{% stripspace %}

{% func TargetsResponsePlain(jts []jobTargetsStatuses, emptyJobs []string, showOriginLabels, showOnlyUnhealthy bool, err error) %}
{% if err != nil %}
	{%s= err.Error() %}
	{% return %}
{% endif %}
{% for _, js := range jts %}
	{% if showOnlyUnhealthy && js.upCount == js.targetsTotal %}{% continue %}{% endif %}
	job={%q= js.job %} ({%d js.upCount %}/{%d js.targetsTotal %}{% space %}up)
	{% newline %}
	{% for _, ts := range js.targetsStatus %}
		{% if showOnlyUnhealthy && ts.up %}{% continue %}{% endif %}
		{%s= "\t" %}
		state={% if ts.up %}up{% else %}down{% endif %},{% space %}
		endpoint={%s= ts.sw.Config.ScrapeURL %},{% space %}
		labels={%s= promLabelsString(promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels)) %},{% space %}
		{% if showOriginLabels %}originalLabels={%s= promLabelsString(ts.sw.Config.OriginalLabels) %},{% space %}{% endif %}
		scrapes_total={%d ts.scrapesTotal %},{% space %}
		scrapes_failed={%d ts.scrapesFailed %},{% space %}
		last_scrape={%f.3 ts.getDurationFromLastScrape().Seconds() %}s ago,{% space %}
		scrape_duration={%d int(ts.scrapeDuration) %}ms,{% space %}
		samples_scraped={%d ts.samplesScraped %},{% space %}
		error={% if ts.err != nil %}{%s= ts.err.Error() %}{% endif %}
		{% newline %}
	{% endfor %}
{% endfor %}
{% for _, jobName := range emptyJobs %}
@ -34,7 +42,7 @@ job={%q= jobName %} (0/0 up)
{% endfunc %}

{% func TargetsResponseHTML(jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, endpointSearch, labelSearch string, err error) %}
<!DOCTYPE html>
<html lang="en">
<head>
@ -63,109 +71,198 @@ function expand_all() {
}
</script>
</head>
<body>
<div class="navbar navbar-dark bg-dark box-shadow">
<div class="d-flex justify-content-between">
<a href="#" class="navbar-brand d-flex align-items-center ms-3" title="The High Performance Open Source Time Series Database &amp; Monitoring Solution ">
<svg xmlns="http://www.w3.org/2000/svg" id="VM_logo" viewBox="0 0 464.61 533.89" width="20" height="20" class="me-1"><defs><style>.cls-1{fill:#fff;}</style></defs><path class="cls-1" d="M459.86,467.77c9,7.67,24.12,13.49,39.3,13.69v0h1.68v0c15.18-.2,30.31-6,39.3-13.69,47.43-40.45,184.65-166.24,184.65-166.24,36.84-34.27-65.64-68.28-223.95-68.47h-1.68c-158.31.19-260.79,34.2-224,68.47C275.21,301.53,412.43,427.32,459.86,467.77Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,535.88c-9,7.67-24.12,13.5-39.3,13.7h-1.6c-15.18-.2-30.31-6-39.3-13.7-32.81-28-148.56-132.93-192.16-172.7v60.74c0,6.67,2.55,15.52,7.09,19.68,29.64,27.18,143.94,131.8,185.07,166.88,9,7.67,24.12,13.49,39.3,13.69v0h1.6v0c15.18-.2,30.31-6,39.3-13.69,41.13-35.08,155.43-139.7,185.07-166.88,4.54-4.16,7.09-13,7.09-19.68V363.18C688.66,403,572.91,507.9,540.1,535.88Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,678.64c-9,7.67-24.12,13.49-39.3,13.69v0h-1.6v0c-15.18-.2-30.31-6-39.3-13.69-32.81-28-148.56-132.94-192.16-172.7v60.73c0,6.67,2.55,15.53,7.09,19.69,29.64,27.17,143.94,131.8,185.07,166.87,9,7.67,24.12,13.5,39.3,13.7h1.6c15.18-.2,30.31-6,39.3-13.7,41.13-35.07,155.43-139.7,185.07-166.87,4.54-4.16,7.09-13,7.09-19.69V505.94C688.66,545.7,572.91,650.66,540.1,678.64Z" transform="translate(-267.7 -233.05)"/></svg>
<strong>VictoriaMetrics</strong>
</a>
</div>
</button>
<button type="button" class="btn btn-primary" onclick="collapse_all()">
Collapse all
</button>
<button type="button" class="btn btn-secondary" onclick="expand_all()">
Expand all
</button>
</div>
{% for i, js := range jts %}
{% if onlyUnhealthy && js.upCount == js.targetsTotal %}{% continue %}{% endif %}
<div>
<h4>
{%s js.job %}{% space %}({%d js.upCount %}/{%d js.targetsTotal %}{% space %}up)
<button type="button" class="btn btn-primary" onclick="document.getElementById('table-{%d i %}').style.display='none'">collapse</button>
<button type="button" class="btn btn-secondary" onclick="document.getElementById('table-{%d i %}').style.display='block'">expand</button>
</h4>
<div id="table-{%d i %}">
<table class="table table-striped table-hover table-bordered table-sm">
<thead>
<tr>
<th scope="col">Endpoint</th>
<th scope="col">State</th>
<th scope="col" title="scrape target labels">Labels</th>
<th scope="col" title="total scrapes">Scrapes</th>
<th scope="col" title="total scrape errors">Errors</th>
<th scope="col" title="the time of the last scrape">Last Scrape</th>
<th scope="col" title="the duration of the last scrape">Duration</th>
<th scope="col" title="the number of metrics scraped during the last scrape">Samples</th>
<th scope="col" title="error from the last scrape (if any)">Last error</th>
</tr>
</thead>
<tbody>
{% for _, ts := range js.targetsStatus %}
{% code
endpoint := ts.sw.Config.ScrapeURL
targetID := getTargetID(ts.sw)
lastScrapeTime := ts.getDurationFromLastScrape()
%}
{% if onlyUnhealthy && ts.up %}{% continue %}{% endif %}
<tr {% if !ts.up %}{%space%}class="alert alert-danger" role="alert"{% endif %}>
<td><a href="{%s endpoint %}" target="_blank">{%s endpoint %}</a> (
<a href="target_response?id={%s targetID %}" target="_blank" title="click to fetch target response on behalf of the scraper">response</a>
)</td>
<td>{% if ts.up %}UP{% else %}DOWN{% endif %}</td>
<td>
<div title="click to show original labels" onclick="document.getElementById('original_labels_{%s targetID %}').style.display='block'">
{%= formatLabel(promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels)) %}
</div>
<div style="display:none" id="original_labels_{%s targetID %}">
{%= formatLabel(ts.sw.Config.OriginalLabels) %}
</div>
</td>
<td>{%d ts.scrapesTotal %}</td>
<td>{%d ts.scrapesFailed %}</td>
<td>
{% if lastScrapeTime < 365*24*time.Hour %}
{%f.3 lastScrapeTime.Seconds() %}s ago
{% else %}
none
                {% endif %}
              </td>
<td>{%d int(ts.scrapeDuration) %}ms</td>
<td>{%d ts.samplesScraped %}</td>
<td>{% if ts.err != nil %}{%s ts.err.Error() %}{% endif %}</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
</div> </div>
{% endfor %} <div class="container-fluid">
{% if err != nil %}
{%= errorNotification(err) %}
{% endif %}
<div class="row">
<main class="col-12">
<h1>Scrape targets</h1>
<hr />
<div class="row g-3 align-items-center mb-3">
<div class="col-auto">
<button type="button" class="btn{% space %}{% if !showOnlyUnhealthy %}btn-secondary{% else %}btn-success{% endif %}" onclick="location.href='?{%= queryArgs(map[string]string{
"show_only_unhealthy": "false",
"endpoint_search": endpointSearch,
"label_search": labelSearch,
}) %}'">
All
</button>
</div>
<div class="col-auto">
<button type="button" class="btn{% space %}{% if showOnlyUnhealthy %}btn-secondary{% else %}btn-danger{% endif %}" onclick="location.href='?{%= queryArgs(map[string]string{
"show_only_unhealthy": "true",
"endpoint_search": endpointSearch,
"label_search": labelSearch,
}) %}'">
Unhealthy
</button>
</div>
<div class="col-auto">
<button type="button" class="btn btn-primary" onclick="collapse_all()">
Collapse all
</button>
</div>
<div class="col-auto">
<button type="button" class="btn btn-secondary" onclick="expand_all()">
Expand all
</button>
</div>
<div class="col-auto">
{% if endpointSearch == "" && labelSearch == "" %}
<button type="button" class="btn btn-success" onclick="document.getElementById('filters').style.display='block'">
Filter targets
</button>
{% else %}
<button type="button" class="btn btn-danger" onclick="location.href='?'">
Clear target filters
</button>
{% endif %}
</div>
</div>
<div id="filters" {% if endpointSearch == "" && labelSearch == "" %}style="display:none"{% endif %}>
<form class="form-horizontal">
<div class="form-group mb-3">
<label for="endpoint_search" class="col-sm-10 control-label">Endpoint filter (<a target="_blank" href="https://github.com/google/re2/wiki/Syntax">Regexp</a> is accepted)</label>
<div class="col-sm-10">
<input type="text" id="endpoint_search" name="endpoint_search"
placeholder="For example, 127.0.0.1" class="form-control" value="{%s endpointSearch %}"/>
</div>
</div>
<div class="form-group mb-3">
<label for="label_search" class="col-sm-10 control-label">Labels filter (<a target="_blank" href="https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors">Arbitrary time series selectors</a> are accepted)</label>
<div class="col-sm-10">
<input type="text" id="label_search" name="label_search"
placeholder="For example, {instance=~'.+:9100'}" class="form-control" value="{%s labelSearch %}"/>
</div>
</div>
<input type="hidden" name="show_only_unhealthy" value="{%v showOnlyUnhealthy %}"/>
<button type="submit" class="btn btn-success mb-3">Submit</button>
</form>
</div>
<hr />
<div class="row">
<div class="col-12">
{% for i, js := range jts %}
{% if showOnlyUnhealthy && js.upCount == js.targetsTotal %}{% continue %}{% endif %}
<div class="row mb-4">
<div class="col-12">
<h4>
<span class="me-2">{%s js.job %}{% space %}({%d js.upCount %}/{%d js.targetsTotal %}{% space %}up)</span>
<button type="button" class="btn btn-primary btn-sm me-1"
onclick="document.getElementById('table-{%d i %}').style.display='none'">collapse
</button>
<button type="button" class="btn btn-secondary btn-sm me-1"
onclick="document.getElementById('table-{%d i %}').style.display='block'">expand
</button>
</h4>
<div id="table-{%d i %}" class="table-responsive">
<table class="table table-striped table-hover table-bordered table-sm">
<thead>
<tr>
<th scope="col">Endpoint</th>
<th scope="col">State</th>
<th scope="col" title="scrape target labels">Labels</th>
<th scope="col" title="total scrapes">Scrapes</th>
<th scope="col" title="total scrape errors">Errors</th>
<th scope="col" title="the time of the last scrape">Last Scrape</th>
<th scope="col" title="the duration of the last scrape">Duration</th>
<th scope="col" title="the number of metrics scraped during the last scrape">Samples</th>
<th scope="col" title="error from the last scrape (if any)">Last error</th>
</tr>
</thead>
<tbody class="list-{%d i %}">
{% for _, ts := range js.targetsStatus %}
{% code
endpoint := ts.sw.Config.ScrapeURL
targetID := getTargetID(ts.sw)
lastScrapeTime := ts.getDurationFromLastScrape()
%}
{% if showOnlyUnhealthy && ts.up %}{% continue %}{% endif %}
<tr {% if !ts.up %}{%space%}class="alert alert-danger" role="alert" {% endif %}>
<td class="endpoint"><a href="{%s endpoint %}" target="_blank">{%s endpoint %}</a> (
<a href="target_response?id={%s targetID %}" target="_blank"
title="click to fetch target response on behalf of the scraper">response</a>
)
</td>
<td>{% if ts.up %}UP{% else %}DOWN{% endif %}</td>
<td class="labels">
<div title="click to show original labels"
onclick="document.getElementById('original_labels_{%s targetID %}').style.display='block'">
{%= formatLabel(promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels)) %}
</div>
<div style="display:none" id="original_labels_{%s targetID %}">
{%= formatLabel(ts.sw.Config.OriginalLabels) %}
</div>
</td>
<td>{%d ts.scrapesTotal %}</td>
<td>{%d ts.scrapesFailed %}</td>
<td>
{% if lastScrapeTime < 365*24*time.Hour %}
{%f.3 lastScrapeTime.Seconds() %}s ago
{% else %}
none
                                            {% endif %}
                                        </td>
<td>{%d int(ts.scrapeDuration) %}ms</td>
<td>{%d ts.samplesScraped %}</td>
<td>{% if ts.err != nil %}{%s ts.err.Error() %}{% endif %}</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
</div>
</div>
{% endfor %}
</div>
</div>
{% for _, jobName := range emptyJobs %} {% for _, jobName := range emptyJobs %}
<div> <div>
<h4> <h4>
<a>{%s jobName %} (0/0 up)</a> <a>{%s jobName %} (0/0 up)</a>
</h4> </h4>
<table class="table table-striped table-hover table-bordered table-sm"> <table class="table table-striped table-hover table-bordered table-sm">
<thead> <thead>
<tr> <tr>
<th scope="col">Endpoint</th> <th scope="col">Endpoint</th>
<th scope="col">State</th> <th scope="col">State</th>
<th scope="col">Labels</th> <th scope="col">Labels</th>
<th scope="col">Last Scrape</th> <th scope="col">Last Scrape</th>
<th scope="col">Scrape Duration</th> <th scope="col">Scrape Duration</th>
<th scope="col">Samples Scraped</th> <th scope="col">Samples Scraped</th>
<th scope="col">Error</th> <th scope="col">Error</th>
</tr> </tr>
</thead> </thead>
</table> </table>
</div>
{% endfor %}
</main>
</div>
</div> </div>
{% endfor %}
</body> </body>
</html> </html>
{% endfunc %} {% endfunc %}
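The `endpoint_search` form field above treats its value as an RE2 regular expression matched against each target's scrape URL. A minimal sketch of that filtering logic in plain Go (the helper name `matchEndpoint` is hypothetical, not the actual promscrape implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchEndpoint reports whether a target's scrape URL passes the
// endpoint_search filter. The search string is compiled as an RE2
// regexp and matched (unanchored) against the URL; an empty filter
// matches every target.
func matchEndpoint(endpointSearch, scrapeURL string) (bool, error) {
	if endpointSearch == "" {
		return true, nil // no filter configured
	}
	re, err := regexp.Compile(endpointSearch)
	if err != nil {
		return false, err // invalid user-supplied regexp
	}
	return re.MatchString(scrapeURL), nil
}

func main() {
	ok, _ := matchEndpoint(`127\.0\.0\.1`, "http://127.0.0.1:9100/metrics")
	fmt.Println(ok)
}
```

An invalid regexp is reported back to the user rather than silently ignored, which is why the HTML variant accepts an `err` argument and renders it via `errorNotification`.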
{% func queryArgs(m map[string]string) %}
{% code
qa := make(url.Values, len(m))
for k, v := range m {
qa[k] = []string{v}
}
%}
{%s qa.Encode() %}
{% endfunc %}
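The `queryArgs` helper keeps the current filter state (`show_only_unhealthy`, `endpoint_search`, `label_search`) when switching between the All/Unhealthy views. It is a thin wrapper over `net/url`: each map entry becomes a single-valued query parameter, and `url.Values.Encode` percent-escapes values and emits keys in sorted order. A standalone sketch of the same logic:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildQueryArgs mirrors the queryArgs template helper: it converts a
// plain string map into an encoded URL query string.
func buildQueryArgs(m map[string]string) string {
	qa := make(url.Values, len(m))
	for k, v := range m {
		qa[k] = []string{v}
	}
	// Encode sorts keys alphabetically and percent-escapes values.
	return qa.Encode()
}

func main() {
	fmt.Println(buildQueryArgs(map[string]string{
		"show_only_unhealthy": "true",
		"endpoint_search":     "127.0.0.1",
	}))
	// → endpoint_search=127.0.0.1&show_only_unhealthy=true
}
```

Because `Encode` sorts by key, the generated links are deterministic regardless of Go's map iteration order.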
{% func formatLabel(labels []prompbmarshal.Label) %} {% func formatLabel(labels []prompbmarshal.Label) %}
{ {
{% for i, label := range labels %} {% for i, label := range labels %}
@ -175,4 +272,14 @@ function expand_all() {
} }
{% endfunc %} {% endfunc %}
{% func errorNotification(err error) %}
<div class="alert alert-danger d-flex align-items-center" role="alert">
<svg class="bi flex-shrink-0 me-2" width="24" height="24" role="img" aria-label="Danger:">
<use xlink:href="#exclamation-triangle-fill"/></svg>
<div>
{%s err.Error() %}
</div>
</div>
{% endfunc %}
{% endstripspace %} {% endstripspace %}
@ -1,474 +1,633 @@
// Code generated by qtc from "targetstatus.qtpl". DO NOT EDIT. // Code generated by qtc from "targetstatus.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details. // See https://github.com/valyala/quicktemplate for details.
//line lib/promscrape/targetstatus.qtpl:1 //line targetstatus.qtpl:1
package promscrape package promscrape
//line lib/promscrape/targetstatus.qtpl:1 //line targetstatus.qtpl:1
import ( import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal" "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel" "github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"net/url"
"time" "time"
) )
//line lib/promscrape/targetstatus.qtpl:9 //line targetstatus.qtpl:10
import ( import (
qtio422016 "io" qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate" qt422016 "github.com/valyala/quicktemplate"
) )
//line lib/promscrape/targetstatus.qtpl:9 //line targetstatus.qtpl:10
var ( var (
_ = qtio422016.Copy _ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer _ = qt422016.AcquireByteBuffer
) )
//line lib/promscrape/targetstatus.qtpl:9 //line targetstatus.qtpl:10
func StreamTargetsResponsePlain(qw422016 *qt422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOriginLabels bool) { func StreamTargetsResponsePlain(qw422016 *qt422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOriginLabels, showOnlyUnhealthy bool, err error) {
//line lib/promscrape/targetstatus.qtpl:11 //line targetstatus.qtpl:12
if err != nil {
//line targetstatus.qtpl:13
qw422016.N().S(err.Error())
//line targetstatus.qtpl:14
return
//line targetstatus.qtpl:15
}
//line targetstatus.qtpl:17
for _, js := range jts { for _, js := range jts {
//line lib/promscrape/targetstatus.qtpl:11 //line targetstatus.qtpl:18
if showOnlyUnhealthy && js.upCount == js.targetsTotal {
//line targetstatus.qtpl:18
continue
//line targetstatus.qtpl:18
}
//line targetstatus.qtpl:18
qw422016.N().S(`job=`) qw422016.N().S(`job=`)
//line lib/promscrape/targetstatus.qtpl:12 //line targetstatus.qtpl:19
qw422016.N().Q(js.job) qw422016.N().Q(js.job)
//line lib/promscrape/targetstatus.qtpl:12 //line targetstatus.qtpl:19
qw422016.N().S(`(`) qw422016.N().S(`(`)
//line lib/promscrape/targetstatus.qtpl:12 //line targetstatus.qtpl:19
qw422016.N().D(js.upCount) qw422016.N().D(js.upCount)
//line lib/promscrape/targetstatus.qtpl:12 //line targetstatus.qtpl:19
qw422016.N().S(`/`) qw422016.N().S(`/`)
//line lib/promscrape/targetstatus.qtpl:12 //line targetstatus.qtpl:19
qw422016.N().D(js.targetsTotal) qw422016.N().D(js.targetsTotal)
//line lib/promscrape/targetstatus.qtpl:12 //line targetstatus.qtpl:19
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:12 //line targetstatus.qtpl:19
qw422016.N().S(`up)`) qw422016.N().S(`up)`)
//line lib/promscrape/targetstatus.qtpl:13 //line targetstatus.qtpl:20
qw422016.N().S(` qw422016.N().S(`
`) `)
//line lib/promscrape/targetstatus.qtpl:14 //line targetstatus.qtpl:21
for _, ts := range js.targetsStatus { for _, ts := range js.targetsStatus {
//line lib/promscrape/targetstatus.qtpl:15 //line targetstatus.qtpl:22
if showOnlyUnhealthy && ts.up {
//line targetstatus.qtpl:22
continue
//line targetstatus.qtpl:22
}
//line targetstatus.qtpl:23
qw422016.N().S("\t") qw422016.N().S("\t")
//line lib/promscrape/targetstatus.qtpl:15 //line targetstatus.qtpl:23
qw422016.N().S(`state=`) qw422016.N().S(`state=`)
//line lib/promscrape/targetstatus.qtpl:16 //line targetstatus.qtpl:24
if ts.up { if ts.up {
//line lib/promscrape/targetstatus.qtpl:16 //line targetstatus.qtpl:24
qw422016.N().S(`up`) qw422016.N().S(`up`)
//line lib/promscrape/targetstatus.qtpl:16 //line targetstatus.qtpl:24
} else { } else {
//line lib/promscrape/targetstatus.qtpl:16 //line targetstatus.qtpl:24
qw422016.N().S(`down`) qw422016.N().S(`down`)
//line lib/promscrape/targetstatus.qtpl:16 //line targetstatus.qtpl:24
} }
//line lib/promscrape/targetstatus.qtpl:16 //line targetstatus.qtpl:24
qw422016.N().S(`,`) qw422016.N().S(`,`)
//line lib/promscrape/targetstatus.qtpl:16 //line targetstatus.qtpl:24
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:16 //line targetstatus.qtpl:24
qw422016.N().S(`endpoint=`) qw422016.N().S(`endpoint=`)
//line lib/promscrape/targetstatus.qtpl:17 //line targetstatus.qtpl:25
qw422016.N().S(ts.sw.Config.ScrapeURL) qw422016.N().S(ts.sw.Config.ScrapeURL)
//line lib/promscrape/targetstatus.qtpl:17 //line targetstatus.qtpl:25
qw422016.N().S(`,`) qw422016.N().S(`,`)
//line lib/promscrape/targetstatus.qtpl:17 //line targetstatus.qtpl:25
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:17 //line targetstatus.qtpl:25
qw422016.N().S(`labels=`) qw422016.N().S(`labels=`)
//line lib/promscrape/targetstatus.qtpl:18 //line targetstatus.qtpl:26
qw422016.N().S(promLabelsString(promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels))) qw422016.N().S(promLabelsString(promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels)))
//line lib/promscrape/targetstatus.qtpl:18 //line targetstatus.qtpl:26
qw422016.N().S(`,`) qw422016.N().S(`,`)
//line lib/promscrape/targetstatus.qtpl:18 //line targetstatus.qtpl:26
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:19 //line targetstatus.qtpl:27
if showOriginLabels { if showOriginLabels {
//line lib/promscrape/targetstatus.qtpl:19 //line targetstatus.qtpl:27
qw422016.N().S(`originalLabels=`) qw422016.N().S(`originalLabels=`)
//line lib/promscrape/targetstatus.qtpl:19 //line targetstatus.qtpl:27
qw422016.N().S(promLabelsString(ts.sw.Config.OriginalLabels)) qw422016.N().S(promLabelsString(ts.sw.Config.OriginalLabels))
//line lib/promscrape/targetstatus.qtpl:19 //line targetstatus.qtpl:27
qw422016.N().S(`,`) qw422016.N().S(`,`)
//line lib/promscrape/targetstatus.qtpl:19 //line targetstatus.qtpl:27
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:19 //line targetstatus.qtpl:27
} }
//line lib/promscrape/targetstatus.qtpl:19 //line targetstatus.qtpl:27
qw422016.N().S(`scrapes_total=`) qw422016.N().S(`scrapes_total=`)
//line lib/promscrape/targetstatus.qtpl:20 //line targetstatus.qtpl:28
qw422016.N().D(ts.scrapesTotal) qw422016.N().D(ts.scrapesTotal)
//line lib/promscrape/targetstatus.qtpl:20 //line targetstatus.qtpl:28
qw422016.N().S(`,`) qw422016.N().S(`,`)
//line lib/promscrape/targetstatus.qtpl:20 //line targetstatus.qtpl:28
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:20 //line targetstatus.qtpl:28
qw422016.N().S(`scrapes_failed=`) qw422016.N().S(`scrapes_failed=`)
//line lib/promscrape/targetstatus.qtpl:21 //line targetstatus.qtpl:29
qw422016.N().D(ts.scrapesFailed) qw422016.N().D(ts.scrapesFailed)
//line lib/promscrape/targetstatus.qtpl:21 //line targetstatus.qtpl:29
qw422016.N().S(`,`) qw422016.N().S(`,`)
//line lib/promscrape/targetstatus.qtpl:21 //line targetstatus.qtpl:29
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:21 //line targetstatus.qtpl:29
qw422016.N().S(`last_scrape=`) qw422016.N().S(`last_scrape=`)
//line lib/promscrape/targetstatus.qtpl:22 //line targetstatus.qtpl:30
qw422016.N().FPrec(ts.getDurationFromLastScrape().Seconds(), 3) qw422016.N().FPrec(ts.getDurationFromLastScrape().Seconds(), 3)
//line lib/promscrape/targetstatus.qtpl:22 //line targetstatus.qtpl:30
qw422016.N().S(`s ago,`) qw422016.N().S(`s ago,`)
//line lib/promscrape/targetstatus.qtpl:22 //line targetstatus.qtpl:30
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:22 //line targetstatus.qtpl:30
qw422016.N().S(`scrape_duration=`) qw422016.N().S(`scrape_duration=`)
//line lib/promscrape/targetstatus.qtpl:23 //line targetstatus.qtpl:31
qw422016.N().D(int(ts.scrapeDuration)) qw422016.N().D(int(ts.scrapeDuration))
//line lib/promscrape/targetstatus.qtpl:23 //line targetstatus.qtpl:31
qw422016.N().S(`ms,`) qw422016.N().S(`ms,`)
//line lib/promscrape/targetstatus.qtpl:23 //line targetstatus.qtpl:31
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:23 //line targetstatus.qtpl:31
qw422016.N().S(`samples_scraped=`) qw422016.N().S(`samples_scraped=`)
//line lib/promscrape/targetstatus.qtpl:24 //line targetstatus.qtpl:32
qw422016.N().D(ts.samplesScraped) qw422016.N().D(ts.samplesScraped)
//line lib/promscrape/targetstatus.qtpl:24 //line targetstatus.qtpl:32
qw422016.N().S(`,`) qw422016.N().S(`,`)
//line lib/promscrape/targetstatus.qtpl:24 //line targetstatus.qtpl:32
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:24 //line targetstatus.qtpl:32
qw422016.N().S(`error=`) qw422016.N().S(`error=`)
//line lib/promscrape/targetstatus.qtpl:25 //line targetstatus.qtpl:33
if ts.err != nil { if ts.err != nil {
//line lib/promscrape/targetstatus.qtpl:25 //line targetstatus.qtpl:33
qw422016.N().S(ts.err.Error()) qw422016.N().S(ts.err.Error())
//line lib/promscrape/targetstatus.qtpl:25 //line targetstatus.qtpl:33
} }
//line lib/promscrape/targetstatus.qtpl:26 //line targetstatus.qtpl:34
qw422016.N().S(` qw422016.N().S(`
`) `)
//line lib/promscrape/targetstatus.qtpl:27 //line targetstatus.qtpl:35
} }
//line lib/promscrape/targetstatus.qtpl:28 //line targetstatus.qtpl:36
} }
//line lib/promscrape/targetstatus.qtpl:30 //line targetstatus.qtpl:38
for _, jobName := range emptyJobs { for _, jobName := range emptyJobs {
//line lib/promscrape/targetstatus.qtpl:30 //line targetstatus.qtpl:38
qw422016.N().S(`job=`) qw422016.N().S(`job=`)
//line lib/promscrape/targetstatus.qtpl:31 //line targetstatus.qtpl:39
qw422016.N().Q(jobName) qw422016.N().Q(jobName)
//line lib/promscrape/targetstatus.qtpl:31 //line targetstatus.qtpl:39
qw422016.N().S(`(0/0 up)`) qw422016.N().S(`(0/0 up)`)
//line lib/promscrape/targetstatus.qtpl:32 //line targetstatus.qtpl:40
qw422016.N().S(` qw422016.N().S(`
`) `)
//line lib/promscrape/targetstatus.qtpl:33 //line targetstatus.qtpl:41
} }
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
} }
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
func WriteTargetsResponsePlain(qq422016 qtio422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOriginLabels bool) { func WriteTargetsResponsePlain(qq422016 qtio422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOriginLabels, showOnlyUnhealthy bool, err error) {
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
qw422016 := qt422016.AcquireWriter(qq422016) qw422016 := qt422016.AcquireWriter(qq422016)
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
StreamTargetsResponsePlain(qw422016, jts, emptyJobs, showOriginLabels) StreamTargetsResponsePlain(qw422016, jts, emptyJobs, showOriginLabels, showOnlyUnhealthy, err)
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
qt422016.ReleaseWriter(qw422016) qt422016.ReleaseWriter(qw422016)
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
} }
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
func TargetsResponsePlain(jts []jobTargetsStatuses, emptyJobs []string, showOriginLabels bool) string { func TargetsResponsePlain(jts []jobTargetsStatuses, emptyJobs []string, showOriginLabels, showOnlyUnhealthy bool, err error) string {
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
qb422016 := qt422016.AcquireByteBuffer() qb422016 := qt422016.AcquireByteBuffer()
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
WriteTargetsResponsePlain(qb422016, jts, emptyJobs, showOriginLabels) WriteTargetsResponsePlain(qb422016, jts, emptyJobs, showOriginLabels, showOnlyUnhealthy, err)
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
qs422016 := string(qb422016.B) qs422016 := string(qb422016.B)
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
qt422016.ReleaseByteBuffer(qb422016) qt422016.ReleaseByteBuffer(qb422016)
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
return qs422016 return qs422016
//line lib/promscrape/targetstatus.qtpl:35 //line targetstatus.qtpl:43
} }
//line lib/promscrape/targetstatus.qtpl:37 //line targetstatus.qtpl:45
func StreamTargetsResponseHTML(qw422016 *qt422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, onlyUnhealthy bool) { func StreamTargetsResponseHTML(qw422016 *qt422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, endpointSearch, labelSearch string, err error) {
//line lib/promscrape/targetstatus.qtpl:37 //line targetstatus.qtpl:45
qw422016.N().S(`<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1"><link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous"><title>Scrape targets</title><script>function collapse_all() {for (var i = 0; i <`) qw422016.N().S(`<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta name="viewport" content="width=device-width, initial-scale=1"><link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous"><title>Scrape targets</title><script>function collapse_all() {for (var i = 0; i <`)
//line lib/promscrape/targetstatus.qtpl:47 //line targetstatus.qtpl:55
qw422016.N().D(len(jts)) qw422016.N().D(len(jts))
//line lib/promscrape/targetstatus.qtpl:47 //line targetstatus.qtpl:55
qw422016.N().S(`; i++) {let el = document.getElementById("table-" + i);if (!el) {continue;}el.style.display = 'none';}}function expand_all() {for (var i = 0; i <`) qw422016.N().S(`; i++) {let el = document.getElementById("table-" + i);if (!el) {continue;}el.style.display = 'none';}}function expand_all() {for (var i = 0; i <`)
//line lib/promscrape/targetstatus.qtpl:56 //line targetstatus.qtpl:64
qw422016.N().D(len(jts)) qw422016.N().D(len(jts))
//line lib/promscrape/targetstatus.qtpl:56 //line targetstatus.qtpl:64
qw422016.N().S(`; i++) {let el = document.getElementById("table-" + i);if (!el) {continue;}el.style.display = 'block';}}</script></head><body class="m-3"><h1>Scrape targets</h1><div style="padding: 3px"><button type="button" class="btn`) qw422016.N().S(`; i++) {let el = document.getElementById("table-" + i);if (!el) {continue;}el.style.display = 'block';}}</script></head><body><div class="navbar navbar-dark bg-dark box-shadow"><div class="d-flex justify-content-between"><a href="#" class="navbar-brand d-flex align-items-center ms-3" title="The High Performance Open Source Time Series Database &amp; Monitoring Solution "><svg xmlns="http://www.w3.org/2000/svg" id="VM_logo" viewBox="0 0 464.61 533.89" width="20" height="20" class="me-1"><defs><style>.cls-1{fill:#fff;}</style></defs><path class="cls-1" d="M459.86,467.77c9,7.67,24.12,13.49,39.3,13.69v0h1.68v0c15.18-.2,30.31-6,39.3-13.69,47.43-40.45,184.65-166.24,184.65-166.24,36.84-34.27-65.64-68.28-223.95-68.47h-1.68c-158.31.19-260.79,34.2-224,68.47C275.21,301.53,412.43,427.32,459.86,467.77Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,535.88c-9,7.67-24.12,13.5-39.3,13.7h-1.6c-15.18-.2-30.31-6-39.3-13.7-32.81-28-148.56-132.93-192.16-172.7v60.74c0,6.67,2.55,15.52,7.09,19.68,29.64,27.18,143.94,131.8,185.07,166.88,9,7.67,24.12,13.49,39.3,13.69v0h1.6v0c15.18-.2,30.31-6,39.3-13.69,41.13-35.08,155.43-139.7,185.07-166.88,4.54-4.16,7.09-13,7.09-19.68V363.18C688.66,403,572.91,507.9,540.1,535.88Z" transform="translate(-267.7 -233.05)"/><path class="cls-1" d="M540.1,678.64c-9,7.67-24.12,13.49-39.3,13.69v0h-1.6v0c-15.18-.2,-30.31-6-39.3-13.69-32.81-28-148.56-132.94-192.16-172.7v60.73c0,6.67,2.55,15.53,7.09,19.69,29.64,27.17,143.94,131.8,185.07,166.87,9,7.67,24.12,13.5,39.3,13.7h1.6c15.18-.2,30.31-6,39.3-13.7,41.13-35.07,155.43-139.7,185.07-166.87,4.54-4.16,7.09-13,7.09-19.69V505.94C688.66,545.7,572.91,650.66,540.1,678.64Z" transform="translate(-267.7 -233.05)"/></svg><strong>VictoriaMetrics</strong></a></div></div><div class="container-fluid">`)
//line lib/promscrape/targetstatus.qtpl:69 //line targetstatus.qtpl:84
qw422016.N().S(` `) if err != nil {
//line lib/promscrape/targetstatus.qtpl:69 //line targetstatus.qtpl:85
if !onlyUnhealthy { streamerrorNotification(qw422016, err)
//line lib/promscrape/targetstatus.qtpl:69 //line targetstatus.qtpl:86
qw422016.N().S(`btn-primary`)
//line lib/promscrape/targetstatus.qtpl:69
} else {
//line lib/promscrape/targetstatus.qtpl:69
qw422016.N().S(`btn-secondary`)
//line lib/promscrape/targetstatus.qtpl:69
} }
//line lib/promscrape/targetstatus.qtpl:69 //line targetstatus.qtpl:86
qw422016.N().S(`" onclick="location.href='targets'">All</button><button type="button" class="btn`) qw422016.N().S(`<div class="row"><main class="col-12"><h1>Scrape targets</h1><hr /><div class="row g-3 align-items-center mb-3"><div class="col-auto"><button type="button" class="btn`)
//line lib/promscrape/targetstatus.qtpl:72 //line targetstatus.qtpl:93
qw422016.N().S(` `) qw422016.N().S(` `)
//line lib/promscrape/targetstatus.qtpl:72 //line targetstatus.qtpl:93
if onlyUnhealthy { if !showOnlyUnhealthy {
//line lib/promscrape/targetstatus.qtpl:72 //line targetstatus.qtpl:93
qw422016.N().S(`btn-primary`)
//line lib/promscrape/targetstatus.qtpl:72
} else {
//line lib/promscrape/targetstatus.qtpl:72
qw422016.N().S(`btn-secondary`) qw422016.N().S(`btn-secondary`)
//line lib/promscrape/targetstatus.qtpl:72 //line targetstatus.qtpl:93
} else {
//line targetstatus.qtpl:93
qw422016.N().S(`btn-success`)
//line targetstatus.qtpl:93
} }
//line lib/promscrape/targetstatus.qtpl:72 //line targetstatus.qtpl:93
qw422016.N().S(`" onclick="location.href='targets?show_only_unhealthy=true'">Unhealthy</button><button type="button" class="btn btn-primary" onclick="collapse_all()">Collapse all</button><button type="button" class="btn btn-secondary" onclick="expand_all()">Expand all</button></div>`) qw422016.N().S(`" onclick="location.href='?`)
//line lib/promscrape/targetstatus.qtpl:82 //line targetstatus.qtpl:93
streamqueryArgs(qw422016, map[string]string{
"show_only_unhealthy": "false",
"endpoint_search": endpointSearch,
"label_search": labelSearch,
})
//line targetstatus.qtpl:97
qw422016.N().S(`'">All</button></div><div class="col-auto"><button type="button" class="btn`)
//line targetstatus.qtpl:102
qw422016.N().S(` `)
//line targetstatus.qtpl:102
if showOnlyUnhealthy {
//line targetstatus.qtpl:102
qw422016.N().S(`btn-secondary`)
//line targetstatus.qtpl:102
} else {
//line targetstatus.qtpl:102
qw422016.N().S(`btn-danger`)
//line targetstatus.qtpl:102
}
//line targetstatus.qtpl:102
qw422016.N().S(`" onclick="location.href='?`)
//line targetstatus.qtpl:102
streamqueryArgs(qw422016, map[string]string{
"show_only_unhealthy": "true",
"endpoint_search": endpointSearch,
"label_search": labelSearch,
})
//line targetstatus.qtpl:106
qw422016.N().S(`'">Unhealthy</button></div><div class="col-auto"><button type="button" class="btn btn-primary" onclick="collapse_all()">Collapse all</button></div><div class="col-auto"><button type="button" class="btn btn-secondary" onclick="expand_all()">Expand all</button></div><div class="col-auto">`)
//line targetstatus.qtpl:121
if endpointSearch == "" && labelSearch == "" {
//line targetstatus.qtpl:121
qw422016.N().S(`<button type="button" class="btn btn-success" onclick="document.getElementById('filters').style.display='block'">Filter targets</button>`)
//line targetstatus.qtpl:125
} else {
//line targetstatus.qtpl:125
qw422016.N().S(`<button type="button" class="btn btn-danger" onclick="location.href='?'">Clear target filters</button>`)
//line targetstatus.qtpl:129
}
//line targetstatus.qtpl:129
qw422016.N().S(`</div></div><div id="filters"`)
//line targetstatus.qtpl:132
if endpointSearch == "" && labelSearch == "" {
//line targetstatus.qtpl:132
qw422016.N().S(`style="display:none"`)
//line targetstatus.qtpl:132
}
//line targetstatus.qtpl:132
qw422016.N().S(`><form class="form-horizontal"><div class="form-group mb-3"><label for="endpoint_search" class="col-sm-10 control-label">Endpoint filter (<a target="_blank" href="https://github.com/google/re2/wiki/Syntax">Regexp</a> is accepted)</label><div class="col-sm-10"><input type="text" id="endpoint_search" name="endpoint_search"placeholder="For example, 127.0.0.1" class="form-control" value="`)
//line targetstatus.qtpl:138
qw422016.E().S(endpointSearch)
//line targetstatus.qtpl:138
qw422016.N().S(`"/></div></div><div class="form-group mb-3"><label for="label_search" class="col-sm-10 control-label">Labels filter (<a target="_blank" href="https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors">Arbitrary time series selectors</a> are accepted)</label><div class="col-sm-10"><input type="text" id="label_search" name="label_search"placeholder="For example, {instance=~'.+:9100'}" class="form-control" value="`)
//line targetstatus.qtpl:145
qw422016.E().S(labelSearch)
//line targetstatus.qtpl:145
qw422016.N().S(`"/></div></div><input type="hidden" name="show_only_unhealthy" value="`)
//line targetstatus.qtpl:148
qw422016.E().V(showOnlyUnhealthy)
//line targetstatus.qtpl:148
qw422016.N().S(`"/><button type="submit" class="btn btn-success mb-3">Submit</button></form></div><hr /><div class="row"><div class="col-12">`)
//line targetstatus.qtpl:155
for i, js := range jts {
//line targetstatus.qtpl:156
if showOnlyUnhealthy && js.upCount == js.targetsTotal {
//line targetstatus.qtpl:156
continue
//line targetstatus.qtpl:156
}
//line targetstatus.qtpl:156
qw422016.N().S(`<div class="row mb-4"><div class="col-12"><h4><span class="me-2">`)
//line targetstatus.qtpl:160
qw422016.E().S(js.job)
//line targetstatus.qtpl:160
qw422016.N().S(` `)
//line targetstatus.qtpl:160
qw422016.N().S(`(`)
//line targetstatus.qtpl:160
qw422016.N().D(js.upCount)
//line targetstatus.qtpl:160
qw422016.N().S(`/`)
//line targetstatus.qtpl:160
qw422016.N().D(js.targetsTotal)
//line targetstatus.qtpl:160
qw422016.N().S(` `)
//line targetstatus.qtpl:160
qw422016.N().S(`up)</span><button type="button" class="btn btn-primary btn-sm me-1"onclick="document.getElementById('table-`)
//line targetstatus.qtpl:162
qw422016.N().D(i)
//line targetstatus.qtpl:162
qw422016.N().S(`').style.display='none'">collapse</button><button type="button" class="btn btn-secondary btn-sm me-1"onclick="document.getElementById('table-`)
//line targetstatus.qtpl:165
qw422016.N().D(i)
//line targetstatus.qtpl:165
qw422016.N().S(`').style.display='block'">expand</button></h4><div id="table-`)
//line targetstatus.qtpl:168
qw422016.N().D(i)
//line targetstatus.qtpl:168
qw422016.N().S(`" class="table-responsive"><table class="table table-striped table-hover table-bordered table-sm"><thead><tr><th scope="col">Endpoint</th><th scope="col">State</th><th scope="col" title="scrape target labels">Labels</th><th scope="col" title="total scrapes">Scrapes</th><th scope="col" title="total scrape errors">Errors</th><th scope="col" title="the time of the last scrape">Last Scrape</th><th scope="col" title="the duration of the last scrape">Duration</th><th scope="col" title="the number of metrics scraped during the last scrape">Samples</th><th scope="col" title="error from the last scrape (if any)">Last error</th></tr></thead><tbody class="list-`)
//line targetstatus.qtpl:183
qw422016.N().D(i)
//line targetstatus.qtpl:183
qw422016.N().S(`">`)
//line targetstatus.qtpl:184
for _, ts := range js.targetsStatus {
//line targetstatus.qtpl:186
endpoint := ts.sw.Config.ScrapeURL
targetID := getTargetID(ts.sw)
lastScrapeTime := ts.getDurationFromLastScrape()
//line targetstatus.qtpl:190
if showOnlyUnhealthy && ts.up {
//line targetstatus.qtpl:190
continue
//line targetstatus.qtpl:190
}
//line targetstatus.qtpl:190
qw422016.N().S(`<tr`)
//line targetstatus.qtpl:191
if !ts.up {
//line targetstatus.qtpl:191
qw422016.N().S(` `)
//line targetstatus.qtpl:191
qw422016.N().S(`class="alert alert-danger" role="alert"`)
//line targetstatus.qtpl:191
}
//line targetstatus.qtpl:191
qw422016.N().S(`><td class="endpoint"><a href="`)
//line targetstatus.qtpl:192
qw422016.E().S(endpoint)
//line targetstatus.qtpl:192
qw422016.N().S(`" target="_blank">`)
//line targetstatus.qtpl:192
qw422016.E().S(endpoint)
//line targetstatus.qtpl:192
qw422016.N().S(`</a> (<a href="target_response?id=`)
//line targetstatus.qtpl:193
qw422016.E().S(targetID)
//line targetstatus.qtpl:193
qw422016.N().S(`" target="_blank"title="click to fetch target response on behalf of the scraper">response</a>)</td><td>`)
//line targetstatus.qtpl:197
if ts.up {
//line targetstatus.qtpl:197
qw422016.N().S(`UP`)
//line targetstatus.qtpl:197
} else {
//line targetstatus.qtpl:197
qw422016.N().S(`DOWN`)
//line targetstatus.qtpl:197
}
//line targetstatus.qtpl:197
qw422016.N().S(`</td><td class="labels"><div title="click to show original labels"onclick="document.getElementById('original_labels_`)
//line targetstatus.qtpl:200
qw422016.E().S(targetID)
//line targetstatus.qtpl:200
qw422016.N().S(`').style.display='block'">`)
//line targetstatus.qtpl:201
streamformatLabel(qw422016, promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels))
//line targetstatus.qtpl:201
qw422016.N().S(`</div><div style="display:none" id="original_labels_`)
//line targetstatus.qtpl:203
qw422016.E().S(targetID)
//line targetstatus.qtpl:203
qw422016.N().S(`">`)
//line targetstatus.qtpl:204
streamformatLabel(qw422016, ts.sw.Config.OriginalLabels)
//line targetstatus.qtpl:204
qw422016.N().S(`</div></td><td>`)
//line targetstatus.qtpl:207
qw422016.N().D(ts.scrapesTotal)
//line targetstatus.qtpl:207
qw422016.N().S(`</td><td>`)
//line targetstatus.qtpl:208
qw422016.N().D(ts.scrapesFailed)
//line targetstatus.qtpl:208
qw422016.N().S(`</td><td>`)
//line targetstatus.qtpl:210
if lastScrapeTime < 365*24*time.Hour {
//line targetstatus.qtpl:211
qw422016.N().FPrec(lastScrapeTime.Seconds(), 3)
//line targetstatus.qtpl:211
qw422016.N().S(`s ago`)
//line targetstatus.qtpl:212
} else {
//line targetstatus.qtpl:212
qw422016.N().S(`none`)
//line targetstatus.qtpl:214
}
//line targetstatus.qtpl:214
qw422016.N().S(`<td>`)
//line targetstatus.qtpl:215
qw422016.N().D(int(ts.scrapeDuration))
//line targetstatus.qtpl:215
qw422016.N().S(`ms</td><td>`)
//line targetstatus.qtpl:216
qw422016.N().D(ts.samplesScraped)
//line targetstatus.qtpl:216
qw422016.N().S(`</td><td>`)
//line targetstatus.qtpl:217
if ts.err != nil {
//line targetstatus.qtpl:217
qw422016.E().S(ts.err.Error())
//line targetstatus.qtpl:217
}
//line targetstatus.qtpl:217
qw422016.N().S(`</td></tr>`)
//line targetstatus.qtpl:219
}
//line targetstatus.qtpl:219
qw422016.N().S(`</tbody></table></div></div></div>`)
//line targetstatus.qtpl:225
}
//line targetstatus.qtpl:225
qw422016.N().S(`</div></div>`)
//line targetstatus.qtpl:229
for _, jobName := range emptyJobs {
//line targetstatus.qtpl:229
qw422016.N().S(`<div><h4><a>`)
//line targetstatus.qtpl:232
qw422016.E().S(jobName)
//line targetstatus.qtpl:232
qw422016.N().S(`(0/0 up)</a></h4><table class="table table-striped table-hover table-bordered table-sm"><thead><tr><th scope="col">Endpoint</th><th scope="col">State</th><th scope="col">Labels</th><th scope="col">Last Scrape</th><th scope="col">Scrape Duration</th><th scope="col">Samples Scraped</th><th scope="col">Error</th></tr></thead></table></div>`)
//line targetstatus.qtpl:248
}
//line targetstatus.qtpl:248
qw422016.N().S(`</main></div></div></body></html>`)
//line targetstatus.qtpl:254
}
//line targetstatus.qtpl:254
func WriteTargetsResponseHTML(qq422016 qtio422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, endpointSearch, labelSearch string, err error) {
//line targetstatus.qtpl:254
qw422016 := qt422016.AcquireWriter(qq422016)
//line targetstatus.qtpl:254
StreamTargetsResponseHTML(qw422016, jts, emptyJobs, showOnlyUnhealthy, endpointSearch, labelSearch, err)
//line targetstatus.qtpl:254
qt422016.ReleaseWriter(qw422016)
//line targetstatus.qtpl:254
}
//line targetstatus.qtpl:254
func TargetsResponseHTML(jts []jobTargetsStatuses, emptyJobs []string, showOnlyUnhealthy bool, endpointSearch, labelSearch string, err error) string {
//line targetstatus.qtpl:254
qb422016 := qt422016.AcquireByteBuffer()
//line targetstatus.qtpl:254
WriteTargetsResponseHTML(qb422016, jts, emptyJobs, showOnlyUnhealthy, endpointSearch, labelSearch, err)
//line targetstatus.qtpl:254
qs422016 := string(qb422016.B)
//line targetstatus.qtpl:254
qt422016.ReleaseByteBuffer(qb422016)
//line targetstatus.qtpl:254
return qs422016
//line targetstatus.qtpl:254
}
//line targetstatus.qtpl:256
func streamqueryArgs(qw422016 *qt422016.Writer, m map[string]string) {
//line targetstatus.qtpl:258
qa := make(url.Values, len(m))
for k, v := range m {
qa[k] = []string{v}
}
//line targetstatus.qtpl:263
qw422016.E().S(qa.Encode())
//line targetstatus.qtpl:264
}
//line targetstatus.qtpl:264
func writequeryArgs(qq422016 qtio422016.Writer, m map[string]string) {
//line targetstatus.qtpl:264
qw422016 := qt422016.AcquireWriter(qq422016)
//line targetstatus.qtpl:264
streamqueryArgs(qw422016, m)
//line targetstatus.qtpl:264
qt422016.ReleaseWriter(qw422016)
//line targetstatus.qtpl:264
}
//line targetstatus.qtpl:264
func queryArgs(m map[string]string) string {
//line targetstatus.qtpl:264
qb422016 := qt422016.AcquireByteBuffer()
//line targetstatus.qtpl:264
writequeryArgs(qb422016, m)
//line targetstatus.qtpl:264
qs422016 := string(qb422016.B)
//line targetstatus.qtpl:264
qt422016.ReleaseByteBuffer(qb422016)
//line targetstatus.qtpl:264
return qs422016
//line targetstatus.qtpl:264
}
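The generated `queryArgs` helpers above reduce to copying a `map[string]string` into `url.Values` and calling `Encode`, which percent-escapes values and emits keys in sorted order. A minimal standalone sketch of the same idea (the `encodeQueryArgs` name is hypothetical, not part of the generated code):

```go
package main

import (
	"fmt"
	"net/url"
)

// encodeQueryArgs copies the map into url.Values and lets Encode handle
// escaping and deterministic (sorted-by-key) ordering of the query string.
func encodeQueryArgs(m map[string]string) string {
	qa := make(url.Values, len(m))
	for k, v := range m {
		qa[k] = []string{v}
	}
	return qa.Encode()
}

func main() {
	s := encodeQueryArgs(map[string]string{
		"show_only_unhealthy": "true",
		"endpoint_search":     "127.0.0.1:9100",
	})
	fmt.Println(s) // endpoint_search=127.0.0.1%3A9100&show_only_unhealthy=true
}
```

The sorted ordering from `Encode` is what makes the generated links stable across page reloads.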
//line targetstatus.qtpl:266
func streamformatLabel(qw422016 *qt422016.Writer, labels []prompbmarshal.Label) {
//line targetstatus.qtpl:266
qw422016.N().S(`{`)
//line targetstatus.qtpl:268
for i, label := range labels {
//line targetstatus.qtpl:269
qw422016.E().S(label.Name)
//line targetstatus.qtpl:269
qw422016.N().S(`=`)
//line targetstatus.qtpl:269
qw422016.E().Q(label.Value)
//line targetstatus.qtpl:270
if i+1 < len(labels) {
//line targetstatus.qtpl:270
qw422016.N().S(`,`)
//line targetstatus.qtpl:270
qw422016.N().S(` `)
//line targetstatus.qtpl:270
}
//line targetstatus.qtpl:271
}
//line targetstatus.qtpl:271
qw422016.N().S(`}`)
//line targetstatus.qtpl:273
}
//line targetstatus.qtpl:273
func writeformatLabel(qq422016 qtio422016.Writer, labels []prompbmarshal.Label) {
//line targetstatus.qtpl:273
qw422016 := qt422016.AcquireWriter(qq422016)
//line targetstatus.qtpl:273
streamformatLabel(qw422016, labels)
//line targetstatus.qtpl:273
qt422016.ReleaseWriter(qw422016)
//line targetstatus.qtpl:273
}
//line targetstatus.qtpl:273
func formatLabel(labels []prompbmarshal.Label) string {
//line targetstatus.qtpl:273
qb422016 := qt422016.AcquireByteBuffer()
//line targetstatus.qtpl:273
writeformatLabel(qb422016, labels)
//line targetstatus.qtpl:273
qs422016 := string(qb422016.B)
//line targetstatus.qtpl:273
qt422016.ReleaseByteBuffer(qb422016)
//line targetstatus.qtpl:273
return qs422016
//line targetstatus.qtpl:273
}
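The `formatLabel` helper above renders scrape-target labels in the Prometheus selector style `{name="value", name2="value2"}`, quoting each value and comma-separating the pairs. A rough standalone equivalent (the local `Label` type and `formatLabels` name are illustrative stand-ins for the `prompbmarshal.Label`-based template helper):

```go
package main

import (
	"fmt"
	"strings"
)

// Label is a simplified stand-in for prompbmarshal.Label.
type Label struct {
	Name, Value string
}

// formatLabels renders labels in Prometheus selector style:
// each value is double-quoted (%q escapes embedded quotes),
// and pairs are joined with ", " inside curly braces.
func formatLabels(labels []Label) string {
	var sb strings.Builder
	sb.WriteByte('{')
	for i, l := range labels {
		if i > 0 {
			sb.WriteString(", ")
		}
		fmt.Fprintf(&sb, "%s=%q", l.Name, l.Value)
	}
	sb.WriteByte('}')
	return sb.String()
}

func main() {
	fmt.Println(formatLabels([]Label{{"job", "node"}, {"instance", "127.0.0.1:9100"}}))
	// {job="node", instance="127.0.0.1:9100"}
}
```

The generated version additionally HTML-escapes label names via `E()`, since the output is embedded in an HTML page.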
//line targetstatus.qtpl:275
func streamerrorNotification(qw422016 *qt422016.Writer, err error) {
//line targetstatus.qtpl:275
qw422016.N().S(`<div class="alert alert-danger d-flex align-items-center" role="alert"><svg class="bi flex-shrink-0 me-2" width="24" height="24" role="img" aria-label="Danger:"><use xlink:href="#exclamation-triangle-fill"/></svg><div>`)
//line targetstatus.qtpl:280
qw422016.E().S(err.Error())
//line targetstatus.qtpl:280
qw422016.N().S(`</div></div>`)
//line targetstatus.qtpl:283
}
//line targetstatus.qtpl:283
func writeerrorNotification(qq422016 qtio422016.Writer, err error) {
//line targetstatus.qtpl:283
qw422016 := qt422016.AcquireWriter(qq422016)
//line targetstatus.qtpl:283
streamerrorNotification(qw422016, err)
//line targetstatus.qtpl:283
qt422016.ReleaseWriter(qw422016)
//line targetstatus.qtpl:283
}
//line targetstatus.qtpl:283
func errorNotification(err error) string {
//line targetstatus.qtpl:283
qb422016 := qt422016.AcquireByteBuffer()
//line targetstatus.qtpl:283
writeerrorNotification(qb422016, err)
//line targetstatus.qtpl:283
qs422016 := string(qb422016.B)
//line targetstatus.qtpl:283
qt422016.ReleaseByteBuffer(qb422016)
//line targetstatus.qtpl:283
return qs422016
//line targetstatus.qtpl:283
} }


@@ -12,8 +12,8 @@ type Duration struct {
 }

 // NewDuration returns Duration for given d.
-func NewDuration(d time.Duration) Duration {
-	return Duration{
+func NewDuration(d time.Duration) *Duration {
+	return &Duration{
 		d: d,
 	}
 }
@@ -38,7 +38,10 @@ func (pd *Duration) UnmarshalYAML(unmarshal func(interface{}) error) error {
 }

 // Duration returns duration for pd.
-func (pd Duration) Duration() time.Duration {
+func (pd *Duration) Duration() time.Duration {
+	if pd == nil {
+		return 0
+	}
 	return pd.d
 }


@@ -1,4 +1,4 @@
-GO_VERSION ?=1.17.7
+GO_VERSION ?=1.18.1

 SNAP_BUILDER_IMAGE := local/snap-builder:2.0.0-$(shell echo $(GO_VERSION) | tr :/ __)


@@ -19167,6 +19167,40 @@ var awsPartition = partition{
 			},
 		},
 	},
+	"sms-voice": service{
+		Endpoints: serviceEndpoints{
+			endpointKey{
+				Region: "ap-northeast-1",
+			}: endpoint{},
+			endpointKey{
+				Region: "ap-south-1",
+			}: endpoint{},
+			endpointKey{
+				Region: "ap-southeast-1",
+			}: endpoint{},
+			endpointKey{
+				Region: "ap-southeast-2",
+			}: endpoint{},
+			endpointKey{
+				Region: "ca-central-1",
+			}: endpoint{},
+			endpointKey{
+				Region: "eu-central-1",
+			}: endpoint{},
+			endpointKey{
+				Region: "eu-west-1",
+			}: endpoint{},
+			endpointKey{
+				Region: "eu-west-2",
+			}: endpoint{},
+			endpointKey{
+				Region: "us-east-1",
+			}: endpoint{},
+			endpointKey{
+				Region: "us-west-2",
+			}: endpoint{},
+		},
+	},
 	"snowball": service{
 		Endpoints: serviceEndpoints{
 			endpointKey{
@@ -28035,6 +28069,13 @@ var awsusgovPartition = partition{
 			},
 		},
 	},
+	"sms-voice": service{
+		Endpoints: serviceEndpoints{
+			endpointKey{
+				Region: "us-gov-west-1",
+			}: endpoint{},
+		},
+	},
 	"snowball": service{
 		Endpoints: serviceEndpoints{
 			endpointKey{
@@ -29037,6 +29078,18 @@ var awsisoPartition = partition{
 			}: endpoint{},
 		},
 	},
+	"eks": service{
+		Defaults: endpointDefaults{
+			defaultKey{}: endpoint{
+				Protocols: []string{"http", "https"},
+			},
+		},
+		Endpoints: serviceEndpoints{
+			endpointKey{
+				Region: "us-iso-east-1",
+			}: endpoint{},
+		},
+	},
 	"elasticache": service{
 		Endpoints: serviceEndpoints{
 			endpointKey{
@@ -29706,6 +29759,18 @@ var awsisobPartition = partition{
 			}: endpoint{},
 		},
 	},
+	"eks": service{
+		Defaults: endpointDefaults{
+			defaultKey{}: endpoint{
+				Protocols: []string{"http", "https"},
+			},
+		},
+		Endpoints: serviceEndpoints{
+			endpointKey{
+				Region: "us-isob-east-1",
+			}: endpoint{},
+		},
+	},
 	"elasticache": service{
 		Endpoints: serviceEndpoints{
 			endpointKey{


@@ -5,4 +5,4 @@ package aws
 const SDKName = "aws-sdk-go"

 // SDKVersion is the version of this SDK
-const SDKVersion = "1.43.37"
+const SDKVersion = "1.43.41"


@@ -2316,17 +2316,18 @@ type requestBody struct {
 	_             incomparable
 	stream        *stream
 	conn          *serverConn
-	closed        bool      // for use by Close only
+	closeOnce     sync.Once // for use by Close only
 	sawEOF        bool      // for use by Read only
 	pipe          *pipe     // non-nil if we have a HTTP entity message body
 	needsContinue bool      // need to send a 100-continue
 }

 func (b *requestBody) Close() error {
-	if b.pipe != nil && !b.closed {
-		b.pipe.BreakWithError(errClosedBody)
-	}
-	b.closed = true
+	b.closeOnce.Do(func() {
+		if b.pipe != nil {
+			b.pipe.BreakWithError(errClosedBody)
+		}
+	})
 	return nil
 }
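The `golang.org/x/net/http2` hunk above replaces a plain `closed bool` flag with `sync.Once`, making `Close` idempotent and safe under concurrent calls (the bool version could run the cleanup twice if two goroutines raced past the check). The same pattern in isolation, with a counter standing in for the real `pipe.BreakWithError` cleanup:

```go
package main

import (
	"fmt"
	"sync"
)

type body struct {
	closeOnce sync.Once
	closes    int // counts how many times the cleanup actually ran
}

// Close is idempotent: sync.Once guarantees the cleanup function runs
// exactly once, even when Close is called repeatedly or from multiple
// goroutines concurrently.
func (b *body) Close() error {
	b.closeOnce.Do(func() {
		b.closes++ // stands in for b.pipe.BreakWithError(errClosedBody)
	})
	return nil
}

func main() {
	b := &body{}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			b.Close()
		}()
	}
	wg.Wait()
	fmt.Println(b.closes) // 1
}
```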


@@ -2189,7 +2189,7 @@ func Faccessat(dirfd int, path string, mode uint32, flags int) (err error) {
 		gid = Getgid()
 	}

-	if uint32(gid) == st.Gid || isGroupMember(gid) {
+	if uint32(gid) == st.Gid || isGroupMember(int(st.Gid)) {
 		fmode = (st.Mode >> 3) & 7
 	} else {
 		fmode = st.Mode & 7

vendor/modules.txt (10 changes, vendored)

@@ -5,7 +5,7 @@ cloud.google.com/go/internal
 cloud.google.com/go/internal/optional
 cloud.google.com/go/internal/trace
 cloud.google.com/go/internal/version
-# cloud.google.com/go/compute v1.5.0
+# cloud.google.com/go/compute v1.6.0
 ## explicit; go 1.15
 cloud.google.com/go/compute/metadata
 # cloud.google.com/go/iam v0.3.0
@@ -34,7 +34,7 @@ github.com/VictoriaMetrics/metricsql/binaryop
 # github.com/VividCortex/ewma v1.2.0
 ## explicit; go 1.12
 github.com/VividCortex/ewma
-# github.com/aws/aws-sdk-go v1.43.37
+# github.com/aws/aws-sdk-go v1.43.41
 ## explicit; go 1.11
 github.com/aws/aws-sdk-go/aws
 github.com/aws/aws-sdk-go/aws/arn
@@ -268,7 +268,7 @@ go.opencensus.io/trace/tracestate
 go.uber.org/atomic
 # go.uber.org/goleak v1.1.11-0.20210813005559-691160354723
 ## explicit; go 1.13
-# golang.org/x/net v0.0.0-20220412020605-290c469a71a5
+# golang.org/x/net v0.0.0-20220418201149-a630d4f3e7a2
 ## explicit; go 1.17
 golang.org/x/net/context
 golang.org/x/net/context/ctxhttp
@@ -293,7 +293,7 @@ golang.org/x/oauth2/jwt
 # golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
 ## explicit
 golang.org/x/sync/errgroup
-# golang.org/x/sys v0.0.0-20220412071739-889880a91fd5
+# golang.org/x/sys v0.0.0-20220412211240-33da011f77ad
 ## explicit; go 1.17
 golang.org/x/sys/internal/unsafeheader
 golang.org/x/sys/unix
@@ -341,7 +341,7 @@ google.golang.org/appengine/internal/socket
 google.golang.org/appengine/internal/urlfetch
 google.golang.org/appengine/socket
 google.golang.org/appengine/urlfetch
-# google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac
+# google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4
 ## explicit; go 1.15
 google.golang.org/genproto/googleapis/api/annotations
 google.golang.org/genproto/googleapis/iam/v1