Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git, synced 2024-12-11 14:53:49 +00:00

Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

Commit 307034fc2f: 266 changed files with 12130 additions and 7555 deletions

README.md (60 lines changed)
@@ -265,6 +265,52 @@ VictoriaMetrics also supports [importing data in Prometheus exposition format](#

See also [vmagent](https://docs.victoriametrics.com/vmagent.html), which can be used as a drop-in replacement for Prometheus.

## How to send data from DataDog agent

VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/) or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/) via the ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at the `/datadog/api/v1/series` path.

Run DataDog agent with the `DD_DD_URL=http://victoriametrics-host:8428/datadog` environment variable in order to write data to VictoriaMetrics at the `victoriametrics-host` host. Another option is to set the `dd_url` param in the [DataDog agent configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files/) to `http://victoriametrics-host:8428/datadog`.

Example of sending data to VictoriaMetrics via the DataDog "submit metrics" API from the command line:

```bash
echo '
{
  "series": [
    {
      "host": "test.example.com",
      "interval": 20,
      "metric": "system.load.1",
      "points": [[
        0,
        0.5
      ]],
      "tags": [
        "environment:test"
      ],
      "type": "rate"
    }
  ]
}
' | curl -X POST --data-binary @- http://localhost:8428/datadog/api/v1/series
```

The imported data can be read via the [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format):

```bash
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
```

This command should return the following output if everything is OK:

```
{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
```

Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add the `{foo="bar"}` label to all the ingested metrics.
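The same ingestion request can also be issued programmatically. Below is a minimal Go sketch (an editor's illustration, not part of these docs) that posts the series from the curl example above to a local single-node VictoriaMetrics and attaches an extra label via the `extra_label` query arg:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Same payload as the curl example above.
	payload := []byte(`{"series":[{"host":"test.example.com","interval":20,
		"metric":"system.load.1","points":[[0,0.5]],
		"tags":["environment:test"],"type":"rate"}]}`)

	// extra_label=foo=bar attaches {foo="bar"} to every ingested metric.
	url := "http://localhost:8428/datadog/api/v1/series?extra_label=foo=bar"
	resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 202 {"status":"ok"}
}
```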
## How to send data from InfluxDB-compatible agents such as [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/)

Use the `http://<victoriametric-addr>:8428` URL instead of the InfluxDB URL in agents' configs.
@@ -479,7 +525,7 @@ By default, VictoriaMetrics returns time series for the last 5 minutes from `/ap

Additionally, VictoriaMetrics provides the following handlers:

* `/vmui` - Basic Web UI
* `/vmui` - Basic Web UI. See [these docs](#vmui).
* `/api/v1/series/count` - returns the total number of time series in the database. Some notes:
  * the handler scans the entire inverted index, so it can be slow if the database contains tens of millions of time series;
  * the handler may count [deleted time series](#how-to-delete-time-series) in addition to normal time series due to internal implementation restrictions;
@@ -550,6 +596,15 @@ VictoriaMetrics supports the following handlers from [Graphite Tags API](https:/

* [/tags/delSeries](https://graphite.readthedocs.io/en/stable/tags.html#removing-series-from-the-tagdb)

## vmui

VictoriaMetrics provides a UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:

* Drag the graph to the left / right in order to move the displayed time range into the past / future.
* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.

## How to build from sources

We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or
@@ -790,6 +845,7 @@ The exported CSV data can be imported to VictoriaMetrics via [/api/v1/import/csv

Time series data can be imported via any supported ingestion protocol:

* [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). See [these docs](#prometheus-setup) for details.
* DataDog `submit metrics` API. See [these docs](#how-to-send-data-from-datadog-agent) for details.
* InfluxDB line protocol. See [these docs](#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for details.
* Graphite plaintext protocol. See [these docs](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) for details.
* OpenTSDB telnet put protocol. See [these docs](#sending-data-via-telnet-put-protocol) for details.
@@ -1650,7 +1706,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li

     The maximum size of scrape response in bytes to process from Prometheus targets. Bigger responses are rejected
     Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 16777216)
  -promscrape.noStaleMarkers
     Whether to disable seding Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. See also https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode
     Whether to disable sending Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. See also https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode
  -promscrape.openstackSDCheckInterval duration
     Interval for checking for changes in openstack API server. This works only if openstack_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config for details (default 30s)
  -promscrape.seriesLimitPerTarget int
@@ -19,6 +19,7 @@

["{TIME_S-110s}","3"],
["{TIME_S-100s}","3"],
["{TIME_S-90s}","3"],
["{TIME_S-80s}","3"],
["{TIME_S-60s}","2"],
["{TIME_S-50s}","2"],
["{TIME_S-40s}","2"],
@@ -17,10 +17,12 @@ to `vmagent` such as the ability to push metrics instead of pulling them. We did

## Features

* Can be used as a drop-in replacement for Prometheus for scraping targets such as [node_exporter](https://github.com/prometheus/node_exporter).
  See [Quick Start](#quick-start) for details.
* Can be used as a drop-in replacement for Prometheus for scraping targets such as [node_exporter](https://github.com/prometheus/node_exporter). See [Quick Start](#quick-start) for details.
* Can read data from Kafka. See [these docs](#reading-metrics-from-kafka).
* Can write data to Kafka. See [these docs](#writing-metrics-to-kafka).
* Can add, remove and modify labels (aka tags) via Prometheus relabeling. Can filter data before sending it to remote storage. See [these docs](#relabeling) for details.
* Accepts data via all ingestion protocols supported by VictoriaMetrics:
  * DataDog "submit metrics" API. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent).
  * InfluxDB line protocol via `http://<vmagent>:8429/write`. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf).
  * Graphite plaintext protocol if `-graphiteListenAddr` command-line flag is set. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-graphite-compatible-agents-such-as-statsd).
  * OpenTSDB telnet and http protocols if `-opentsdbListenAddr` command-line flag is set. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-opentsdb-compatible-agents).
@@ -520,6 +522,108 @@ It may be useful to perform `vmagent` rolling update without any scrape loss.

      regex: true
```

## Kafka integration

[Enterprise version](https://victoriametrics.com/enterprise.html) of `vmagent` can read and write metrics from / to Kafka:

* [Reading metrics from Kafka](#reading-metrics-from-kafka)
* [Writing metrics to Kafka](#writing-metrics-to-kafka)

The enterprise version of vmagent is available for evaluation at the [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) page in `vmutils-*-enterprise.tar.gz` archives and in [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags) with tags containing the `enterprise` suffix.

### Reading metrics from Kafka

[Enterprise version](https://victoriametrics.com/enterprise.html) of `vmagent` can read metrics in various formats from Kafka messages. These formats can be configured with the `-kafka.consumer.topic.defaultFormat` or `-kafka.consumer.topic.format` command-line options. The following formats are supported:

* `promremotewrite` - [Prometheus remote_write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). Messages in this format can be sent by vmagent - see [these docs](#writing-metrics-to-kafka).
* `influx` - [InfluxDB line protocol format](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/).
* `prometheus` - [Prometheus text exposition format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) and [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md).
* `graphite` - [Graphite plaintext format](https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol).
* `jsonline` - [JSON line format](https://docs.victoriametrics.com/#how-to-import-data-in-json-line-format).

Every Kafka message may contain multiple lines in `influx`, `prometheus`, `graphite` and `jsonline` formats, delimited by `\n`.

`vmagent` consumes messages from Kafka topics specified by the `-kafka.consumer.topic` command-line flag. Multiple topics can be specified by passing multiple `-kafka.consumer.topic` command-line flags to `vmagent`.

`vmagent` consumes messages from Kafka brokers specified by the `-kafka.consumer.topic.brokers` command-line flag. Multiple brokers can be specified per each `-kafka.consumer.topic` by passing a list of brokers delimited by `;`. For example, `-kafka.consumer.topic.brokers=host1:9092;host2:9092`.

The following command starts `vmagent`, which reads metrics in InfluxDB line protocol format from the Kafka broker at `localhost:9092` from the topic `metrics-by-telegraf` and sends them to remote storage at `http://localhost:8428/api/v1/write`:

```bash
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
       -kafka.consumer.topic.brokers=localhost:9092 \
       -kafka.consumer.topic.format=influx \
       -kafka.consumer.topic=metrics-by-telegraf \
       -kafka.consumer.topic.groupID=some-id
```

It is expected that [Telegraf](https://github.com/influxdata/telegraf) sends metrics to the `metrics-by-telegraf` topic with the following config:

```yaml
[[outputs.kafka]]
  brokers = ["localhost:9092"]
  topic = "metrics-by-telegraf"
  data_format = "influx"
```
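For testing without Telegraf, a message with several `\n`-delimited InfluxDB lines can be produced to the topic directly. The sketch below uses the third-party `github.com/segmentio/kafka-go` client (an assumption; any Kafka producer works) and mirrors the broker and topic names from the command above:

```go
package main

import (
	"context"
	"log"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	w := &kafka.Writer{
		Addr:  kafka.TCP("localhost:9092"),
		Topic: "metrics-by-telegraf",
	}
	defer w.Close()

	// One Kafka message may carry multiple metric lines separated by \n.
	payload := "system.load.1,host=test.example.com value=0.5\n" +
		"system.load.5,host=test.example.com value=0.4"
	if err := w.WriteMessages(context.Background(), kafka.Message{Value: []byte(payload)}); err != nil {
		log.Fatal(err)
	}
}
```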
#### Command-line flags for Kafka consumer

These command-line flags are available only in the [enterprise](https://victoriametrics.com/enterprise.html) version of `vmagent`, which can be downloaded for evaluation from the [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) page (see `vmutils-*-enterprise.tar.gz` archives) and from [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags) with tags containing the `enterprise` suffix.

```
  -kafka.consumer.topic array
     Kafka topic names for data consumption.
     Supports an array of values separated by comma or specified via multiple flags.
  -kafka.consumer.topic.basicAuth.password array
     Optional basic auth password for -kafka.consumer.topic. Must be used in conjunction with any supported auth methods for kafka client, specified by flag -kafka.consumer.topic.options='security.protocol=SASL_SSL;sasl.mechanisms=PLAIN'
     Supports an array of values separated by comma or specified via multiple flags.
  -kafka.consumer.topic.basicAuth.username array
     Optional basic auth username for -kafka.consumer.topic. Must be used in conjunction with any supported auth methods for kafka client, specified by flag -kafka.consumer.topic.options='security.protocol=SASL_SSL;sasl.mechanisms=PLAIN'
     Supports an array of values separated by comma or specified via multiple flags.
  -kafka.consumer.topic.brokers array
     List of brokers to connect for the given topic, e.g. -kafka.consumer.topic.brokers=host-1:9092;host-2:9092
     Supports an array of values separated by comma or specified via multiple flags.
  -kafka.consumer.topic.defaultFormat string
     Expected data format in the topic if -kafka.consumer.topic.format is skipped. (default "promremotewrite")
  -kafka.consumer.topic.format array
     Data format for the corresponding kafka topic. Valid formats: influx, prometheus, promremotewrite, graphite, jsonline
     Supports an array of values separated by comma or specified via multiple flags.
  -kafka.consumer.topic.groupID array
     Defines group.id for topic
     Supports an array of values separated by comma or specified via multiple flags.
  -kafka.consumer.topic.isGzipped array
     Enables gzip setting for topic messages payload. Only prometheus, jsonline and influx formats accept gzipped messages.
     Supports an array of values separated by comma or specified via multiple flags.
  -kafka.consumer.topic.options array
     Optional key=value;key1=value2 settings for topic consumer. See full configuration options at https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md.
     Supports an array of values separated by comma or specified via multiple flags.
```
### Writing metrics to Kafka

[Enterprise version](https://victoriametrics.com/enterprise.html) of `vmagent` writes data to Kafka with `at-least-once` semantics if `-remoteWrite.url` contains a Kafka url. For example, if `vmagent` is started with `-remoteWrite.url=kafka://localhost:9092/?topic=prom-rw`, then it would send Prometheus remote_write messages to the Kafka bootstrap server at `localhost:9092` with the topic `prom-rw`. These messages can be read later from Kafka by another `vmagent` - see [these docs](#reading-metrics-from-kafka) for details.

Additional Kafka options can be passed as query params to `-remoteWrite.url`. For instance, `kafka://localhost:9092/?topic=prom-rw&client.id=my-favorite-id` sets the `client.id` Kafka option to `my-favorite-id`. The full list of Kafka options is available [here](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).
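A quick way to see how such a URL decomposes is to parse it with the standard library: the host becomes the bootstrap server, `topic` selects the topic, and the remaining query params map onto librdkafka options. This is a sketch for illustration, not vmagent's actual parsing code:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, err := url.Parse("kafka://localhost:9092/?topic=prom-rw&client.id=my-favorite-id")
	if err != nil {
		panic(err)
	}
	q := u.Query()
	fmt.Println("bootstrap server:", u.Host)      // localhost:9092
	fmt.Println("topic:", q.Get("topic"))         // prom-rw
	fmt.Println("client.id:", q.Get("client.id")) // my-favorite-id
}
```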
#### Kafka broker authorization and authentication

Two types of auth are supported:

* SASL with username and password:

```bash
./bin/vmagent -remoteWrite.url='kafka://localhost:9092/?topic=prom-rw&security.protocol=SASL_SSL&sasl.mechanisms=PLAIN' -remoteWrite.basicAuth.username=user -remoteWrite.basicAuth.password=password
```

* TLS certificates:

```bash
./bin/vmagent -remoteWrite.url='kafka://localhost:9092/?topic=prom-rw&security.protocol=SSL' -remoteWrite.tlsCAFile=/opt/ca.pem -remoteWrite.tlsCertFile=/opt/cert.pem -remoteWrite.tlsKeyFile=/opt/key.pem
```

## How to build from sources
app/vmagent/datadog/request_handler.go (new file, 99 lines)
@@ -0,0 +1,99 @@

package datadog

import (
	"fmt"
	"net/http"
	"strings"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
	"github.com/VictoriaMetrics/metrics"
)

var (
	rowsInserted       = metrics.NewCounter(`vmagent_rows_inserted_total{type="datadog"}`)
	rowsTenantInserted = tenantmetrics.NewCounterMap(`vmagent_tenant_inserted_rows_total{type="datadog"}`)
	rowsPerInsert      = metrics.NewHistogram(`vmagent_rows_per_insert{type="datadog"}`)
)

// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
	extraLabels, err := parserCommon.GetExtraLabels(req)
	if err != nil {
		return err
	}
	return writeconcurrencylimiter.Do(func() error {
		ce := req.Header.Get("Content-Encoding")
		return parser.ParseStream(req.Body, ce, func(series []parser.Series) error {
			return insertRows(at, series, extraLabels)
		})
	})
}

func insertRows(at *auth.Token, series []parser.Series, extraLabels []prompbmarshal.Label) error {
	ctx := common.GetPushCtx()
	defer common.PutPushCtx(ctx)

	rowsTotal := 0
	tssDst := ctx.WriteRequest.Timeseries[:0]
	labels := ctx.Labels[:0]
	samples := ctx.Samples[:0]
	for i := range series {
		ss := &series[i]
		rowsTotal += len(ss.Points)
		labelsLen := len(labels)
		labels = append(labels, prompbmarshal.Label{
			Name:  "__name__",
			Value: ss.Metric,
		})
		labels = append(labels, prompbmarshal.Label{
			Name:  "host",
			Value: ss.Host,
		})
		for _, tag := range ss.Tags {
			n := strings.IndexByte(tag, ':')
			if n < 0 {
				return fmt.Errorf("cannot find ':' in tag %q", tag)
			}
			name := tag[:n]
			value := tag[n+1:]
			if name == "host" {
				name = "exported_host"
			}
			labels = append(labels, prompbmarshal.Label{
				Name:  name,
				Value: value,
			})
		}
		labels = append(labels, extraLabels...)
		samplesLen := len(samples)
		for _, pt := range ss.Points {
			samples = append(samples, prompbmarshal.Sample{
				Timestamp: pt.Timestamp(),
				Value:     pt.Value(),
			})
		}
		tssDst = append(tssDst, prompbmarshal.TimeSeries{
			Labels:  labels[labelsLen:],
			Samples: samples[samplesLen:],
		})
	}
	ctx.WriteRequest.Timeseries = tssDst
	ctx.Labels = labels
	ctx.Samples = samples
	remotewrite.PushWithAuthToken(at, &ctx.WriteRequest)
	rowsInserted.Add(rowsTotal)
	if at != nil {
		rowsTenantInserted.Get(at).Add(rowsTotal)
	}
	rowsPerInsert.Update(float64(rowsTotal))
	return nil
}
@ -11,6 +11,7 @@ import (
|
|||
"time"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/csvimport"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/datadog"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/graphite"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/influx"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/native"
|
||||
|
@@ -224,6 +225,36 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {

		influxQueryRequests.Inc()
		influxutils.WriteDatabaseNames(w)
		return true
	case "/datadog/api/v1/series":
		datadogWriteRequests.Inc()
		if err := datadog.InsertHandlerForHTTP(nil, r); err != nil {
			datadogWriteErrors.Inc()
			httpserver.Errorf(w, r, "%s", err)
			return true
		}
		// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		w.WriteHeader(202)
		fmt.Fprintf(w, `{"status":"ok"}`)
		return true
	case "/datadog/api/v1/validate":
		datadogValidateRequests.Inc()
		// See https://docs.datadoghq.com/api/latest/authentication/#validate-api-key
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		fmt.Fprintf(w, `{"valid":true}`)
		return true
	case "/datadog/api/v1/check_run":
		datadogCheckRunRequests.Inc()
		// See https://docs.datadoghq.com/api/latest/service-checks/#submit-a-service-check
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		w.WriteHeader(202)
		fmt.Fprintf(w, `{"status":"ok"}`)
		return true
	case "/datadog/intake/":
		datadogIntakeRequests.Inc()
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		fmt.Fprintf(w, `{}`)
		return true
	case "/targets":
		promscrapeTargetsRequests.Inc()
		promscrape.WriteHumanReadableTargetsStatus(w, r)
@@ -330,6 +361,35 @@ func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path stri

		influxQueryRequests.Inc()
		influxutils.WriteDatabaseNames(w)
		return true
	case "datadog/api/v1/series":
		datadogWriteRequests.Inc()
		if err := datadog.InsertHandlerForHTTP(at, r); err != nil {
			datadogWriteErrors.Inc()
			httpserver.Errorf(w, r, "%s", err)
			return true
		}
		// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
		w.WriteHeader(202)
		fmt.Fprintf(w, `{"status":"ok"}`)
		return true
	case "datadog/api/v1/validate":
		datadogValidateRequests.Inc()
		// See https://docs.datadoghq.com/api/latest/authentication/#validate-api-key
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		fmt.Fprintf(w, `{"valid":true}`)
		return true
	case "datadog/api/v1/check_run":
		datadogCheckRunRequests.Inc()
		// See https://docs.datadoghq.com/api/latest/service-checks/#submit-a-service-check
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		w.WriteHeader(202)
		fmt.Fprintf(w, `{"status":"ok"}`)
		return true
	case "datadog/intake/":
		datadogIntakeRequests.Inc()
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		fmt.Fprintf(w, `{}`)
		return true
	default:
		httpserver.Errorf(w, r, "unsupported multitenant path suffix: %q", p.Suffix)
		return true
@@ -352,10 +412,17 @@ var (

	nativeimportRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/import/native", protocol="nativeimport"}`)
	nativeimportErrors   = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/import/native", protocol="nativeimport"}`)

	influxWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/write", protocol="influx"}`)
	influxWriteErrors   = metrics.NewCounter(`vmagent_http_request_errors_total{path="/write", protocol="influx"}`)
	influxWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/influx/write", protocol="influx"}`)
	influxWriteErrors   = metrics.NewCounter(`vmagent_http_request_errors_total{path="/influx/write", protocol="influx"}`)

	influxQueryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/query", protocol="influx"}`)
	influxQueryRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/influx/query", protocol="influx"}`)

	datadogWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`)
	datadogWriteErrors   = metrics.NewCounter(`vmagent_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`)

	datadogValidateRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
	datadogCheckRunRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
	datadogIntakeRequests   = metrics.NewCounter(`vmagent_http_requests_total{path="/datadog/intake/", protocol="datadog"}`)

	promscrapeTargetsRequests      = metrics.NewCounter(`vmagent_http_requests_total{path="/targets"}`)
	promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/targets"}`)
@@ -37,10 +37,10 @@ func InsertHandler(at *auth.Token, req *http.Request) error {

}

// InsertHandlerForReader processes metrics from given reader
func InsertHandlerForReader(r io.Reader) error {
func InsertHandlerForReader(at *auth.Token, r io.Reader) error {
	return writeconcurrencylimiter.Do(func() error {
		return parser.ParseStream(r, func(tss []prompb.TimeSeries) error {
			return insertRows(nil, tss, nil)
			return insertRows(at, tss, nil)
		})
	})
}
@@ -67,7 +67,8 @@ type client struct {

	fq *persistentqueue.FastQueue
	hc *http.Client

	authCfg *promauth.Config
	sendBlock func(block []byte) bool
	authCfg   *promauth.Config

	rl rateLimiter
@@ -84,7 +85,7 @@ type client struct {

	stopCh chan struct{}
}

func newClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persistentqueue.FastQueue, concurrency int) *client {
func newHTTPClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persistentqueue.FastQueue, concurrency int) *client {
	authCfg, err := getAuthConfig(argIdx)
	if err != nil {
		logger.Panicf("FATAL: cannot initialize auth config: %s", err)
@@ -121,6 +122,11 @@ func newClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persistentqu

		},
		stopCh: make(chan struct{}),
	}
	c.sendBlock = c.sendBlockHTTP
	return c
}

func (c *client) init(argIdx, concurrency int, sanitizedURL string) {
	if bytesPerSec := rateLimit.GetOptionalArgOrDefault(argIdx, 0); bytesPerSec > 0 {
		logger.Infof("applying %d bytes per second rate limit for -remoteWrite.url=%q", bytesPerSec, sanitizedURL)
		c.rl.perSecondLimit = int64(bytesPerSec)
@@ -143,7 +149,6 @@ func newClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persistentqu

		}()
	}
	logger.Infof("initialized client for -remoteWrite.url=%q", c.sanitizedURL)
	return c
}

func (c *client) MustStop() {
@@ -237,9 +242,9 @@ func (c *client) runWorker() {

	}
}

// sendBlock returns false only if c.stopCh is closed.
// sendBlockHTTP returns false only if c.stopCh is closed.
// Otherwise it tries sending the block to remote storage indefinitely.
func (c *client) sendBlock(block []byte) bool {
func (c *client) sendBlockHTTP(block []byte) bool {
	c.rl.register(len(block), c.stopCh)
	retryDuration := time.Second
	retriesCount := 0
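The body of `sendBlockHTTP` (not shown in this hunk) keeps resending with growing pauses until the block is accepted or the client is stopped. As a rough standalone illustration of that send-until-stopped contract (a sketch, not the actual vmagent implementation):

```go
package main

import "time"

// sendUntilStopped retries send with exponential backoff capped at maxWait.
// It returns false only when stopCh is closed - the same contract the
// sendBlockHTTP comment above describes.
func sendUntilStopped(send func() bool, stopCh <-chan struct{}) bool {
	const maxWait = time.Minute
	wait := time.Second
	for {
		if send() {
			return true
		}
		select {
		case <-stopCh:
			return false
		case <-time.After(wait):
		}
		if wait *= 2; wait > maxWait {
			wait = maxWait
		}
	}
}

func main() {
	stopCh := make(chan struct{})
	_ = sendUntilStopped(func() bool { return true }, stopCh)
}
```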
@@ -3,6 +3,7 @@ package remotewrite

import (
	"flag"
	"fmt"
	"net/url"
	"strconv"
	"sync"
	"sync/atomic"
@@ -181,17 +182,21 @@ func newRemoteWriteCtxs(at *auth.Token, urls []string) []*remoteWriteCtx {

		maxInmemoryBlocks = 2
	}
	rwctxs := make([]*remoteWriteCtx, len(urls))
	for i, remoteWriteURL := range urls {
	for i, remoteWriteURLRaw := range urls {
		remoteWriteURL, err := url.Parse(remoteWriteURLRaw)
		if err != nil {
			logger.Fatalf("invalid -remoteWrite.url=%q: %s", remoteWriteURL, err)
		}
		sanitizedURL := fmt.Sprintf("%d:secret-url", i+1)
		if at != nil {
			// Construct full remote_write url for the given tenant according to https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format
			remoteWriteURL = fmt.Sprintf("%s/insert/%d:%d/prometheus/api/v1/write", remoteWriteURL, at.AccountID, at.ProjectID)
			remoteWriteURL.Path = fmt.Sprintf("%s/insert/%d:%d/prometheus/api/v1/write", remoteWriteURL.Path, at.AccountID, at.ProjectID)
			sanitizedURL = fmt.Sprintf("%s:%d:%d", sanitizedURL, at.AccountID, at.ProjectID)
		}
		if *showRemoteWriteURL {
			sanitizedURL = fmt.Sprintf("%d:%s", i+1, remoteWriteURL)
		}
		rwctxs[i] = newRemoteWriteCtx(i, remoteWriteURL, maxInmemoryBlocks, sanitizedURL)
		rwctxs[i] = newRemoteWriteCtx(i, at, remoteWriteURL, maxInmemoryBlocks, sanitizedURL)
	}
	return rwctxs
}
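To make the tenant rewrite in the `at != nil` branch concrete, here is a tiny standalone sketch of what the `Path` formatting produces for a hypothetical base URL and tenant `12:0` (illustrative values, not from the diff):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, err := url.Parse("http://vminsert:8480")
	if err != nil {
		panic(err)
	}
	accountID, projectID := uint32(12), uint32(0) // hypothetical tenant
	// Mirrors the Path rewrite in newRemoteWriteCtxs above.
	u.Path = fmt.Sprintf("%s/insert/%d:%d/prometheus/api/v1/write", u.Path, accountID, projectID)
	fmt.Println(u.String()) // http://vminsert:8480/insert/12:0/prometheus/api/v1/write
}
```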
@@ -403,17 +408,29 @@ type remoteWriteCtx struct {

	relabelMetricsDropped *metrics.Counter
}

func newRemoteWriteCtx(argIdx int, remoteWriteURL string, maxInmemoryBlocks int, sanitizedURL string) *remoteWriteCtx {
	h := xxhash.Sum64([]byte(remoteWriteURL))
	path := fmt.Sprintf("%s/persistent-queue/%d_%016X", *tmpDataPath, argIdx+1, h)
	fq := persistentqueue.MustOpenFastQueue(path, sanitizedURL, maxInmemoryBlocks, maxPendingBytesPerURL.N)
	_ = metrics.GetOrCreateGauge(fmt.Sprintf(`vmagent_remotewrite_pending_data_bytes{path=%q, url=%q}`, path, sanitizedURL), func() float64 {
func newRemoteWriteCtx(argIdx int, at *auth.Token, remoteWriteURL *url.URL, maxInmemoryBlocks int, sanitizedURL string) *remoteWriteCtx {
	// strip query params, otherwise changing params resets pq
	pqURL := *remoteWriteURL
	pqURL.RawQuery = ""
	pqURL.Fragment = ""
	h := xxhash.Sum64([]byte(pqURL.String()))
	queuePath := fmt.Sprintf("%s/persistent-queue/%d_%016X", *tmpDataPath, argIdx+1, h)
	fq := persistentqueue.MustOpenFastQueue(queuePath, sanitizedURL, maxInmemoryBlocks, maxPendingBytesPerURL.N)
	_ = metrics.GetOrCreateGauge(fmt.Sprintf(`vmagent_remotewrite_pending_data_bytes{path=%q, url=%q}`, queuePath, sanitizedURL), func() float64 {
		return float64(fq.GetPendingBytes())
	})
	_ = metrics.GetOrCreateGauge(fmt.Sprintf(`vmagent_remotewrite_pending_inmemory_blocks{path=%q, url=%q}`, path, sanitizedURL), func() float64 {
	_ = metrics.GetOrCreateGauge(fmt.Sprintf(`vmagent_remotewrite_pending_inmemory_blocks{path=%q, url=%q}`, queuePath, sanitizedURL), func() float64 {
		return float64(fq.GetInmemoryQueueLen())
	})
	c := newClient(argIdx, remoteWriteURL, sanitizedURL, fq, *queues)
	var c *client
	switch remoteWriteURL.Scheme {
	case "http", "https":
		c = newHTTPClient(argIdx, remoteWriteURL.String(), sanitizedURL, fq, *queues)
	default:
		logger.Fatalf("unsupported scheme: %s for remoteWriteURL: %s, want `http`, `https`", remoteWriteURL.Scheme, sanitizedURL)
	}
	c.init(argIdx, *queues, sanitizedURL)

	sf := significantFigures.GetOptionalArgOrDefault(argIdx, 0)
	rd := roundDigits.GetOptionalArgOrDefault(argIdx, 100)
	pssLen := *queues
@@ -432,7 +449,7 @@ func newRemoteWriteCtx(argIdx int, remoteWriteURL string, maxInmemoryBlocks int,

		c:   c,
		pss: pss,

		relabelMetricsDropped: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_relabel_metrics_dropped_total{path=%q, url=%q}`, path, sanitizedURL)),
		relabelMetricsDropped: metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_relabel_metrics_dropped_total{path=%q, url=%q}`, queuePath, sanitizedURL)),
	}
}
@@ -4,7 +4,7 @@

Supported storage systems for backups:

* [GCS](https://cloud.google.com/storage/). Example: `gcs://<bucket>/<path/to/backup>`
* [GCS](https://cloud.google.com/storage/). Example: `gs://<bucket>/<path/to/backup>`
* [S3](https://aws.amazon.com/s3/). Example: `s3://<bucket>/<path/to/backup>`
* Any S3-compatible storage such as [MinIO](https://github.com/minio/minio), [Ceph](https://docs.ceph.com/docs/mimic/radosgw/s3/) or [Swift](https://www.swiftstack.com/docs/admin/middleware/s3_middleware.html). See [these docs](#advanced-usage) for details.
* Local filesystem. Example: `fs://</absolute/path/to/backup>`
@@ -30,7 +30,7 @@ creation of hourly, daily, weekly and monthly backups.

Regular backup can be performed with the following command:

```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/new/backup>
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gs://<bucket>/<path/to/new/backup>
```

* `</path/to/victoria-metrics-data>` - path to VictoriaMetrics data pointed by `-storageDataPath` command-line flag in single-node VictoriaMetrics or in cluster `vmstorage`.
@@ -46,7 +46,7 @@ If the destination GCS bucket already contains the previous backup at `-origin`

with the following command:

```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/new/backup> -origin=gcs://<bucket>/<path/to/existing/backup>
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gs://<bucket>/<path/to/new/backup> -origin=gs://<bucket>/<path/to/existing/backup>
```

It saves time and network bandwidth costs by performing server-side copy for the shared data from the `-origin` to `-dst`.
@@ -58,7 +58,7 @@ Incremental backups performed if `-dst` points to an already existing backup. In

It saves time and network bandwidth costs when working with big backups:

```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/existing/backup>
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gs://<bucket>/<path/to/existing/backup>
```
@@ -69,16 +69,16 @@ Smart backups mean storing full daily backups into `YYYYMMDD` folders and creati

* Run the following command every hour:

```
vmbackup -snapshotName=<latest-snapshot> -dst=gcs://<bucket>/latest
vmbackup -snapshotName=<latest-snapshot> -dst=gs://<bucket>/latest
```

Where `<latest-snapshot>` is the latest [snapshot](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-work-with-snapshots).
The command will upload only changed data to `gcs://<bucket>/latest`.
The command will upload only changed data to `gs://<bucket>/latest`.

* Run the following command once a day:

```
vmbackup -snapshotName=<daily-snapshot> -dst=gcs://<bucket>/<YYYYMMDD> -origin=gcs://<bucket>/latest
vmbackup -snapshotName=<daily-snapshot> -dst=gs://<bucket>/<YYYYMMDD> -origin=gs://<bucket>/latest
```

Where `<daily-snapshot>` is the snapshot for the last day `<YYYYMMDD>`.
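If the daily command is driven from a small wrapper rather than cron templating, the `<YYYYMMDD>` folder name is just a date format away. A minimal Go sketch (bucket name hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	bucket := "my-backups"                     // hypothetical bucket name
	day := time.Now().UTC().Format("20060102") // YYYYMMDD, e.g. 20211001
	fmt.Printf("-dst=gs://%s/%s -origin=gs://%s/latest\n", bucket, day, bucket)
}
```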
@@ -183,7 +183,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-

  -customS3Endpoint string
     Custom S3 endpoint for use with S3-compatible storages (e.g. MinIO). S3 is used if not set
  -dst string
     Where to put the backup on the remote storage. Example: gcs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir
     Where to put the backup on the remote storage. Example: gs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir
     -dst can point to the previous backup. In this case incremental backup is performed, i.e. only changed data is uploaded
  -envflag.enable
     Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set. See https://docs.victoriametrics.com/#environment-variables for more details
@@ -25,7 +25,7 @@ var (

	snapshotDeleteURL = flag.String("snapshot.deleteURL", "", "VictoriaMetrics delete snapshot url. Optional. Will be generated from -snapshot.createURL if not provided. "+
		"All created snapshots will be automatically deleted. Example: http://victoriametrics:8428/snapshot/delete")
	dst = flag.String("dst", "", "Where to put the backup on the remote storage. "+
		"Example: gcs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir\n"+
		"Example: gs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir\n"+
		"-dst can point to the previous backup. In this case incremental backup is performed, i.e. only changed data is uploaded")
	origin      = flag.String("origin", "", "Optional origin directory on the remote storage with old backup for server-side copying when performing full backup. This speeds up full backups")
	concurrency = flag.Int("concurrency", 10, "The number of concurrent workers. Higher concurrency may reduce backup duration")
@@ -76,7 +76,7 @@ Backup manager launched with the following configuration:

```console
export NODE_IP=192.168.0.10
export VMSTORAGE_ENDPOINT=http://127.0.0.1:8428
./vmbackupmanager -dst=gcs://vmstorage-data/$NODE_IP -credsFilePath=credentials.json -storageDataPath=/vmstorage-data -snapshot.createURL=$VMSTORAGE_ENDPOINT/snapshot/create -eula
./vmbackupmanager -dst=gs://vmstorage-data/$NODE_IP -credsFilePath=credentials.json -storageDataPath=/vmstorage-data -snapshot.createURL=$VMSTORAGE_ENDPOINT/snapshot/create -eula
```

Expected logs in vmbackupmanager:
@@ -125,7 +125,7 @@ We enable backup retention policy for backup manager by using following configur

```console
export NODE_IP=192.168.0.10
export VMSTORAGE_ENDPOINT=http://127.0.0.1:8428
./vmbackupmanager -dst=gcs://vmstorage-data/$NODE_IP -credsFilePath=credentials.json -storageDataPath=/vmstorage-data -snapshot.createURL=$VMSTORAGE_ENDPOINT/snapshot/create
./vmbackupmanager -dst=gs://vmstorage-data/$NODE_IP -credsFilePath=credentials.json -storageDataPath=/vmstorage-data -snapshot.createURL=$VMSTORAGE_ENDPOINT/snapshot/create
  -keepLastDaily=3 -eula
```
app/vminsert/datadog/request_handler.go (new file, 97 lines)
@@ -0,0 +1,97 @@

package datadog

import (
	"fmt"
	"net/http"
	"strings"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
	"github.com/VictoriaMetrics/metrics"
)

var (
	rowsInserted  = metrics.NewCounter(`vm_rows_inserted_total{type="datadog"}`)
	rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="datadog"}`)
)

// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(req *http.Request) error {
	extraLabels, err := parserCommon.GetExtraLabels(req)
	if err != nil {
		return err
	}
	return writeconcurrencylimiter.Do(func() error {
		ce := req.Header.Get("Content-Encoding")
		err := parser.ParseStream(req.Body, ce, func(series []parser.Series) error {
			return insertRows(series, extraLabels)
		})
		if err != nil {
			return fmt.Errorf("headers: %q; err: %w", req.Header, err)
		}
		return nil
	})
}

func insertRows(series []parser.Series, extraLabels []prompbmarshal.Label) error {
	ctx := common.GetInsertCtx()
	defer common.PutInsertCtx(ctx)

	rowsLen := 0
	for i := range series {
		rowsLen += len(series[i].Points)
	}
	ctx.Reset(rowsLen)
	rowsTotal := 0
	hasRelabeling := relabel.HasRelabeling()
	for i := range series {
		ss := &series[i]
		rowsTotal += len(ss.Points)
		ctx.Labels = ctx.Labels[:0]
		ctx.AddLabel("", ss.Metric)
		ctx.AddLabel("host", ss.Host)
		for _, tag := range ss.Tags {
			n := strings.IndexByte(tag, ':')
			if n < 0 {
				return fmt.Errorf("cannot find ':' in tag %q", tag)
			}
			name := tag[:n]
			value := tag[n+1:]
			if name == "host" {
				name = "exported_host"
			}
			ctx.AddLabel(name, value)
		}
		for j := range extraLabels {
			label := &extraLabels[j]
			ctx.AddLabel(label.Name, label.Value)
		}
		if hasRelabeling {
			ctx.ApplyRelabeling()
		}
		if len(ctx.Labels) == 0 {
			// Skip metric without labels.
			continue
		}
		ctx.SortLabelsIfNeeded()
		var metricNameRaw []byte
		var err error
		for _, pt := range ss.Points {
			timestamp := pt.Timestamp()
			value := pt.Value()
			metricNameRaw, err = ctx.WriteDataPointExt(metricNameRaw, ctx.Labels, timestamp, value)
			if err != nil {
				return err
			}
		}
	}
	rowsInserted.Add(rowsTotal)
	rowsPerInsert.Update(float64(rowsTotal))
	return ctx.FlushBufs()
}
@@ -9,6 +9,7 @@ import (

	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/csvimport"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/datadog"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/graphite"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/influx"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/native"
@@ -155,6 +156,36 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {

		influxQueryRequests.Inc()
		influxutils.WriteDatabaseNames(w)
		return true
	case "/datadog/api/v1/series":
		datadogWriteRequests.Inc()
		if err := datadog.InsertHandlerForHTTP(r); err != nil {
			datadogWriteErrors.Inc()
			httpserver.Errorf(w, r, "%s", err)
			return true
		}
		// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		w.WriteHeader(202)
		fmt.Fprintf(w, `{"status":"ok"}`)
		return true
	case "/datadog/api/v1/validate":
		datadogValidateRequests.Inc()
		// See https://docs.datadoghq.com/api/latest/authentication/#validate-api-key
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		fmt.Fprintf(w, `{"valid":true}`)
		return true
	case "/datadog/api/v1/check_run":
		datadogCheckRunRequests.Inc()
		// See https://docs.datadoghq.com/api/latest/service-checks/#submit-a-service-check
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		w.WriteHeader(202)
		fmt.Fprintf(w, `{"status":"ok"}`)
		return true
	case "/datadog/intake/":
		datadogIntakeRequests.Inc()
		w.Header().Set("Content-Type", "application/json; charset=utf-8")
		fmt.Fprintf(w, `{}`)
		return true
	case "/prometheus/targets", "/targets":
		promscrapeTargetsRequests.Inc()
		promscrape.WriteHumanReadableTargetsStatus(w, r)
@@ -204,10 +235,17 @@ var (

	nativeimportRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/import/native", protocol="nativeimport"}`)
	nativeimportErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/import/native", protocol="nativeimport"}`)

	influxWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/write", protocol="influx"}`)
	influxWriteErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/write", protocol="influx"}`)
	influxWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/influx/write", protocol="influx"}`)
	influxWriteErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/influx/write", protocol="influx"}`)

	influxQueryRequests = metrics.NewCounter(`vm_http_requests_total{path="/query", protocol="influx"}`)
	influxQueryRequests = metrics.NewCounter(`vm_http_requests_total{path="/influx/query", protocol="influx"}`)

	datadogWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/series", protocol="datadog"}`)
	datadogWriteErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/datadog/api/v1/series", protocol="datadog"}`)

	datadogValidateRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/validate", protocol="datadog"}`)
	datadogCheckRunRequests = metrics.NewCounter(`vm_http_requests_total{path="/datadog/api/v1/check_run", protocol="datadog"}`)
	datadogIntakeRequests   = metrics.NewCounter(`vm_http_requests_total{path="/datadog/intake/", protocol="datadog"}`)

	promscrapeTargetsRequests      = metrics.NewCounter(`vm_http_requests_total{path="/targets"}`)
	promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/targets"}`)
@@ -16,7 +16,7 @@ import (

var (
	relabelConfig = flag.String("relabelConfig", "", "Optional path to a file with relabeling rules, which are applied to all the ingested metrics. "+
		"See https://docs.victoriametrics.com/#relabeling for details")
		"See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal")
	relabelDebug = flag.Bool("relabelDebug", false, "Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, "+
		"then the metrics aren't sent to storage. This is useful for debugging the relabeling configs")
)
@@ -12,7 +12,7 @@ when restarting `vmrestore` with the same args.

VictoriaMetrics must be stopped during the restore process.

```
vmrestore -src=gcs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>
vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>
```
@@ -116,7 +116,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q

  -skipBackupCompleteCheck
     Whether to skip checking for 'backup complete' file in -src. This may be useful for restoring from old backups, which were created without 'backup complete' file
  -src string
     Source path with backup on the remote storage. Example: gcs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir
     Source path with backup on the remote storage. Example: gs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir
  -storageDataPath string
     Destination path where backup must be restored. VictoriaMetrics must be stopped when restoring from backup. -storageDataPath dir can be non-empty. In this case the contents of -storageDataPath dir is synchronized with -src contents, i.e. it works like 'rsync --delete' (default "victoria-metrics-data")
  -version
@@ -16,7 +16,7 @@ import (

var (
	src = flag.String("src", "", "Source path with backup on the remote storage. "+
		"Example: gcs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir")
		"Example: gs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir")
	storageDataPath = flag.String("storageDataPath", "victoria-metrics-data", "Destination path where backup must be restored. "+
		"VictoriaMetrics must be stopped when restoring from backup. -storageDataPath dir can be non-empty. In this case the contents of -storageDataPath dir "+
		"is synchronized with -src contents, i.e. it works like 'rsync --delete'")
@@ -42,17 +42,12 @@ type Result struct {

	// Values are sorted by Timestamps.
	Values     []float64
	Timestamps []int64

	// Marshaled MetricName. Used only for results sorting
	// in app/vmselect/promql
	MetricNameMarshaled []byte
}

func (r *Result) reset() {
	r.MetricName.Reset()
	r.Values = r.Values[:0]
	r.Timestamps = r.Timestamps[:0]
	r.MetricNameMarshaled = r.MetricNameMarshaled[:0]
}

// Results holds results returned from ProcessSearchQuery.
@@ -11,7 +11,6 @@ import (

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
	"github.com/VictoriaMetrics/metrics"
	"github.com/VictoriaMetrics/metricsql"
	"github.com/valyala/histogram"
)

var aggrFuncs = map[string]aggrFunc{
@@ -40,10 +39,12 @@ var aggrFuncs = map[string]aggrFunc{

	"topk_max":       newAggrFuncRangeTopK(maxValue, false),
	"topk_avg":       newAggrFuncRangeTopK(avgValue, false),
	"topk_median":    newAggrFuncRangeTopK(medianValue, false),
	"topk_last":      newAggrFuncRangeTopK(lastValue, false),
	"bottomk_min":    newAggrFuncRangeTopK(minValue, true),
	"bottomk_max":    newAggrFuncRangeTopK(maxValue, true),
	"bottomk_avg":    newAggrFuncRangeTopK(avgValue, true),
	"bottomk_median": newAggrFuncRangeTopK(medianValue, true),
	"bottomk_last":   newAggrFuncRangeTopK(lastValue, true),
	"any":            aggrFuncAny,
	"mad":            newAggrFunc(aggrFuncMAD),
	"outliers_mad":   aggrFuncOutliersMAD,
@@ -801,15 +802,81 @@ func avgValue(values []float64) float64 {

}

func medianValue(values []float64) float64 {
	h := histogram.GetFast()
	for _, v := range values {
		if !math.IsNaN(v) {
			h.Update(v)
		}
	}
	value := h.Quantile(0.5)
	histogram.PutFast(h)
	return value
}

func medianValue(values []float64) float64 {
	return quantile(0.5, values)
}

func lastValue(values []float64) float64 {
	values = skipTrailingNaNs(values)
	if len(values) == 0 {
		return nan
	}
	return values[len(values)-1]
}

// quantiles calculates the given phis from originValues without modifying originValues, appends them to qs and returns the result.
func quantiles(qs, phis []float64, originValues []float64) []float64 {
	a := getFloat64s()
	a.A = prepareForQuantileFloat64(a.A[:0], originValues)
	qs = quantilesSorted(qs, phis, a.A)
	putFloat64s(a)
	return qs
}

// quantile calculates the given phi from originValues without modifying originValues
func quantile(phi float64, originValues []float64) float64 {
	a := getFloat64s()
	a.A = prepareForQuantileFloat64(a.A[:0], originValues)
	q := quantileSorted(phi, a.A)
	putFloat64s(a)
	return q
}

// prepareForQuantileFloat64 copies items from src to dst but removes NaNs and sorts the dst
func prepareForQuantileFloat64(dst, src []float64) []float64 {
	for _, v := range src {
		if math.IsNaN(v) {
			continue
		}
		dst = append(dst, v)
	}
	sort.Float64s(dst)
	return dst
}

// quantilesSorted calculates the given phis over a sorted list of values, appends them to qs and returns the result.
//
// It is expected that values won't contain NaN items.
// The implementation mimics Prometheus implementation for compatibility's sake.
func quantilesSorted(qs, phis []float64, values []float64) []float64 {
	for _, phi := range phis {
		q := quantileSorted(phi, values)
		qs = append(qs, q)
	}
	return qs
}

// quantileSorted calculates the given quantile over a sorted list of values.
//
// It is expected that values won't contain NaN items.
// The implementation mimics Prometheus implementation for compatibility's sake.
func quantileSorted(phi float64, values []float64) float64 {
	if len(values) == 0 || math.IsNaN(phi) {
		return nan
	}
	if phi < 0 {
		return math.Inf(-1)
	}
	if phi > 1 {
		return math.Inf(+1)
	}
	n := float64(len(values))
	rank := phi * (n - 1)

	lowerIndex := math.Max(0, math.Floor(rank))
	upperIndex := math.Min(n-1, lowerIndex+1)

	weight := rank - math.Floor(rank)
	return values[int(lowerIndex)]*(1-weight) + values[int(upperIndex)]*weight
}

func aggrFuncMAD(tss []*timeseries) []*timeseries {
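As a quick sanity check of the interpolation above: for the sorted values `[1, 2, 3, 4]` and `phi = 0.5`, the rank is `0.5*(4-1) = 1.5`, so the result interpolates between `values[1] = 2` and `values[2] = 3` with weight 0.5, giving 2.5. A standalone sketch (an editor's illustration, not part of the diff) reproducing that arithmetic:

```go
package main

import (
	"fmt"
	"math"
)

// quantileSorted reproduces the core interpolation from the diff above
// (the phi < 0 / phi > 1 guards are omitted in this sketch).
func quantileSorted(phi float64, values []float64) float64 {
	if len(values) == 0 || math.IsNaN(phi) {
		return math.NaN()
	}
	n := float64(len(values))
	rank := phi * (n - 1)
	lower := math.Floor(rank)
	upper := math.Min(n-1, lower+1)
	weight := rank - lower
	return values[int(lower)]*(1-weight) + values[int(upper)]*weight
}

func main() {
	// rank = 0.5*(4-1) = 1.5 -> interpolate between 2 and 3 with weight 0.5.
	fmt.Println(quantileSorted(0.5, []float64{1, 2, 3, 4})) // 2.5
}
```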
@@ -889,36 +956,40 @@ func getPerPointMedians(tss []*timeseries) []float64 {

		logger.Panicf("BUG: expecting non-empty tss")
	}
	medians := make([]float64, len(tss[0].Values))
	h := histogram.GetFast()
	a := getFloat64s()
	values := a.A
	for n := range medians {
		h.Reset()
		values = values[:0]
		for j := range tss {
			v := tss[j].Values[n]
			if !math.IsNaN(v) {
				h.Update(v)
				values = append(values, v)
			}
		}
		medians[n] = h.Quantile(0.5)
		medians[n] = quantile(0.5, values)
	}
	histogram.PutFast(h)
	a.A = values
	putFloat64s(a)
	return medians
}

func getPerPointMADs(tss []*timeseries, medians []float64) []float64 {
	mads := make([]float64, len(medians))
	h := histogram.GetFast()
	a := getFloat64s()
	values := a.A
	for n, median := range medians {
		h.Reset()
		values = values[:0]
		for j := range tss {
			v := tss[j].Values[n]
			if !math.IsNaN(v) {
				ad := math.Abs(v - median)
				h.Update(ad)
				values = append(values, ad)
			}
		}
		mads[n] = h.Quantile(0.5)
		mads[n] = quantile(0.5, values)
	}
	histogram.PutFast(h)
	a.A = values
	putFloat64s(a)
	return mads
}
@@ -987,22 +1058,25 @@ func aggrFuncQuantiles(afa *aggrFuncArg) ([]*timeseries, error) {

		ts.MetricName.AddTag(dstLabel, fmt.Sprintf("%g", phis[j]))
		tssDst[j] = ts
	}
	h := histogram.GetFast()
	defer histogram.PutFast(h)
	var qs []float64

	b := getFloat64s()
	qs := b.A
	a := getFloat64s()
	values := a.A
	for n := range tss[0].Values {
		h.Reset()
		values = values[:0]
		for j := range tss {
			v := tss[j].Values[n]
			if !math.IsNaN(v) {
				h.Update(v)
			}
			values = append(values, tss[j].Values[n])
		}
		qs = h.Quantiles(qs[:0], phis)
		qs = quantiles(qs[:0], phis, values)
		for j := range tssDst {
			tssDst[j].Values[n] = qs[j]
		}
	}
	a.A = values
	putFloat64s(a)
	b.A = qs
	putFloat64s(b)
	return tssDst
}
return aggrFuncExt(afe, argOrig, &afa.ae.Modifier, afa.ae.Limit, false)
@@ -1034,19 +1108,17 @@ func aggrFuncMedian(afa *aggrFuncArg) ([]*timeseries, error) {

func newAggrQuantileFunc(phis []float64) func(tss []*timeseries, modifier *metricsql.ModifierExpr) []*timeseries {
	return func(tss []*timeseries, modifier *metricsql.ModifierExpr) []*timeseries {
		dst := tss[0]
		h := histogram.GetFast()
		defer histogram.PutFast(h)
		a := getFloat64s()
		values := a.A
		for n := range dst.Values {
			h.Reset()
			values = values[:0]
			for j := range tss {
				v := tss[j].Values[n]
				if !math.IsNaN(v) {
					h.Update(v)
				}
				values = append(values, tss[j].Values[n])
			}
			phi := phis[n]
			dst.Values[n] = h.Quantile(phi)
			dst.Values[n] = quantile(phis[n], values)
		}
		a.A = values
		putFloat64s(a)
		tss[0] = dst
		return tss[:1]
	}
@@ -59,12 +59,12 @@ func newBinaryOpCmpFunc(cf func(left, right float64) bool) binaryOpFunc {
 			}
 			return nan
 		}
-		if cf(left, right) {
-			return 1
-		}
+		if math.IsNaN(left) {
+			return nan
+		}
+		if cf(left, right) {
+			return 1
+		}
 		return 0
 	}
 	return newBinaryOpFunc(cfe)
@@ -303,12 +303,18 @@ func binaryOpAnd(bfa *binaryOpFuncArg) ([]*timeseries, error) {
 		if tssLeft == nil {
 			continue
 		}
-		// Add gaps to tssLeft if there are gaps at valuesRight.
-		valuesRight := tssRight[0].Values
+		// Add gaps to tssLeft if there are gaps at tssRight.
 		for _, tsLeft := range tssLeft {
 			valuesLeft := tsLeft.Values
-			for i, v := range valuesRight {
-				if math.IsNaN(v) {
+			for i := range valuesLeft {
+				hasValue := false
+				for _, tsRight := range tssRight {
+					if !math.IsNaN(tsRight.Values[i]) {
+						hasValue = true
+						break
+					}
+				}
+				if !hasValue {
 					valuesLeft[i] = nan
 				}
 			}
@@ -333,12 +339,18 @@ func binaryOpOr(bfa *binaryOpFuncArg) ([]*timeseries, error) {
 	}
 	// Fill gaps in tssLeft with values from tssRight as Prometheus does.
 	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/552
-	valuesRight := tssRight[0].Values
 	for _, tsLeft := range tssLeft {
 		valuesLeft := tsLeft.Values
 		for i, v := range valuesLeft {
-			if math.IsNaN(v) {
-				valuesLeft[i] = valuesRight[i]
+			if !math.IsNaN(v) {
+				continue
 			}
+			for _, tsRight := range tssRight {
+				vRight := tsRight.Values[i]
+				if !math.IsNaN(vRight) {
+					valuesLeft[i] = vRight
+					break
+				}
+			}
 		}
 	}
@@ -355,13 +367,15 @@ func binaryOpUnless(bfa *binaryOpFuncArg) ([]*timeseries, error) {
 		rvs = append(rvs, tssLeft...)
 		continue
 	}
-	// Add gaps to tssLeft if the are no gaps at valuesRight.
-	valuesRight := tssRight[0].Values
+	// Add gaps to tssLeft if the are no gaps at tssRight.
 	for _, tsLeft := range tssLeft {
 		valuesLeft := tsLeft.Values
-		for i, v := range valuesRight {
-			if !math.IsNaN(v) {
-				valuesLeft[i] = nan
+		for i := range valuesLeft {
+			for _, tsRight := range tssRight {
+				if !math.IsNaN(tsRight.Values[i]) {
+					valuesLeft[i] = nan
+					break
+				}
 			}
 		}
 	}
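The three `and`/`or`/`unless` hunks above switch from inspecting only `tssRight[0]` to inspecting every matching right-hand series per point. A toy illustration of the new per-point rule for `and` (NaN marks a gap; names are illustrative, not the VictoriaMetrics API):

```go
package main

import (
	"fmt"
	"math"
)

var nan = math.NaN()

// anyValue reports whether at least one right-hand series has a value at index i.
func anyValue(right [][]float64, i int) bool {
	for _, vs := range right {
		if !math.IsNaN(vs[i]) {
			return true
		}
	}
	return false
}

func main() {
	left := []float64{1, 2, 3}
	right := [][]float64{
		{nan, 10, nan},
		{nan, nan, 20},
	}
	// "and": keep left points only where some right series has a value.
	and := append([]float64(nil), left...)
	for i := range and {
		if !anyValue(right, i) {
			and[i] = nan
		}
	}
	fmt.Println(and) // [NaN 2 3]
}
```

Under the old code only the first right-hand series would be consulted, so index 2 above would have been dropped even though the second right-hand series has a value there.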
@@ -13,6 +13,7 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
 	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/querystats"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
 	"github.com/VictoriaMetrics/metrics"
 	"github.com/VictoriaMetrics/metricsql"
 )
@@ -80,8 +81,8 @@ func maySortResults(e metricsql.Expr, tss []*timeseries) bool {
 	case *metricsql.AggrFuncExpr:
 		switch strings.ToLower(v.Name) {
 		case "topk", "bottomk", "outliersk",
-			"topk_max", "topk_min", "topk_avg", "topk_median",
-			"bottomk_max", "bottomk_min", "bottomk_avg", "bottomk_median":
+			"topk_max", "topk_min", "topk_avg", "topk_median", "topk_last",
+			"bottomk_max", "bottomk_min", "bottomk_avg", "bottomk_median", "bottomk_last":
 			return false
 		}
 	}
@@ -101,7 +102,6 @@ func timeseriesToResult(tss []*timeseries, maySort bool) ([]netstorage.Result, e
 	m[string(bb.B)] = struct{}{}

 	rs := &result[i]
-	rs.MetricNameMarshaled = append(rs.MetricNameMarshaled[:0], bb.B...)
 	rs.MetricName.CopyFrom(&ts.MetricName)
 	rs.Values = append(rs.Values[:0], ts.Values...)
 	rs.Timestamps = append(rs.Timestamps[:0], ts.Timestamps...)
@@ -110,13 +110,39 @@ func timeseriesToResult(tss []*timeseries, maySort bool) ([]netstorage.Result, e

 	if maySort {
 		sort.Slice(result, func(i, j int) bool {
-			return string(result[i].MetricNameMarshaled) < string(result[j].MetricNameMarshaled)
+			return metricNameLess(&result[i].MetricName, &result[j].MetricName)
 		})
 	}

 	return result, nil
 }

+func metricNameLess(a, b *storage.MetricName) bool {
+	if string(a.MetricGroup) != string(b.MetricGroup) {
+		return string(a.MetricGroup) < string(b.MetricGroup)
+	}
+	// Metric names for a and b match. Compare tags.
+	// Tags must be already sorted by the caller, so just compare them.
+	ats := a.Tags
+	bts := b.Tags
+	for i := range ats {
+		if i >= len(bts) {
+			// a contains more tags than b and all the previous tags were identical,
+			// so a is considered bigger than b.
+			return false
+		}
+		at := &ats[i]
+		bt := &bts[i]
+		if string(at.Key) != string(bt.Key) {
+			return string(at.Key) < string(bt.Key)
+		}
+		if string(at.Value) != string(bt.Value) {
+			return string(at.Value) < string(bt.Value)
+		}
+	}
+	return len(ats) < len(bts)
+}
+
 func removeNaNs(tss []*timeseries) []*timeseries {
 	rvs := tss[:0]
 	for _, ts := range tss {
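`metricNameLess` replaces byte-wise comparison of marshaled names with a human-friendly order: metric group first, then tag key/value pairs in sequence, with the shorter tag list winning ties. A compact sketch of the same rule (hypothetical types):

```go
package main

import "fmt"

type tag struct{ k, v string }

// less mirrors the ordering above: group, then pairwise tags, then tag count.
func less(aGroup string, a []tag, bGroup string, b []tag) bool {
	if aGroup != bGroup {
		return aGroup < bGroup
	}
	for i := range a {
		if i >= len(b) {
			return false // a has extra tags, so it sorts after b
		}
		if a[i].k != b[i].k {
			return a[i].k < b[i].k
		}
		if a[i].v != b[i].v {
			return a[i].v < b[i].v
		}
	}
	return len(a) < len(b)
}

func main() {
	fmt.Println(less("mem", []tag{{"instance", "localhost:1000"}}, "mem", []tag{{"instance", "localhost:1001"}})) // true
	fmt.Println(less("up", nil, "up", []tag{{"job", "vm"}}))                                                     // true: fewer tags sorts first
}
```

The new `result sorting` test further down exercises exactly this ordering.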
@@ -1,6 +1,7 @@
 package promql

 import (
 	"math"
 	"testing"
 	"time"
@@ -2169,6 +2170,28 @@ func TestExecSuccess(t *testing.T) {
 		resultExpected := []netstorage.Result{r}
 		f(q, resultExpected)
 	})
+	t.Run(`nan!=bool scalar`, func(t *testing.T) {
+		t.Parallel()
+		q := `(time() > 1234) !=bool 1400`
+		r := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{nan, nan, 0, 1, 1, 1},
+			Timestamps: timestampsExpected,
+		}
+		resultExpected := []netstorage.Result{r}
+		f(q, resultExpected)
+	})
+	t.Run(`scalar!=bool nan`, func(t *testing.T) {
+		t.Parallel()
+		q := `1400 !=bool (time() > 1234)`
+		r := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{nan, nan, 0, 1, 1, 1},
+			Timestamps: timestampsExpected,
+		}
+		resultExpected := []netstorage.Result{r}
+		f(q, resultExpected)
+	})
 	t.Run(`scalar > time()`, func(t *testing.T) {
 		t.Parallel()
 		q := `123 > time()`
@@ -4620,7 +4643,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `quantile_over_time(0.9, label_set(round(rand(0), 0.01), "__name__", "foo", "xx", "yy")[200s:5s])`
 		r := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{0.89, 0.89, 0.95, 0.87, 0.92, 0.89},
+			Values:     []float64{0.893, 0.892, 0.9510000000000001, 0.8730000000000001, 0.9250000000000002, 0.891},
 			Timestamps: timestampsExpected,
 		}
 		r.MetricName.MetricGroup = []byte("foo")
@@ -4643,7 +4666,7 @@ func TestExecSuccess(t *testing.T) {
 		)`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{0.47, 0.57, 0.49, 0.54, 0.56, 0.53},
+			Values:     []float64{0.46499999999999997, 0.57, 0.485, 0.54, 0.555, 0.515},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.MetricGroup = []byte("foo")
@@ -4659,7 +4682,7 @@ func TestExecSuccess(t *testing.T) {
 		}
 		r2 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{0.89, 0.89, 0.95, 0.87, 0.92, 0.89},
+			Values:     []float64{0.893, 0.892, 0.9510000000000001, 0.8730000000000001, 0.9250000000000002, 0.891},
 			Timestamps: timestampsExpected,
 		}
 		r2.MetricName.MetricGroup = []byte("foo")
@@ -5098,7 +5121,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `sort(topk_min(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss")))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{10, 10, 10, nan, nan, nan},
+			Values:     []float64{10, 10, 10, 10, 10, 10},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
@@ -5113,7 +5136,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `sort(bottomk_min(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss")))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{nan, nan, nan, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10.666666666666666, 12, 13.333333333333334},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
@@ -5128,7 +5151,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `topk_max(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{nan, nan, nan, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10.666666666666666, 12, 13.333333333333334},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
@@ -5143,7 +5166,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `sort_desc(topk_max(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"), "remaining_sum=foo"))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{nan, nan, nan, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10.666666666666666, 12, 13.333333333333334},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
@@ -5169,7 +5192,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `sort_desc(topk_max(2, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"), "remaining_sum"))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{nan, nan, nan, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10.666666666666666, 12, 13.333333333333334},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
@@ -5195,7 +5218,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `sort_desc(topk_max(3, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"), "remaining_sum"))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{nan, nan, nan, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10.666666666666666, 12, 13.333333333333334},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
@@ -5221,7 +5244,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `sort(bottomk_max(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss")))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{10, 10, 10, nan, nan, nan},
+			Values:     []float64{10, 10, 10, 10, 10, 10},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
@@ -5236,7 +5259,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `sort(topk_avg(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss")))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{nan, nan, nan, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10.666666666666666, 12, 13.333333333333334},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
@@ -5266,7 +5289,22 @@ func TestExecSuccess(t *testing.T) {
 		q := `sort(topk_median(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss")))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{nan, nan, nan, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10.666666666666666, 12, 13.333333333333334},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
+			Key:   []byte("baz"),
+			Value: []byte("sss"),
+		}}
+		resultExpected := []netstorage.Result{r1}
+		f(q, resultExpected)
+	})
+	t.Run(`topk_last(1)`, func(t *testing.T) {
+		t.Parallel()
+		q := `sort(topk_last(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss")))`
+		r1 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10.666666666666666, 12, 13.333333333333334},
+			Timestamps: timestampsExpected,
+		}
+		r1.MetricName.Tags = []storage.Tag{{
@@ -5278,10 +5316,25 @@ func TestExecSuccess(t *testing.T) {
 	})
 	t.Run(`bottomk_median(1)`, func(t *testing.T) {
 		t.Parallel()
-		q := `sort(bottomk_median(1, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss")))`
+		q := `sort(bottomk_median(1, label_set(10, "foo", "bar") or label_set(time()/15, "baz", "sss")))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{10, 10, 10, nan, nan, nan},
+			Values:     []float64{10, 10, 10, 10, 10, 10},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
+			Key:   []byte("foo"),
+			Value: []byte("bar"),
+		}}
+		resultExpected := []netstorage.Result{r1}
+		f(q, resultExpected)
+	})
+	t.Run(`bottomk_last(1)`, func(t *testing.T) {
+		t.Parallel()
+		q := `sort(bottomk_last(1, label_set(10, "foo", "bar") or label_set(time()/15, "baz", "sss")))`
+		r1 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{10, 10, 10, 10, 10, 10},
+			Timestamps: timestampsExpected,
+		}
+		r1.MetricName.Tags = []storage.Tag{{
@@ -5459,7 +5512,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `distinct_over_time((time() < 1700)[500s])`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{3, 3, 3, 3, nan, nan},
+			Values:     []float64{3, 3, 3, 3, 2, 1},
 			Timestamps: timestampsExpected,
 		}
 		resultExpected := []netstorage.Result{r1}
@@ -5470,7 +5523,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `distinct_over_time((time() < 1700)[2.5i])`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{3, 3, 3, 3, nan, nan},
+			Values:     []float64{3, 3, 3, 3, 2, 1},
 			Timestamps: timestampsExpected,
 		}
 		resultExpected := []netstorage.Result{r1}
@@ -5532,9 +5585,10 @@ func TestExecSuccess(t *testing.T) {
 	t.Run(`quantile(-2)`, func(t *testing.T) {
 		t.Parallel()
 		q := `quantile(-2, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"))`
+		inf := math.Inf(-1)
 		r := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10, 10, 10},
+			Values:     []float64{inf, inf, inf, inf, inf, inf},
 			Timestamps: timestampsExpected,
 		}
 		resultExpected := []netstorage.Result{r}
@@ -5545,7 +5599,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `quantile(0.2, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"))`
 		r := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10, 10, 10},
+			Values:     []float64{7.333333333333334, 8.4, 9.466666666666669, 10.133333333333333, 10.4, 10.666666666666668},
 			Timestamps: timestampsExpected,
 		}
 		resultExpected := []netstorage.Result{r}
@@ -5556,7 +5610,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `quantile(0.5, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"))`
 		r := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{10, 10, 10, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{8.333333333333334, 9, 9.666666666666668, 10.333333333333332, 11, 11.666666666666668},
 			Timestamps: timestampsExpected,
 		}
 		resultExpected := []netstorage.Result{r}
@@ -5567,7 +5621,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `sort(quantiles("phi", 0.2, 0.5, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss")))`
 		r1 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{6.666666666666667, 8, 9.333333333333334, 10, 10, 10},
+			Values:     []float64{7.333333333333334, 8.4, 9.466666666666669, 10.133333333333333, 10.4, 10.666666666666668},
 			Timestamps: timestampsExpected,
 		}
 		r1.MetricName.Tags = []storage.Tag{{
@@ -5576,7 +5630,7 @@ func TestExecSuccess(t *testing.T) {
 		}}
 		r2 := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{10, 10, 10, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{8.333333333333334, 9, 9.666666666666668, 10.333333333333332, 11, 11.666666666666668},
 			Timestamps: timestampsExpected,
 		}
 		r2.MetricName.Tags = []storage.Tag{{
@@ -5591,7 +5645,7 @@ func TestExecSuccess(t *testing.T) {
 		q := `median(label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"))`
 		r := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{10, 10, 10, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{8.333333333333334, 9, 9.666666666666668, 10.333333333333332, 11, 11.666666666666668},
 			Timestamps: timestampsExpected,
 		}
 		resultExpected := []netstorage.Result{r}
@@ -5611,9 +5665,10 @@ func TestExecSuccess(t *testing.T) {
 	t.Run(`quantile(3)`, func(t *testing.T) {
 		t.Parallel()
 		q := `quantile(3, label_set(10, "foo", "bar") or label_set(time()/150, "baz", "sss"))`
+		inf := math.Inf(+1)
 		r := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{10, 10, 10, 10.666666666666666, 12, 13.333333333333334},
+			Values:     []float64{inf, inf, inf, inf, inf, inf},
 			Timestamps: timestampsExpected,
 		}
 		resultExpected := []netstorage.Result{r}
@@ -5725,7 +5780,8 @@ func TestExecSuccess(t *testing.T) {
 		q := `range_quantile(0.5, time())`
 		r := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{1600, 1600, 1600, 1600, 1600, 1600},
+			// time() results in [1000 1200 1400 1600 1800 2000]
+			Values:     []float64{1500, 1500, 1500, 1500, 1500, 1500},
 			Timestamps: timestampsExpected,
 		}
 		resultExpected := []netstorage.Result{r}
@@ -5736,7 +5792,8 @@ func TestExecSuccess(t *testing.T) {
 		q := `range_median(time())`
 		r := netstorage.Result{
 			MetricName: metricNameExpected,
-			Values:     []float64{1600, 1600, 1600, 1600, 1600, 1600},
+			// time() results in [1000 1200 1400 1600 1800 2000]
+			Values:     []float64{1500, 1500, 1500, 1500, 1500, 1500},
 			Timestamps: timestampsExpected,
 		}
 		resultExpected := []netstorage.Result{r}
@@ -6461,6 +6518,39 @@ func TestExecSuccess(t *testing.T) {
 		resultExpected := []netstorage.Result{r1, r2, r3}
 		f(q, resultExpected)
 	})
+	t.Run(`rollup_scrape_interval()`, func(t *testing.T) {
+		t.Parallel()
+		q := `sort_by_label(rollup_scrape_interval(1[5m:10s]), "rollup")`
+		r1 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{10, 10, 10, 10, 10, 10},
+			Timestamps: timestampsExpected,
+		}
+		r1.MetricName.Tags = []storage.Tag{{
+			Key:   []byte("rollup"),
+			Value: []byte("avg"),
+		}}
+		r2 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{10, 10, 10, 10, 10, 10},
+			Timestamps: timestampsExpected,
+		}
+		r2.MetricName.Tags = []storage.Tag{{
+			Key:   []byte("rollup"),
+			Value: []byte("max"),
+		}}
+		r3 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{10, 10, 10, 10, 10, 10},
+			Timestamps: timestampsExpected,
+		}
+		r3.MetricName.Tags = []storage.Tag{{
+			Key:   []byte("rollup"),
+			Value: []byte("min"),
+		}}
+		resultExpected := []netstorage.Result{r1, r2, r3}
+		f(q, resultExpected)
+	})
 	t.Run(`rollup()`, func(t *testing.T) {
 		t.Parallel()
 		q := `sort(rollup(time()[:50s]))`
@@ -6954,7 +7044,8 @@ func TestExecSuccess(t *testing.T) {
 			Value: []byte("10"),
 		},
 	}
-	resultExpected := []netstorage.Result{r1, r2, r3, r4}
+	// expected sorted output for strings 1, 10, 2, 3
+	resultExpected := []netstorage.Result{r1, r4, r2, r3}
 	f(q, resultExpected)
 })
 t.Run(`count_values without (baz)`, func(t *testing.T) {
@@ -7008,6 +7099,44 @@ func TestExecSuccess(t *testing.T) {
 		resultExpected := []netstorage.Result{r1, r2, r3}
 		f(q, resultExpected)
 	})
+	t.Run(`result sorting`, func(t *testing.T) {
+		t.Parallel()
+		q := `label_set(1, "instance", "localhost:1001", "type", "free")
+			or label_set(1, "instance", "localhost:1001", "type", "buffers")
+			or label_set(1, "instance", "localhost:1000", "type", "buffers")
+			or label_set(1, "instance", "localhost:1000", "type", "free")
+		`
+		r1 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{1, 1, 1, 1, 1, 1},
+			Timestamps: timestampsExpected,
+		}
+		testAddLabels(t, &r1.MetricName,
+			"instance", "localhost:1000", "type", "buffers")
+		r2 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{1, 1, 1, 1, 1, 1},
+			Timestamps: timestampsExpected,
+		}
+		testAddLabels(t, &r2.MetricName,
+			"instance", "localhost:1000", "type", "free")
+		r3 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{1, 1, 1, 1, 1, 1},
+			Timestamps: timestampsExpected,
+		}
+		testAddLabels(t, &r3.MetricName,
+			"instance", "localhost:1001", "type", "buffers")
+		r4 := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{1, 1, 1, 1, 1, 1},
+			Timestamps: timestampsExpected,
+		}
+		testAddLabels(t, &r4.MetricName,
+			"instance", "localhost:1001", "type", "free")
+		resultExpected := []netstorage.Result{r1, r2, r3, r4}
+		f(q, resultExpected)
+	})
 }

 func TestExecError(t *testing.T) {
@@ -7092,12 +7221,14 @@ func TestExecError(t *testing.T) {
 	f(`topk_max()`)
 	f(`topk_avg()`)
 	f(`topk_median()`)
+	f(`topk_last()`)
 	f(`limitk()`)
 	f(`bottomk()`)
 	f(`bottomk_min()`)
 	f(`bottomk_max()`)
 	f(`bottomk_avg()`)
 	f(`bottomk_median()`)
+	f(`bottomk_last()`)
 	f(`time(123)`)
 	f(`start(1)`)
 	f(`end(1)`)
@@ -7285,3 +7416,16 @@ func testMetricNamesEqual(t *testing.T, mn, mnExpected *storage.MetricName, pos
 		}
 	}
 }
+
+func testAddLabels(t *testing.T, mn *storage.MetricName, labels ...string) {
+	t.Helper()
+	if len(labels)%2 > 0 {
+		t.Fatalf("uneven number of labels passed: %v", labels)
+	}
+	for i := 0; i < len(labels); i += 2 {
+		mn.Tags = append(mn.Tags, storage.Tag{
+			Key:   []byte(labels[i]),
+			Value: []byte(labels[i+1]),
+		})
+	}
+}
@@ -12,7 +12,6 @@ import (
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
 	"github.com/VictoriaMetrics/metrics"
 	"github.com/VictoriaMetrics/metricsql"
-	"github.com/valyala/histogram"
 )

 var minStalenessInterval = flag.Duration("search.minStalenessInterval", 0, "The minimum interval for staleness calculations. "+
@@ -46,44 +45,45 @@ var rollupFuncs = map[string]newRollupFunc{
 	"last_over_time": newRollupFuncOneArg(rollupLast),

 	// Additional rollup funcs.
 	"default_rollup": newRollupFuncOneArg(rollupDefault), // default rollup func
 	"range_over_time": newRollupFuncOneArg(rollupRange),
 	"sum2_over_time": newRollupFuncOneArg(rollupSum2),
 	"geomean_over_time": newRollupFuncOneArg(rollupGeomean),
 	"first_over_time": newRollupFuncOneArg(rollupFirst),
 	"distinct_over_time": newRollupFuncOneArg(rollupDistinct),
 	"increases_over_time": newRollupFuncOneArg(rollupIncreases),
 	"decreases_over_time": newRollupFuncOneArg(rollupDecreases),
 	"increase_pure": newRollupFuncOneArg(rollupIncreasePure), // + rollupFuncsRemoveCounterResets
 	"integrate": newRollupFuncOneArg(rollupIntegrate),
 	"ideriv": newRollupFuncOneArg(rollupIderiv),
 	"lifetime": newRollupFuncOneArg(rollupLifetime),
 	"lag": newRollupFuncOneArg(rollupLag),
 	"scrape_interval": newRollupFuncOneArg(rollupScrapeInterval),
 	"tmin_over_time": newRollupFuncOneArg(rollupTmin),
 	"tmax_over_time": newRollupFuncOneArg(rollupTmax),
 	"tfirst_over_time": newRollupFuncOneArg(rollupTfirst),
 	"tlast_over_time": newRollupFuncOneArg(rollupTlast),
 	"share_le_over_time": newRollupShareLE,
 	"share_gt_over_time": newRollupShareGT,
 	"count_le_over_time": newRollupCountLE,
 	"count_gt_over_time": newRollupCountGT,
 	"count_eq_over_time": newRollupCountEQ,
 	"count_ne_over_time": newRollupCountNE,
 	"histogram_over_time": newRollupFuncOneArg(rollupHistogram),
 	"rollup": newRollupFuncOneArg(rollupFake),
 	"rollup_rate": newRollupFuncOneArg(rollupFake), // + rollupFuncsRemoveCounterResets
 	"rollup_deriv": newRollupFuncOneArg(rollupFake),
 	"rollup_delta": newRollupFuncOneArg(rollupFake),
 	"rollup_increase": newRollupFuncOneArg(rollupFake), // + rollupFuncsRemoveCounterResets
 	"rollup_candlestick": newRollupFuncOneArg(rollupFake),
+	"rollup_scrape_interval": newRollupFuncOneArg(rollupFake),
 	"aggr_over_time": newRollupFuncTwoArgs(rollupFake),
 	"hoeffding_bound_upper": newRollupHoeffdingBoundUpper,
 	"hoeffding_bound_lower": newRollupHoeffdingBoundLower,
 	"ascent_over_time": newRollupFuncOneArg(rollupAscentOverTime),
 	"descent_over_time": newRollupFuncOneArg(rollupDescentOverTime),
 	"zscore_over_time": newRollupFuncOneArg(rollupZScoreOverTime),
 	"quantiles_over_time": newRollupQuantiles,

 	// `timestamp` function must return timestamp for the last datapoint on the current window
 	// in order to properly handle offset and timestamps unaligned to the current step.
@@ -145,35 +145,26 @@ var rollupAggrFuncs = map[string]rollupFunc{
 	"rate_over_sum": rollupRateOverSum,
 }

-var rollupFuncsCannotAdjustWindow = map[string]bool{
-	"changes": true,
-	"delta": true,
-	"holt_winters": true,
-	"idelta": true,
-	"increase": true,
-	"predict_linear": true,
-	"resets": true,
-	"avg_over_time": true,
-	"sum_over_time": true,
-	"count_over_time": true,
-	"quantile_over_time": true,
-	"quantiles_over_time": true,
-	"stddev_over_time": true,
-	"stdvar_over_time": true,
-	"absent_over_time": true,
-	"present_over_time": true,
-	"sum2_over_time": true,
-	"geomean_over_time": true,
-	"distinct_over_time": true,
-	"increases_over_time": true,
-	"decreases_over_time": true,
-	"increase_pure": true,
-	"integrate": true,
-	"ascent_over_time": true,
-	"descent_over_time": true,
-	"zscore_over_time": true,
-	"first_over_time": true,
-	"last_over_time": true,
+// VictoriaMetrics can increase lookbehind window in square brackets for these functions
+// if the given window doesn't contain enough samples for calculations.
+//
+// This is needed in order to return the expected non-empty graphs when zooming in the graph in Grafana,
+// which is built with `func_name(metric[$__interval])` query.
+var rollupFuncsCanAdjustWindow = map[string]bool{
+	"default_rollup": true,
+	"deriv": true,
+	"deriv_fast": true,
+	"ideriv": true,
+	"irate": true,
+	"rate": true,
+	"rate_over_sum": true,
+	"rollup": true,
+	"rollup_candlestick": true,
+	"rollup_deriv": true,
+	"rollup_rate": true,
+	"rollup_scrape_interval": true,
+	"scrape_interval": true,
+	"timestamp": true,
 }

 var rollupFuncsRemoveCounterResets = map[string]bool{
|
|||
End: end,
|
||||
Step: step,
|
||||
Window: window,
|
||||
MayAdjustWindow: !rollupFuncsCannotAdjustWindow[name],
|
||||
MayAdjustWindow: rollupFuncsCanAdjustWindow[name],
|
||||
LookbackDelta: lookbackDelta,
|
||||
Timestamps: sharedTimestamps,
|
||||
isDefaultRollup: name == "default_rollup",
|
||||
|
@ -320,6 +311,24 @@ func getRollupConfigs(name string, rf rollupFunc, expr metricsql.Expr, start, en
|
|||
rcs = append(rcs, newRollupConfig(rollupClose, "close"))
|
||||
rcs = append(rcs, newRollupConfig(rollupLow, "low"))
|
||||
rcs = append(rcs, newRollupConfig(rollupHigh, "high"))
|
||||
case "rollup_scrape_interval":
|
||||
preFuncPrev := preFunc
|
||||
preFunc = func(values []float64, timestamps []int64) {
|
||||
preFuncPrev(values, timestamps)
|
||||
// Calculate intervals in seconds between samples.
|
||||
tsSecsPrev := nan
|
||||
for i, ts := range timestamps {
|
||||
tsSecs := float64(ts) / 1000
|
||||
values[i] = tsSecs - tsSecsPrev
|
||||
tsSecsPrev = tsSecs
|
||||
}
|
||||
if len(values) > 1 {
|
||||
// Overwrite the first NaN interval with the second interval,
|
||||
// So min, max and avg rollups could be calculated properly, since they don't expect to receive NaNs.
|
||||
values[0] = values[1]
|
||||
}
|
||||
}
|
||||
rcs = appendRollupConfigs(rcs)
|
||||
case "aggr_over_time":
|
||||
aggrFuncNames, err := getRollupAggrFuncNames(expr)
|
||||
if err != nil {
|
||||
|
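The `rollup_scrape_interval` pre-processing above rewrites each point to the gap (in seconds) since the previous sample and backfills the first gap, so the min/max/avg rollups never see a NaN. A minimal sketch under those assumptions:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	timestamps := []int64{1000, 11000, 21000, 31000} // milliseconds
	values := make([]float64, len(timestamps))
	tsSecsPrev := math.NaN()
	for i, ts := range timestamps {
		tsSecs := float64(ts) / 1000
		values[i] = tsSecs - tsSecsPrev // NaN for the first sample
		tsSecsPrev = tsSecs
	}
	if len(values) > 1 {
		values[0] = values[1] // backfill so min/max/avg see no NaNs
	}
	fmt.Println(values) // [10 10 10 10]
}
```

This is why the new `rollup_scrape_interval()` test above expects a constant 10 for the avg, max, and min rollups of a `[5m:10s]` subquery.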
@@ -643,18 +652,20 @@ func getScrapeInterval(timestamps []int64) int64 {
 	}

 	// Estimate scrape interval as 0.6 quantile for the first 20 intervals.
-	h := histogram.GetFast()
 	tsPrev := timestamps[0]
 	timestamps = timestamps[1:]
 	if len(timestamps) > 20 {
 		timestamps = timestamps[:20]
 	}
+	a := getFloat64s()
+	intervals := a.A[:0]
 	for _, ts := range timestamps {
-		h.Update(float64(ts - tsPrev))
+		intervals = append(intervals, float64(ts-tsPrev))
 		tsPrev = ts
 	}
-	scrapeInterval := int64(h.Quantile(0.6))
-	histogram.PutFast(h)
+	scrapeInterval := int64(quantile(0.6, intervals))
+	a.A = intervals
+	putFloat64s(a)
 	if scrapeInterval <= 0 {
 		return int64(maxSilenceInterval)
 	}
@@ -842,37 +853,29 @@ func linearRegression(rfa *rollupFuncArg) (float64, float64) {
 	// before calling rollup funcs.
 	values := rfa.values
 	timestamps := rfa.timestamps
-	if len(values) == 0 {
-		return rfa.prevValue, 0
+	n := float64(len(values))
+	if n == 0 {
+		return nan, nan
+	}
+	if n == 1 {
+		return values[0], 0
 	}

 	// See https://en.wikipedia.org/wiki/Simple_linear_regression#Numerical_example
-	tFirst := rfa.prevTimestamp
-	vSum := rfa.prevValue
+	interceptTime := rfa.currTimestamp
+	vSum := float64(0)
 	tSum := float64(0)
 	tvSum := float64(0)
 	ttSum := float64(0)
-	n := 1.0
-	if math.IsNaN(rfa.prevValue) {
-		tFirst = timestamps[0]
-		vSum = 0
-		n = 0
-	}
 	for i, v := range values {
-		dt := float64(timestamps[i]-tFirst) / 1e3
+		dt := float64(timestamps[i]-interceptTime) / 1e3
 		vSum += v
 		tSum += dt
 		tvSum += dt * v
 		ttSum += dt * dt
 	}
-	n += float64(len(values))
-	if n == 1 {
-		return vSum, 0
-	}
-	k := (n*tvSum - tSum*vSum) / (n*ttSum - tSum*tSum)
-	v := (vSum - k*tSum) / n
-	// Adjust v to the last timestamp on the given time range.
-	v += k * (float64(timestamps[len(timestamps)-1]-tFirst) / 1e3)
+	k := (tvSum - tSum*vSum/n) / (ttSum - tSum*tSum/n)
+	v := vSum/n - k*tSum/n
 	return v, k
 }
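The rewritten `linearRegression` is a plain least-squares fit with the intercept evaluated directly at `rfa.currTimestamp` instead of being extrapolated from the first timestamp: k = (Σtv − Σt·Σv/n) / (Σt² − (Σt)²/n) and v = Σv/n − k·Σt/n, where t is measured in seconds relative to the intercept time. A worked sketch matching the new test case below:

```go
package main

import "fmt"

// linReg mirrors the rewritten function above: least-squares slope k and
// the value v of the fitted line at interceptTime (milliseconds).
func linReg(values []float64, timestamps []int64, interceptTime int64) (v, k float64) {
	n := float64(len(values))
	var vSum, tSum, tvSum, ttSum float64
	for i, val := range values {
		dt := float64(timestamps[i]-interceptTime) / 1e3
		vSum += val
		tSum += dt
		tvSum += dt * val
		ttSum += dt * dt
	}
	k = (tvSum - tSum*vSum/n) / (ttSum - tSum*tSum/n)
	v = vSum/n - k*tSum/n
	return v, k
}

func main() {
	// Same setup as the new TestLinearRegression case: values 1 and 2 at
	// t=100ms and t=300ms, intercept evaluated at t=200ms.
	v, k := linReg([]float64{1, 2}, []int64{100, 300}, 200)
	fmt.Println(v, k) // ≈1.5 ≈5
}
```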
@@ -1066,18 +1069,15 @@ func newRollupQuantiles(args []interface{}) (rollupFunc, error) {
 			// Fast path - only a single value.
 			return values[0]
 		}
-		hf := histogram.GetFast()
-		for _, v := range values {
-			hf.Update(v)
-		}
-		qs := hf.Quantiles(nil, phis)
-		histogram.PutFast(hf)
+		qs := getFloat64s()
+		qs.A = quantiles(qs.A[:0], phis, values)
 		idx := rfa.idx
 		tsm := rfa.tsm
 		for i, phiStr := range phiStrs {
 			ts := tsm.GetOrCreateTimeseries(phiLabel, phiStr)
-			ts.Values[idx] = qs[i]
+			ts.Values[idx] = qs.A[i]
 		}
+		putFloat64s(qs)
 		return nan
 	}
 	return rf, nil
@@ -1095,20 +1095,8 @@ func newRollupQuantile(args []interface{}) (rollupFunc, error) {
 		// There is no need in handling NaNs here, since they must be cleaned up
 		// before calling rollup funcs.
 		values := rfa.values
-		if len(values) == 0 {
-			return rfa.prevValue
-		}
-		if len(values) == 1 {
-			// Fast path - only a single value.
-			return values[0]
-		}
-		hf := histogram.GetFast()
-		for _, v := range values {
-			hf.Update(v)
-		}
 		phi := phis[rfa.idx]
-		qv := hf.Quantile(phi)
-		histogram.PutFast(hf)
+		qv := quantile(phi, values)
 		return qv
 	}
 	return rf, nil
@@ -1347,10 +1335,7 @@ func rollupCount(rfa *rollupFuncArg) float64 {
 	// before calling rollup funcs.
 	values := rfa.values
 	if len(values) == 0 {
-		if math.IsNaN(rfa.prevValue) {
-			return nan
-		}
-		return 0
+		return nan
 	}
 	return float64(len(values))
 }
@@ -1367,14 +1352,11 @@ func rollupStdvar(rfa *rollupFuncArg) float64 {
 	// before calling rollup funcs.
 	values := rfa.values
 	if len(values) == 0 {
-		if math.IsNaN(rfa.prevValue) {
-			return nan
-		}
-		return 0
+		return nan
 	}
 	if len(values) == 1 {
 		// Fast path.
-		return values[0]
+		return 0
 	}
 	var avg float64
 	var count float64
@@ -1782,19 +1764,28 @@ func rollupModeOverTime(rfa *rollupFuncArg) float64 {
 	// before calling rollup funcs.

 	// Copy rfa.values to a.A, since modeNoNaNs modifies a.A contents.
-	a := float64sPool.Get().(*float64s)
+	a := getFloat64s()
 	a.A = append(a.A[:0], rfa.values...)
 	result := modeNoNaNs(rfa.prevValue, a.A)
-	float64sPool.Put(a)
+	putFloat64s(a)
 	return result
 }

-var float64sPool = &sync.Pool{
-	New: func() interface{} {
-		return &float64s{}
-	},
+func getFloat64s() *float64s {
+	v := float64sPool.Get()
+	if v == nil {
+		v = &float64s{}
+	}
+	return v.(*float64s)
 }
+
+func putFloat64s(a *float64s) {
+	a.A = a.A[:0]
+	float64sPool.Put(a)
+}
+
+var float64sPool sync.Pool

 type float64s struct {
 	A []float64
 }
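`getFloat64s`/`putFloat64s` wrap a zero-value `sync.Pool`, dropping the `New` callback in favor of an explicit nil check; callers borrow a `*float64s`, grow its backing array, and hand it back truncated. The typical call pattern, as a self-contained sketch:

```go
package main

import (
	"fmt"
	"sync"
)

type float64s struct{ A []float64 }

var float64sPool sync.Pool

func getFloat64s() *float64s {
	v := float64sPool.Get()
	if v == nil {
		v = &float64s{}
	}
	return v.(*float64s)
}

func putFloat64s(a *float64s) {
	a.A = a.A[:0] // keep the backing array, drop the contents
	float64sPool.Put(a)
}

func main() {
	a := getFloat64s()
	values := append(a.A[:0], 3, 1, 2)
	fmt.Println(values) // [3 1 2]
	a.A = values        // hand the (possibly grown) backing array back to the pool
	putFloat64s(a)
}
```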
@@ -335,14 +335,14 @@ func TestRollupQuantileOverTime(t *testing.T) {
 		testRollupFunc(t, "quantile_over_time", args, &me, vExpected)
 	}

-	f(-123, 12)
-	f(-0.5, 12)
+	f(-123, math.Inf(-1))
+	f(-0.5, math.Inf(-1))
 	f(0, 12)
-	f(0.1, 21)
+	f(0.1, 22.1)
 	f(0.5, 34)
-	f(0.9, 99)
+	f(0.9, 94.50000000000001)
 	f(1, 123)
-	f(234, 123)
+	f(234, math.Inf(+1))
 }

 func TestRollupPredictLinear(t *testing.T) {
@@ -357,10 +357,32 @@ func TestRollupPredictLinear(t *testing.T) {
 		testRollupFunc(t, "predict_linear", args, &me, vExpected)
 	}

-	f(0e-3, 30.382432471845043)
-	f(50e-3, 17.03950235614201)
-	f(100e-3, 3.696572240438975)
-	f(200e-3, -22.989287990967092)
+	f(0e-3, 65.07405077267295)
+	f(50e-3, 51.7311206569699)
+	f(100e-3, 38.38819054126685)
+	f(200e-3, 11.702330309860756)
 }
+
+func TestLinearRegression(t *testing.T) {
+	f := func(values []float64, timestamps []int64, expV, expK float64) {
+		t.Helper()
+		rfa := &rollupFuncArg{
+			values:        values,
+			timestamps:    timestamps,
+			currTimestamp: timestamps[0] + 100,
+		}
+		v, k := linearRegression(rfa)
+		if err := compareValues([]float64{v}, []float64{expV}); err != nil {
+			t.Fatalf("unexpected v err: %s", err)
+		}
+		if err := compareValues([]float64{k}, []float64{expK}); err != nil {
+			t.Fatalf("unexpected k err: %s", err)
+		}
+	}
+
+	f([]float64{1}, []int64{1}, math.NaN(), math.NaN())
+	f([]float64{1, 2}, []int64{100, 300}, 1.5, 5)
+	f([]float64{2, 4, 6, 8, 10}, []int64{100, 200, 300, 400, 500}, 4, 20)
+}

 func TestRollupHoltWinters(t *testing.T) {
@@ -448,7 +470,7 @@ func TestRollupNewRollupFuncSuccess(t *testing.T) {
 	f("default_rollup", 34)
 	f("changes", 11)
 	f("delta", 34)
-	f("deriv", -266.85860231406065)
+	f("deriv", -266.85860231406093)
 	f("deriv_fast", -712)
 	f("idelta", 0)
 	f("increase", 398)
@@ -626,7 +648,7 @@ func TestRollupNoWindowPartialPoints(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, nan, 123, 34, nan}
+		valuesExpected := []float64{nan, nan, 123, 34, 32}
 		timestampsExpected := []int64{-50, 0, 50, 100, 150}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
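The flipped tail expectations in this and the following hunks (trailing `nan` becoming concrete values) appear to reflect the window-adjustment changes above: rollups over the last, partially-filled windows now resolve to the remaining samples instead of returning gaps. A toy model of a "last sample in the lookbehind window" rollup reproduces the new row above (illustrative only, not VictoriaMetrics' actual implementation):

```go
package main

import (
	"fmt"
	"math"
)

// rollupLast takes, for each output timestamp t, the latest sample whose
// timestamp falls in (t-window, t].
func rollupLast(samples []float64, sampleTs, outTs []int64, window int64) []float64 {
	out := make([]float64, len(outTs))
	for i, t := range outTs {
		out[i] = math.NaN()
		for j, st := range sampleTs {
			if st > t-window && st <= t {
				out[i] = samples[j]
			}
		}
	}
	return out
}

func main() {
	samples := []float64{123, 34, 32}
	sampleTs := []int64{10, 80, 150}
	fmt.Println(rollupLast(samples, sampleTs, []int64{-50, 0, 50, 100, 150}, 80))
	// [NaN NaN 123 34 32] — the trailing point now resolves to the last sample
}
```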
@@ -733,7 +755,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 123, 54, 44, nan}
+		valuesExpected := []float64{nan, 123, 54, 44, 34}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -747,7 +769,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 4, 4, 3, nan}
+		valuesExpected := []float64{nan, 4, 4, 3, 1}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -761,7 +783,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 21, 12, 32, nan}
+		valuesExpected := []float64{nan, 21, 12, 32, 34}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -775,7 +797,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 123, 99, 44, nan}
+		valuesExpected := []float64{nan, 123, 99, 44, 34}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -789,7 +811,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 222, 199, 110, nan}
+		valuesExpected := []float64{nan, 222, 199, 110, 34}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -803,7 +825,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, nan, -9, 22, nan}
+		valuesExpected := []float64{nan, 21, -9, 22, 0}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -831,7 +853,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 0.004, 0, 0, nan}
+		valuesExpected := []float64{nan, 0.004, 0, 0, 0.03}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -845,7 +867,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 0.031, 0.044, 0.04, nan}
+		valuesExpected := []float64{nan, 0.031, 0.044, 0.04, 0.01}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -859,7 +881,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 0.031, 0.075, 0.115, nan}
+		valuesExpected := []float64{nan, 0.031, 0.075, 0.115, 0.125}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -873,7 +895,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 0.010333333333333333, 0.011, 0.013333333333333334, nan}
+		valuesExpected := []float64{nan, 0.010333333333333333, 0.011, 0.013333333333333334, 0.01}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -887,7 +909,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 0.010333333333333333, 0.010714285714285714, 0.012, nan}
+		valuesExpected := []float64{nan, 0.010333333333333333, 0.010714285714285714, 0.012, 0.0125}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -901,7 +923,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 4, 4, 3, nan}
+		valuesExpected := []float64{nan, 4, 4, 3, 0}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -929,7 +951,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 2, 2, 1, nan}
+		valuesExpected := []float64{nan, 2, 2, 1, 0}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -943,7 +965,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 55.5, 49.75, 36.666666666666664, nan}
+		valuesExpected := []float64{nan, 55.5, 49.75, 36.666666666666664, 34}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -957,7 +979,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{0, -2879.310344827587, 558.0608793686595, 422.84569138276544, nan}
+		valuesExpected := []float64{nan, -2879.310344827588, 127.87627310448904, -496.5831435079728, 0}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -985,7 +1007,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, -1916.6666666666665, -43500, 400, nan}
+		valuesExpected := []float64{nan, -1916.6666666666665, -43500, 400, 0}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -999,7 +1021,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 39.81519810323691, 32.080952292598795, 5.2493385826745405, nan}
+		valuesExpected := []float64{nan, 39.81519810323691, 32.080952292598795, 5.2493385826745405, 0}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -1013,7 +1035,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 2.148, 1.593, 1.156, nan}
+		valuesExpected := []float64{nan, 2.148, 1.593, 1.156, 1.36}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -1027,7 +1049,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 4, 4, 3, nan}
+		valuesExpected := []float64{nan, 4, 4, 3, 1}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -1041,7 +1063,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 4, 7, 6, nan}
+		valuesExpected := []float64{nan, 4, 7, 6, 3}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -1055,7 +1077,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 21, 34, 34, nan}
+		valuesExpected := []float64{nan, 21, 34, 34, 34}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -1069,7 +1091,7 @@ func TestRollupFuncsNoWindow(t *testing.T) {
 		}
 		rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
 		values := rc.Do(nil, testValues, testTimestamps)
-		valuesExpected := []float64{nan, 2775, 5262.5, 3678.5714285714284, nan}
+		valuesExpected := []float64{nan, 2775, 5262.5, 3678.5714285714284, 2880}
 		timestampsExpected := []int64{0, 40, 80, 120, 160}
 		testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 	})
@@ -1105,7 +1127,7 @@ func TestRollupBigNumberOfValues(t *testing.T) {
 		srcTimestamps[i] = int64(i / 2)
 	}
 	values := rc.Do(nil, srcValues, srcTimestamps)
-	valuesExpected := []float64{1, 4001, 8001, nan, nan, nan}
+	valuesExpected := []float64{1, 4001, 8001, 9999, nan, nan}
 	timestampsExpected := []int64{0, 2000, 4000, 6000, 8000, 10000}
 	testRowsEqual(t, values, rc.Timestamps, valuesExpected, timestampsExpected)
 }
@@ -1138,6 +1160,13 @@ func testRowsEqual(t *testing.T, values []float64, timestamps []int64, valuesExp
 			}
 			continue
 		}
+		if math.IsNaN(vExpected) {
+			if !math.IsNaN(v) {
+				t.Fatalf("unexpected value at values[%d]; got %f; want nan\nvalues=\n%v\nvaluesExpected=\n%v",
+					i, v, values, valuesExpected)
+			}
+			continue
+		}
 		if math.Abs(v-vExpected) > 1e-15 {
 			t.Fatalf("unexpected value at values[%d]; got %f; want %f\nvalues=\n%v\nvaluesExpected=\n%v",
 				i, v, vExpected, values, valuesExpected)
@ -14,7 +14,6 @@ import (
|
|||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
|
||||
"github.com/VictoriaMetrics/metricsql"
|
||||
"github.com/valyala/histogram"
|
||||
)
|
||||
|
||||
var transformFuncsKeepMetricGroup = map[string]bool{
|
||||
|
@@ -1178,23 +1177,26 @@ func transformRangeQuantile(tfa *transformFuncArg) ([]*timeseries, error) {
 	}
 	phi := phis[0]
 	rvs := args[1]
-	hf := histogram.GetFast()
+	a := getFloat64s()
+	values := a.A[:0]
 	for _, ts := range rvs {
-		hf.Reset()
 		lastIdx := -1
-		values := ts.Values
-		for i, v := range values {
+		originValues := ts.Values
+		values = values[:0]
+		for i, v := range originValues {
 			if math.IsNaN(v) {
 				continue
 			}
-			hf.Update(v)
+			values = append(values, v)
 			lastIdx = i
 		}
 		if lastIdx >= 0 {
-			values[lastIdx] = hf.Quantile(phi)
+			sort.Float64s(values)
+			originValues[lastIdx] = quantileSorted(phi, values)
 		}
 	}
-	histogram.PutFast(hf)
+	a.A = values
+	putFloat64s(a)
 	setLastValues(rvs)
 	return rvs, nil
 }
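`transformRangeQuantile` now collects a series' non-NaN values, sorts them once, and writes the exact interpolated quantile into the slot of the last point, which `setLastValues` then propagates across the range. A condensed sketch of that flow:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

func main() {
	series := []float64{1000, 1200, 1400, 1600, 1800, 2000}
	values := []float64{}
	lastIdx := -1
	for i, v := range series {
		if math.IsNaN(v) {
			continue
		}
		values = append(values, v)
		lastIdx = i
	}
	if lastIdx >= 0 {
		sort.Float64s(values)
		// Interpolated phi=0.5 quantile, as quantileSorted computes it.
		phi, n := 0.5, float64(len(values))
		rank := phi * (n - 1)
		lower := math.Floor(rank)
		w := rank - lower
		series[lastIdx] = values[int(lower)]*(1-w) + values[int(math.Min(n-1, lower+1))]*w
	}
	fmt.Println(series[lastIdx]) // 1500 — cf. the updated range_quantile/range_median tests
}
```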
@@ -1,17 +1,19 @@
 {
   "files": {
-    "main.css": "./static/css/main.6452b577.chunk.css",
-    "main.js": "./static/js/main.801aa0ec.chunk.js",
-    "runtime-main.js": "./static/js/runtime-main.55798746.js",
-    "static/js/2.fd2c2c30.chunk.js": "./static/js/2.fd2c2c30.chunk.js",
-    "static/js/3.c36fc28c.chunk.js": "./static/js/3.c36fc28c.chunk.js",
+    "main.css": "./static/css/main.cbb91dd8.chunk.css",
+    "main.js": "./static/js/main.3fb3f4d1.chunk.js",
+    "runtime-main.js": "./static/js/runtime-main.22ec3f63.js",
+    "static/css/2.a684aa27.chunk.css": "./static/css/2.a684aa27.chunk.css",
+    "static/js/2.72d7cb01.chunk.js": "./static/js/2.72d7cb01.chunk.js",
+    "static/js/3.3cf0cbc4.chunk.js": "./static/js/3.3cf0cbc4.chunk.js",
     "index.html": "./index.html",
-    "static/js/2.fd2c2c30.chunk.js.LICENSE.txt": "./static/js/2.fd2c2c30.chunk.js.LICENSE.txt"
+    "static/js/2.72d7cb01.chunk.js.LICENSE.txt": "./static/js/2.72d7cb01.chunk.js.LICENSE.txt"
   },
   "entrypoints": [
-    "static/js/runtime-main.55798746.js",
-    "static/js/2.fd2c2c30.chunk.js",
-    "static/css/main.6452b577.chunk.css",
-    "static/js/main.801aa0ec.chunk.js"
+    "static/js/runtime-main.22ec3f63.js",
+    "static/css/2.a684aa27.chunk.css",
+    "static/js/2.72d7cb01.chunk.js",
+    "static/css/main.cbb91dd8.chunk.css",
+    "static/js/main.3fb3f4d1.chunk.js"
   ]
 }
@@ -1 +1 @@
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><link href="./static/css/main.6452b577.chunk.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script>!function(e){function r(r){for(var n,i,a=r[0],c=r[1],l=r[2],s=0,p=[];s<a.length;s++)i=a[s],Object.prototype.hasOwnProperty.call(o,i)&&o[i]&&p.push(o[i][0]),o[i]=0;for(n in c)Object.prototype.hasOwnProperty.call(c,n)&&(e[n]=c[n]);for(f&&f(r);p.length;)p.shift()();return u.push.apply(u,l||[]),t()}function t(){for(var e,r=0;r<u.length;r++){for(var t=u[r],n=!0,a=1;a<t.length;a++){var c=t[a];0!==o[c]&&(n=!1)}n&&(u.splice(r--,1),e=i(i.s=t[0]))}return e}var n={},o={1:0},u=[];function i(r){if(n[r])return n[r].exports;var t=n[r]={i:r,l:!1,exports:{}};return e[r].call(t.exports,t,t.exports,i),t.l=!0,t.exports}i.e=function(e){var r=[],t=o[e];if(0!==t)if(t)r.push(t[2]);else{var n=new Promise((function(r,n){t=o[e]=[r,n]}));r.push(t[2]=n);var u,a=document.createElement("script");a.charset="utf-8",a.timeout=120,i.nc&&a.setAttribute("nonce",i.nc),a.src=function(e){return i.p+"static/js/"+({}[e]||e)+"."+{3:"c36fc28c"}[e]+".chunk.js"}(e);var c=new Error;u=function(r){a.onerror=a.onload=null,clearTimeout(l);var t=o[e];if(0!==t){if(t){var n=r&&("load"===r.type?"missing":r.type),u=r&&r.target&&r.target.src;c.message="Loading chunk "+e+" failed.\n("+n+": "+u+")",c.name="ChunkLoadError",c.type=n,c.request=u,t[1](c)}o[e]=void 0}};var l=setTimeout((function(){u({type:"timeout",target:a})}),12e4);a.onerror=a.onload=u,document.head.appendChild(a)}return Promise.all(r)},i.m=e,i.c=n,i.d=function(e,r,t){i.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},i.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},i.t=function(e,r){if(1&r&&(e=i(e)),8&r)return e;if(4&r&&"object"==typeof e&&e&&e.__esModule)return e;var t=Object.create(null);if(i.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:e}),2&r&&"string"!=typeof e)for(var n in e)i.d(t,n,function(r){return e[r]}.bind(null,n));return t},i.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return i.d(r,"a",r),r},i.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},i.p="./",i.oe=function(e){throw console.error(e),e};var a=this.webpackJsonpvmui=this.webpackJsonpvmui||[],c=a.push.bind(a);a.push=r,a=a.slice();for(var l=0;l<a.length;l++)r(a[l]);var f=c;t()}([])</script><script src="./static/js/2.fd2c2c30.chunk.js"></script><script src="./static/js/main.801aa0ec.chunk.js"></script></body></html>
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><link href="./static/css/2.a684aa27.chunk.css" rel="stylesheet"><link href="./static/css/main.cbb91dd8.chunk.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script>!function(e){function r(r){for(var n,i,a=r[0],c=r[1],l=r[2],s=0,p=[];s<a.length;s++)i=a[s],Object.prototype.hasOwnProperty.call(o,i)&&o[i]&&p.push(o[i][0]),o[i]=0;for(n in c)Object.prototype.hasOwnProperty.call(c,n)&&(e[n]=c[n]);for(f&&f(r);p.length;)p.shift()();return u.push.apply(u,l||[]),t()}function t(){for(var e,r=0;r<u.length;r++){for(var t=u[r],n=!0,a=1;a<t.length;a++){var c=t[a];0!==o[c]&&(n=!1)}n&&(u.splice(r--,1),e=i(i.s=t[0]))}return e}var n={},o={1:0},u=[];function i(r){if(n[r])return n[r].exports;var t=n[r]={i:r,l:!1,exports:{}};return e[r].call(t.exports,t,t.exports,i),t.l=!0,t.exports}i.e=function(e){var r=[],t=o[e];if(0!==t)if(t)r.push(t[2]);else{var n=new Promise((function(r,n){t=o[e]=[r,n]}));r.push(t[2]=n);var u,a=document.createElement("script");a.charset="utf-8",a.timeout=120,i.nc&&a.setAttribute("nonce",i.nc),a.src=function(e){return i.p+"static/js/"+({}[e]||e)+"."+{3:"3cf0cbc4"}[e]+".chunk.js"}(e);var c=new Error;u=function(r){a.onerror=a.onload=null,clearTimeout(l);var t=o[e];if(0!==t){if(t){var n=r&&("load"===r.type?"missing":r.type),u=r&&r.target&&r.target.src;c.message="Loading chunk "+e+" failed.\n("+n+": "+u+")",c.name="ChunkLoadError",c.type=n,c.request=u,t[1](c)}o[e]=void 0}};var l=setTimeout((function(){u({type:"timeout",target:a})}),12e4);a.onerror=a.onload=u,document.head.appendChild(a)}return Promise.all(r)},i.m=e,i.c=n,i.d=function(e,r,t){i.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},i.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},i.t=function(e,r){if(1&r&&(e=i(e)),8&r)return e;if(4&r&&"object"==typeof e&&e&&e.__esModule)return e;var t=Object.create(null);if(i.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:e}),2&r&&"string"!=typeof e)for(var n in e)i.d(t,n,function(r){return e[r]}.bind(null,n));return t},i.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return i.d(r,"a",r),r},i.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},i.p="./",i.oe=function(e){throw console.error(e),e};var a=this.webpackJsonpvmui=this.webpackJsonpvmui||[],c=a.push.bind(a);a.push=r,a=a.slice();for(var l=0;l<a.length;l++)r(a[l]);var f=c;t()}([])</script><script src="./static/js/2.72d7cb01.chunk.js"></script><script src="./static/js/main.3fb3f4d1.chunk.js"></script></body></html>
1 app/vmselect/vmui/static/css/2.a684aa27.chunk.css Normal file
@@ -0,0 +1 @@
.uplot,.uplot *,.uplot :after,.uplot :before{box-sizing:border-box}.uplot{font-family:system-ui,-apple-system,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";line-height:1.5;width:-webkit-min-content;width:-moz-min-content;width:min-content}.u-title{text-align:center;font-size:18px;font-weight:700}.u-wrap{position:relative;-webkit-user-select:none;-ms-user-select:none;user-select:none}.u-over,.u-under{position:absolute}.u-under{overflow:hidden}.uplot canvas{display:block;position:relative;width:100%;height:100%}.u-legend{font-size:14px;margin:auto;text-align:center}.u-inline{display:block}.u-inline *{display:inline-block}.u-inline tr{margin-right:16px}.u-legend th{font-weight:600}.u-legend th>*{vertical-align:middle;display:inline-block}.u-legend .u-marker{width:1em;height:1em;margin-right:4px;background-clip:padding-box!important}.u-inline.u-live th:after{content:":";vertical-align:middle}.u-inline:not(.u-live) .u-value{display:none}.u-series>*{padding:4px}.u-series th{cursor:pointer}.u-legend .u-off>*{opacity:.3}.u-select{background:rgba(0,0,0,.07)}.u-cursor-x,.u-cursor-y,.u-select{position:absolute;pointer-events:none}.u-cursor-x,.u-cursor-y{left:0;top:0;will-change:transform;z-index:100}.u-hz .u-cursor-x,.u-vt .u-cursor-y{height:100%;border-right:1px dashed #607d8b}.u-hz .u-cursor-y,.u-vt .u-cursor-x{width:100%;border-bottom:1px dashed #607d8b}.u-cursor-pt{position:absolute;top:0;left:0;border-radius:50%;border:0 solid;pointer-events:none;will-change:transform;z-index:100;background-clip:padding-box!important}.u-cursor-pt.u-off,.u-cursor-x.u-off,.u-cursor-y.u-off,.u-select.u-off{display:none}
@@ -1 +1 @@
body{font-family:-apple-system,BlinkMacSystemFont,"Segoe UI","Roboto","Oxygen","Ubuntu","Cantarell","Fira Sans","Droid Sans","Helvetica Neue",sans-serif;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}code{font-family:source-code-pro,Menlo,Monaco,Consolas,"Courier New",monospace}.MuiAccordionSummary-content{margin:10px 0!important}.cm-activeLine{background-color:inherit!important}.cm-wrap{border-radius:4px;border:1px solid #b9b9b9;font-size:10px}.one-line-scroll .cm-wrap{height:24px}.cm-content,.cm-gutter{min-height:51px}.one-line-scroll .cm-content,.one-line-scroll .cm-gutter{min-height:auto}.line{fill:none;stroke-width:2}.overlay{fill:none;pointer-events:all}.dot{fill:#621773;stroke:#fff}
body{font-family:-apple-system,BlinkMacSystemFont,"Segoe UI","Roboto","Oxygen","Ubuntu","Cantarell","Fira Sans","Droid Sans","Helvetica Neue",sans-serif;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}code{font-family:source-code-pro,Menlo,Monaco,Consolas,"Courier New",monospace}.MuiAccordionSummary-content{margin:10px 0!important}.cm-activeLine{background-color:inherit!important}.cm-wrap{border-radius:4px;border:1px solid #b9b9b9;font-size:10px}.one-line-scroll .cm-wrap{height:24px}.cm-content,.cm-gutter{min-height:51px}.one-line-scroll .cm-content,.one-line-scroll .cm-gutter{min-height:auto}.uplot .u-legend{display:grid;align-items:center;justify-content:start;text-align:left;margin-top:25px}.uplot .u-legend .u-series{font-size:12px}
2 app/vmselect/vmui/static/js/2.72d7cb01.chunk.js Normal file
File diff suppressed because one or more lines are too long
@@ -4,6 +4,13 @@ object-assign
 @license MIT
 */

+/*!
+ * chartjs-adapter-date-fns v2.0.0
+ * https://www.chartjs.org
+ * (c) 2021 chartjs-adapter-date-fns Contributors
+ * Released under the MIT license
+ */
+
 /*! *****************************************************************************
 Copyright (c) Microsoft Corporation.


@@ -19,6 +26,48 @@ OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
 PERFORMANCE OF THIS SOFTWARE.
 ***************************************************************************** */

+/*! @preserve
+ * numeral.js
+ * version : 2.0.6
+ * author : Adam Draper
+ * license : MIT
+ * http://adamwdraper.github.com/Numeral-js/
+ */
+
+/*! Hammer.JS - v2.0.7 - 2016-04-22
+ * http://hammerjs.github.io/
+ *
+ * Copyright (c) 2016 Jorik Tangelder;
+ * Licensed under the MIT license */
+
+/*! exports provided: default */
+
+/*! exports provided: optionsUpdateState, dataMatch */
+
+/*! no static exports found */
+
+/*! react */
+
+/*! uplot */
+
+/*! uplot-wrappers-common */
+
+/*!*************************!*\
+  !*** ./common/index.ts ***!
+  \*************************/
+
+/*!*******************************!*\
+  !*** ./react/uplot-react.tsx ***!
+  \*******************************/
+
+/*!**************************************************************************************!*\
+  !*** external {"amd":"react","commonjs":"react","commonjs2":"react","root":"React"} ***!
+  \**************************************************************************************/
+
+/*!**************************************************************************************!*\
+  !*** external {"amd":"uplot","commonjs":"uplot","commonjs2":"uplot","root":"uPlot"} ***!
+  \**************************************************************************************/
+
 /**
  * A better abstraction over CSS.
  *
File diff suppressed because one or more lines are too long
@@ -1 +1 @@
(this.webpackJsonpvmui=this.webpackJsonpvmui||[]).push([[3],{432:function(t,n,e){"use strict";e.r(n),e.d(n,"getCLS",(function(){return l})),e.d(n,"getFCP",(function(){return g})),e.d(n,"getFID",(function(){return h})),e.d(n,"getLCP",(function(){return y})),e.d(n,"getTTFB",(function(){return F}));var i,a,r=function(){return"".concat(Date.now(),"-").concat(Math.floor(8999999999999*Math.random())+1e12)},o=function(t){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:-1;return{name:t,value:n,delta:0,entries:[],id:r(),isFinal:!1}},u=function(t,n){try{if(PerformanceObserver.supportedEntryTypes.includes(t)){var e=new PerformanceObserver((function(t){return t.getEntries().map(n)}));return e.observe({type:t,buffered:!0}),e}}catch(t){}},s=!1,c=!1,d=function(t){s=!t.persisted},f=function(){addEventListener("pagehide",d),addEventListener("beforeunload",(function(){}))},p=function(t){var n=arguments.length>1&&void 0!==arguments[1]&&arguments[1];c||(f(),c=!0),addEventListener("visibilitychange",(function(n){var e=n.timeStamp;"hidden"===document.visibilityState&&t({timeStamp:e,isUnloading:s})}),{capture:!0,once:n})},v=function(t,n,e,i){var a;return function(){e&&n.isFinal&&e.disconnect(),n.value>=0&&(i||n.isFinal||"hidden"===document.visibilityState)&&(n.delta=n.value-(a||0),(n.delta||n.isFinal||void 0===a)&&(t(n),a=n.value))}},l=function(t){var n,e=arguments.length>1&&void 0!==arguments[1]&&arguments[1],i=o("CLS",0),a=function(t){t.hadRecentInput||(i.value+=t.value,i.entries.push(t),n())},r=u("layout-shift",a);r&&(n=v(t,i,r,e),p((function(t){var e=t.isUnloading;r.takeRecords().map(a),e&&(i.isFinal=!0),n()})))},m=function(){return void 0===i&&(i="hidden"===document.visibilityState?0:1/0,p((function(t){var n=t.timeStamp;return i=n}),!0)),{get timeStamp(){return i}}},g=function(t){var n,e=o("FCP"),i=m(),a=u("paint",(function(t){"first-contentful-paint"===t.name&&t.startTime<i.timeStamp&&(e.value=t.startTime,e.isFinal=!0,e.entries.push(t),n())}));a&&(n=v(t,e,a))},h=function(t){var n=o("FID"),e=m(),i=function(t){t.startTime<e.timeStamp&&(n.value=t.processingStart-t.startTime,n.entries.push(t),n.isFinal=!0,r())},a=u("first-input",i),r=v(t,n,a);a?p((function(){a.takeRecords().map(i),a.disconnect()}),!0):window.perfMetrics&&window.perfMetrics.onFirstInputDelay&&window.perfMetrics.onFirstInputDelay((function(t,i){i.timeStamp<e.timeStamp&&(n.value=t,n.isFinal=!0,n.entries=[{entryType:"first-input",name:i.type,target:i.target,cancelable:i.cancelable,startTime:i.timeStamp,processingStart:i.timeStamp+t}],r())}))},S=function(){return a||(a=new Promise((function(t){return["scroll","keydown","pointerdown"].map((function(n){addEventListener(n,t,{once:!0,passive:!0,capture:!0})}))}))),a},y=function(t){var n,e=arguments.length>1&&void 0!==arguments[1]&&arguments[1],i=o("LCP"),a=m(),r=function(t){var e=t.startTime;e<a.timeStamp?(i.value=e,i.entries.push(t)):i.isFinal=!0,n()},s=u("largest-contentful-paint",r);if(s){n=v(t,i,s,e);var c=function(){i.isFinal||(s.takeRecords().map(r),i.isFinal=!0,n())};S().then(c),p(c,!0)}},F=function(t){var n,e=o("TTFB");n=function(){try{var n=performance.getEntriesByType("navigation")[0]||function(){var t=performance.timing,n={entryType:"navigation",startTime:0};for(var e in t)"navigationStart"!==e&&"toJSON"!==e&&(n[e]=Math.max(t[e]-t.navigationStart,0));return n}();e.value=e.delta=n.responseStart,e.entries=[n],e.isFinal=!0,t(e)}catch(t){}},"complete"===document.readyState?setTimeout(n,0):addEventListener("pageshow",n)}}}]);
(this.webpackJsonpvmui=this.webpackJsonpvmui||[]).push([[3],{271:function(t,n,e){"use strict";e.r(n),e.d(n,"getCLS",(function(){return l})),e.d(n,"getFCP",(function(){return g})),e.d(n,"getFID",(function(){return h})),e.d(n,"getLCP",(function(){return y})),e.d(n,"getTTFB",(function(){return F}));var i,a,r=function(){return"".concat(Date.now(),"-").concat(Math.floor(8999999999999*Math.random())+1e12)},o=function(t){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:-1;return{name:t,value:n,delta:0,entries:[],id:r(),isFinal:!1}},u=function(t,n){try{if(PerformanceObserver.supportedEntryTypes.includes(t)){var e=new PerformanceObserver((function(t){return t.getEntries().map(n)}));return e.observe({type:t,buffered:!0}),e}}catch(t){}},s=!1,c=!1,d=function(t){s=!t.persisted},f=function(){addEventListener("pagehide",d),addEventListener("beforeunload",(function(){}))},p=function(t){var n=arguments.length>1&&void 0!==arguments[1]&&arguments[1];c||(f(),c=!0),addEventListener("visibilitychange",(function(n){var e=n.timeStamp;"hidden"===document.visibilityState&&t({timeStamp:e,isUnloading:s})}),{capture:!0,once:n})},v=function(t,n,e,i){var a;return function(){e&&n.isFinal&&e.disconnect(),n.value>=0&&(i||n.isFinal||"hidden"===document.visibilityState)&&(n.delta=n.value-(a||0),(n.delta||n.isFinal||void 0===a)&&(t(n),a=n.value))}},l=function(t){var n,e=arguments.length>1&&void 0!==arguments[1]&&arguments[1],i=o("CLS",0),a=function(t){t.hadRecentInput||(i.value+=t.value,i.entries.push(t),n())},r=u("layout-shift",a);r&&(n=v(t,i,r,e),p((function(t){var e=t.isUnloading;r.takeRecords().map(a),e&&(i.isFinal=!0),n()})))},m=function(){return void 0===i&&(i="hidden"===document.visibilityState?0:1/0,p((function(t){var n=t.timeStamp;return i=n}),!0)),{get timeStamp(){return i}}},g=function(t){var n,e=o("FCP"),i=m(),a=u("paint",(function(t){"first-contentful-paint"===t.name&&t.startTime<i.timeStamp&&(e.value=t.startTime,e.isFinal=!0,e.entries.push(t),n())}));a&&(n=v(t,e,a))},h=function(t){var n=o("FID"),e=m(),i=function(t){t.startTime<e.timeStamp&&(n.value=t.processingStart-t.startTime,n.entries.push(t),n.isFinal=!0,r())},a=u("first-input",i),r=v(t,n,a);a?p((function(){a.takeRecords().map(i),a.disconnect()}),!0):window.perfMetrics&&window.perfMetrics.onFirstInputDelay&&window.perfMetrics.onFirstInputDelay((function(t,i){i.timeStamp<e.timeStamp&&(n.value=t,n.isFinal=!0,n.entries=[{entryType:"first-input",name:i.type,target:i.target,cancelable:i.cancelable,startTime:i.timeStamp,processingStart:i.timeStamp+t}],r())}))},S=function(){return a||(a=new Promise((function(t){return["scroll","keydown","pointerdown"].map((function(n){addEventListener(n,t,{once:!0,passive:!0,capture:!0})}))}))),a},y=function(t){var n,e=arguments.length>1&&void 0!==arguments[1]&&arguments[1],i=o("LCP"),a=m(),r=function(t){var e=t.startTime;e<a.timeStamp?(i.value=e,i.entries.push(t)):i.isFinal=!0,n()},s=u("largest-contentful-paint",r);if(s){n=v(t,i,s,e);var c=function(){i.isFinal||(s.takeRecords().map(r),i.isFinal=!0,n())};S().then(c),p(c,!0)}},F=function(t){var n,e=o("TTFB");n=function(){try{var n=performance.getEntriesByType("navigation")[0]||function(){var t=performance.timing,n={entryType:"navigation",startTime:0};for(var e in t)"navigationStart"!==e&&"toJSON"!==e&&(n[e]=Math.max(t[e]-t.navigationStart,0));return n}();e.value=e.delta=n.responseStart,e.entries=[n],e.isFinal=!0,t(e)}catch(t){}},"complete"===document.readyState?setTimeout(n,0):addEventListener("pageshow",n)}}}]);
1 app/vmselect/vmui/static/js/main.3fb3f4d1.chunk.js Normal file
File diff suppressed because one or more lines are too long
@@ -1 +1 @@
!function(e){function r(r){for(var n,i,a=r[0],c=r[1],l=r[2],s=0,p=[];s<a.length;s++)i=a[s],Object.prototype.hasOwnProperty.call(o,i)&&o[i]&&p.push(o[i][0]),o[i]=0;for(n in c)Object.prototype.hasOwnProperty.call(c,n)&&(e[n]=c[n]);for(f&&f(r);p.length;)p.shift()();return u.push.apply(u,l||[]),t()}function t(){for(var e,r=0;r<u.length;r++){for(var t=u[r],n=!0,a=1;a<t.length;a++){var c=t[a];0!==o[c]&&(n=!1)}n&&(u.splice(r--,1),e=i(i.s=t[0]))}return e}var n={},o={1:0},u=[];function i(r){if(n[r])return n[r].exports;var t=n[r]={i:r,l:!1,exports:{}};return e[r].call(t.exports,t,t.exports,i),t.l=!0,t.exports}i.e=function(e){var r=[],t=o[e];if(0!==t)if(t)r.push(t[2]);else{var n=new Promise((function(r,n){t=o[e]=[r,n]}));r.push(t[2]=n);var u,a=document.createElement("script");a.charset="utf-8",a.timeout=120,i.nc&&a.setAttribute("nonce",i.nc),a.src=function(e){return i.p+"static/js/"+({}[e]||e)+"."+{3:"c36fc28c"}[e]+".chunk.js"}(e);var c=new Error;u=function(r){a.onerror=a.onload=null,clearTimeout(l);var t=o[e];if(0!==t){if(t){var n=r&&("load"===r.type?"missing":r.type),u=r&&r.target&&r.target.src;c.message="Loading chunk "+e+" failed.\n("+n+": "+u+")",c.name="ChunkLoadError",c.type=n,c.request=u,t[1](c)}o[e]=void 0}};var l=setTimeout((function(){u({type:"timeout",target:a})}),12e4);a.onerror=a.onload=u,document.head.appendChild(a)}return Promise.all(r)},i.m=e,i.c=n,i.d=function(e,r,t){i.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},i.r=function(e){"undefined"!==typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},i.t=function(e,r){if(1&r&&(e=i(e)),8&r)return e;if(4&r&&"object"===typeof e&&e&&e.__esModule)return e;var t=Object.create(null);if(i.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:e}),2&r&&"string"!=typeof e)for(var n in e)i.d(t,n,function(r){return e[r]}.bind(null,n));return t},i.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return i.d(r,"a",r),r},i.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},i.p="./",i.oe=function(e){throw console.error(e),e};var a=this.webpackJsonpvmui=this.webpackJsonpvmui||[],c=a.push.bind(a);a.push=r,a=a.slice();for(var l=0;l<a.length;l++)r(a[l]);var f=c;t()}([]);
!function(e){function r(r){for(var n,i,a=r[0],c=r[1],l=r[2],s=0,p=[];s<a.length;s++)i=a[s],Object.prototype.hasOwnProperty.call(o,i)&&o[i]&&p.push(o[i][0]),o[i]=0;for(n in c)Object.prototype.hasOwnProperty.call(c,n)&&(e[n]=c[n]);for(f&&f(r);p.length;)p.shift()();return u.push.apply(u,l||[]),t()}function t(){for(var e,r=0;r<u.length;r++){for(var t=u[r],n=!0,a=1;a<t.length;a++){var c=t[a];0!==o[c]&&(n=!1)}n&&(u.splice(r--,1),e=i(i.s=t[0]))}return e}var n={},o={1:0},u=[];function i(r){if(n[r])return n[r].exports;var t=n[r]={i:r,l:!1,exports:{}};return e[r].call(t.exports,t,t.exports,i),t.l=!0,t.exports}i.e=function(e){var r=[],t=o[e];if(0!==t)if(t)r.push(t[2]);else{var n=new Promise((function(r,n){t=o[e]=[r,n]}));r.push(t[2]=n);var u,a=document.createElement("script");a.charset="utf-8",a.timeout=120,i.nc&&a.setAttribute("nonce",i.nc),a.src=function(e){return i.p+"static/js/"+({}[e]||e)+"."+{3:"3cf0cbc4"}[e]+".chunk.js"}(e);var c=new Error;u=function(r){a.onerror=a.onload=null,clearTimeout(l);var t=o[e];if(0!==t){if(t){var n=r&&("load"===r.type?"missing":r.type),u=r&&r.target&&r.target.src;c.message="Loading chunk "+e+" failed.\n("+n+": "+u+")",c.name="ChunkLoadError",c.type=n,c.request=u,t[1](c)}o[e]=void 0}};var l=setTimeout((function(){u({type:"timeout",target:a})}),12e4);a.onerror=a.onload=u,document.head.appendChild(a)}return Promise.all(r)},i.m=e,i.c=n,i.d=function(e,r,t){i.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},i.r=function(e){"undefined"!==typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},i.t=function(e,r){if(1&r&&(e=i(e)),8&r)return e;if(4&r&&"object"===typeof e&&e&&e.__esModule)return e;var t=Object.create(null);if(i.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:e}),2&r&&"string"!=typeof e)for(var n in e)i.d(t,n,function(r){return e[r]}.bind(null,n));return t},i.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return i.d(r,"a",r),r},i.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},i.p="./",i.oe=function(e){throw console.error(e),e};var a=this.webpackJsonpvmui=this.webpackJsonpvmui||[],c=a.push.bind(a);a.push=r,a=a.slice();for(var l=0;l<a.length;l++)r(a[l]);var f=c;t()}([]);
@@ -1,6 +1,7 @@
 package vmstorage

 import (
+	"errors"
 	"flag"
 	"fmt"
 	"net/http"
@@ -46,6 +47,8 @@ var (
 		"Excess series are logged and dropped. This can be useful for limiting series cardinality. See also -storage.maxDailySeries")
 	maxDailySeries = flag.Int("storage.maxDailySeries", 0, "The maximum number of unique series can be added to the storage during the last 24 hours. "+
 		"Excess series are logged and dropped. This can be useful for limiting series churn rate. See also -storage.maxHourlySeries")
+
+	minFreeDiskSpaceBytes = flagutil.NewBytes("storage.minFreeDiskSpaceBytes", 10e6, "The minimum free disk space at -storageDataPath after which the storage stops accepting new data")
 )

 // CheckTimeRange returns true if the given tr is denied for querying.
@@ -82,6 +85,7 @@ func InitWithoutMetrics(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
 	storage.SetFinalMergeDelay(*finalMergeDelay)
 	storage.SetBigMergeWorkersCount(*bigMergeConcurrency)
 	storage.SetSmallMergeWorkersCount(*smallMergeConcurrency)
+	storage.SetFreeDiskSpaceLimit(minFreeDiskSpaceBytes.N)

 	logger.Infof("opening storage at %q with -retentionPeriod=%s", *DataPath, retentionPeriod)
 	startTime := time.Now()
@@ -121,6 +125,9 @@ var resetResponseCacheIfNeeded func(mrs []storage.MetricRow)

 // AddRows adds mrs to the storage.
 func AddRows(mrs []storage.MetricRow) error {
+	if Storage.IsReadOnly() {
+		return errReadOnly
+	}
 	resetResponseCacheIfNeeded(mrs)
 	WG.Add(1)
 	err := Storage.AddRows(mrs, uint8(*precisionBits))
@@ -128,6 +135,8 @@ func AddRows(mrs []storage.MetricRow) error {
 	return err
 }

+var errReadOnly = errors.New("the storage is in read-only mode; check -storage.minFreeDiskSpaceBytes command-line flag value")
+
 // RegisterMetricNames registers all the metrics from mrs in the storage.
 func RegisterMetricNames(mrs []storage.MetricRow) error {
 	WG.Add(1)
@@ -392,6 +401,15 @@ func registerStorageMetrics() {
 	metrics.NewGauge(fmt.Sprintf(`vm_free_disk_space_bytes{path=%q}`, *DataPath), func() float64 {
 		return float64(fs.MustGetFreeSpace(*DataPath))
 	})
+	metrics.NewGauge(fmt.Sprintf(`vm_free_disk_space_limit_bytes{path=%q}`, *DataPath), func() float64 {
+		return float64(minFreeDiskSpaceBytes.N)
+	})
+	metrics.NewGauge(fmt.Sprintf(`vm_storage_is_read_only{path=%q}`, *DataPath), func() float64 {
+		if Storage.IsReadOnly() {
+			return 1
+		}
+		return 0
+	})

 	metrics.NewGauge(`vm_active_merges{type="storage/big"}`, func() float64 {
 		return float64(tm().ActiveBigMerges)
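The hunks above add the `-storage.minFreeDiskSpaceBytes` flag, a read-only mode that rejects new rows while free disk space stays below that limit, and two gauges for observing it. A minimal sketch (not part of this commit) of how the new state could be detected from outside by scraping the metrics endpoint; it assumes a single-node VictoriaMetrics listening on localhost:8428, while the metric name comes from the diff above:

```ts
// Sketch: returns true while the storage refuses new data.
// vm_storage_is_read_only{path="..."} reports 1 when free disk space at
// -storageDataPath is below -storage.minFreeDiskSpaceBytes, and 0 otherwise.
async function isStorageReadOnly(base = "http://localhost:8428"): Promise<boolean> {
  const res = await fetch(`${base}/metrics`);
  const text = await res.text();
  const line = text.split("\n").find(l => l.startsWith("vm_storage_is_read_only"));
  return line !== undefined && line.trim().endsWith(" 1");
}
```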
7 app/vmui/packages/vmui/config-overrides.js Normal file
@@ -0,0 +1,7 @@
+// eslint-disable-next-line @typescript-eslint/no-var-requires,no-undef
+const {override, addExternalBabelPlugin} = require("customize-cra");
+
+// eslint-disable-next-line no-undef
+module.exports = override(
+    addExternalBabelPlugin("@babel/plugin-proposal-nullish-coalescing-operator")
+);
1375 app/vmui/packages/vmui/package-lock.json generated
File diff suppressed because it is too large
@@ -13,30 +13,40 @@
"@testing-library/jest-dom": "^5.11.6",
|
||||
"@testing-library/react": "^11.1.2",
|
||||
"@testing-library/user-event": "^12.2.2",
|
||||
"@types/d3": "^6.1.0",
|
||||
"@types/chart.js": "^2.9.34",
|
||||
"@types/jest": "^26.0.15",
|
||||
"@types/lodash.debounce": "^4.0.6",
|
||||
"@types/lodash.get": "^4.4.6",
|
||||
"@types/node": "^12.19.4",
|
||||
"@types/numeral": "^2.0.2",
|
||||
"@types/qs": "^6.9.6",
|
||||
"@types/react": "^16.9.56",
|
||||
"@types/react-dom": "^16.9.9",
|
||||
"@types/react-measure": "^2.0.6",
|
||||
"chart.js": "^3.5.1",
|
||||
"chartjs-adapter-date-fns": "^2.0.0",
|
||||
"chartjs-plugin-zoom": "^1.1.1",
|
||||
"codemirror-promql": "^0.10.2",
|
||||
"d3": "^6.2.0",
|
||||
"date-fns": "^2.23.0",
|
||||
"dayjs": "^1.10.4",
|
||||
"lodash.debounce": "^4.0.8",
|
||||
"lodash.get": "^4.4.2",
|
||||
"numeral": "^2.0.6",
|
||||
"qs": "^6.5.2",
|
||||
"react": "^17.0.1",
|
||||
"react-chartjs-2": "^3.0.5",
|
||||
"react-dom": "^17.0.1",
|
||||
"react-measure": "^2.5.2",
|
||||
"react-scripts": "4.0.0",
|
||||
"typescript": "~4.0.5",
|
||||
"uplot": "^1.6.16",
|
||||
"uplot-react": "^1.1.1",
|
||||
"web-vitals": "^0.2.4"
|
||||
},
|
||||
"scripts": {
|
||||
"start": "react-scripts start",
|
||||
"build": "GENERATE_SOURCEMAP=false react-scripts build",
|
||||
"test": "react-scripts test",
|
||||
"start": "react-app-rewired start",
|
||||
"build": "GENERATE_SOURCEMAP=false react-app-rewired build",
|
||||
"test": "react-app-rewired test",
|
||||
"eject": "react-scripts eject",
|
||||
"lint": "eslint src --ext tsx,ts",
|
||||
"lint:fix": "eslint src --ext tsx,ts --fix"
|
||||
|
@@ -60,9 +70,12 @@
     ]
   },
   "devDependencies": {
+    "@babel/plugin-proposal-nullish-coalescing-operator": "^7.14.5",
     "@typescript-eslint/eslint-plugin": "^4.14.2",
     "@typescript-eslint/parser": "^4.14.2",
+    "customize-cra": "^1.0.0",
     "eslint": "^7.14.0",
-    "eslint-plugin-react": "^7.22.0"
+    "eslint-plugin-react": "^7.22.0",
+    "react-app-rewired": "^2.1.8"
   }
 }
@@ -3,7 +3,7 @@ import {Box, Popover, TextField, Typography} from "@material-ui/core";
 import { KeyboardDateTimePicker } from "@material-ui/pickers";
 import {TimeDurationPopover} from "./TimeDurationPopover";
 import {useAppDispatch, useAppState} from "../../../state/common/StateContext";
-import {dateFromSeconds, formatDateForNativeInput} from "../../../utils/time";
+import {checkDurationLimit, dateFromSeconds, formatDateForNativeInput} from "../../../utils/time";
 import {InlineBtn} from "../../common/InlineBtn";

 interface TimeSelectorProps {
@@ -33,7 +33,9 @@ export const TimeSelector: FC<TimeSelectorProps> = ({setDuration}) => {

   useEffect(() => {
     if (!durationStringFocused) {
-      setDuration(durationString);
+      const value = checkDurationLimit(durationString);
+      setDurationString(value);
+      setDuration(value);
     }
   }, [durationString, durationStringFocused]);
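`checkDurationLimit` is imported above, but its body lives in `utils/time`, outside this diff. A hypothetical sketch of the kind of clamp such a helper performs, under the assumption that it validates the typed duration string and falls back to sane defaults; the name suffix, parser, and bounds here are illustrative only:

```ts
// Illustrative only: the real checkDurationLimit is not shown in this diff.
const UNITS_MS: Record<string, number> = {s: 1e3, m: 6e4, h: 36e5, d: 864e5};

function checkDurationLimitSketch(dur: string): string {
  const m = /^(\d+)([smhd])$/.exec(dur.trim());
  if (!m) return "1h";                 // unparsable input: fall back to default range
  const ms = Number(m[1]) * UNITS_MS[m[2]];
  const maxMs = 30 * UNITS_MS.d;       // assumed upper bound of 30 days
  return ms > maxMs ? "30d" : dur.trim();
}
```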
@@ -11,7 +11,7 @@ export const useFetchQuery = (): {
   isLoading: boolean,
   graphData?: MetricResult[],
   liveData?: InstantMetricResult[],
-  error?: string
+  error?: string,
 } => {
   const {query, displayType, serverUrl, time: {period}} = useAppState();

@@ -40,8 +40,10 @@ export const useFetchQuery = (): {
       return;
     }
     if (isValidHttpUrl(serverUrl)) {
+      const duration = (period.end - period.start)/2;
+      const doublePeriod = {...period, start: period.start - duration, end: period.end + duration};
       return displayType === "chart"
-        ? getQueryRangeUrl(serverUrl, query, period)
+        ? getQueryRangeUrl(serverUrl, query, doublePeriod)
         : getQueryUrl(serverUrl, query, period);
     } else {
       setError("Please provide a valid URL");
@@ -63,16 +65,21 @@ export const useFetchQuery = (): {
       headers.set("Authorization", bearerData?.token || "");
     }
     setIsLoading(true);
-    const response = await fetch(fetchUrl, {
-      headers
-    });
-    if (response.ok) {
-      saveToStorage("LAST_QUERY", query);
-      const resp = await response.json();
-      setError(undefined);
-      displayType === "chart" ? setGraphData(resp.data.result) : setLiveData(resp.data.result);
-    } else {
-      setError((await response.json())?.error);
+    try {
+      const response = await fetch(fetchUrl, {
+        headers
+      });
+      if (response.ok) {
+        saveToStorage("LAST_QUERY", query);
+        const resp = await response.json();
+        setError(undefined);
+        displayType === "chart" ? setGraphData(resp.data.result) : setLiveData(resp.data.result);
+      } else {
+        setError((await response.json())?.error);
+      }
+    }
+    catch (e) {
+      setError(e.message);
     }
     setIsLoading(false);
   }
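The widened range above is the interesting part of this hunk: for chart queries the hook now asks the server for half a period of extra data on each side, so the pan and zoom handlers added to LineChart further down have points available before a refetch lands. The same arithmetic, isolated as a sketch (names are mine, not the hook's):

```ts
// period.start and period.end are unix seconds, as in the hook above.
function widen(period: {start: number; end: number}) {
  const pad = (period.end - period.start) / 2;
  return {...period, start: period.start - pad, end: period.end + pad};
}

// A 10:00-11:00 window therefore fetches 09:30-11:30:
widen({start: 1632830400, end: 1632834000}); // {start: 1632828600, end: 1632835800}
```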
@@ -50,32 +50,32 @@ const HomeLayout: FC = () => {
         <UrlCopy url={fetchUrl}/>
       </Toolbar>
     </AppBar>
-    <Box display="flex" flexDirection="column" style={{minHeight: "calc(100vh - 64px)"}}>
-      <Box m={2}>
+    <Box p={2} display="grid" gridTemplateRows="auto 1fr" gridGap={"20px"} style={{minHeight: "calc(100vh - 64px)"}}>
+      <Box>
         <QueryConfigurator/>
       </Box>
-      <Box flexShrink={1}>
+      <Box height={"100%"}>
        {isLoading && <Fade in={isLoading} style={{
          transitionDelay: isLoading ? "300ms" : "0ms",
        }}>
-         <Box alignItems="center" flexDirection="column" display="flex"
+         <Box alignItems="center" justifyContent="center" flexDirection="column" display="flex"
            style={{
              width: "100%",
-             maxWidth: "calc(100vh - 32px)",
+             maxWidth: "calc(100vw - 32px)",
              position: "absolute",
-             height: "150px",
+             height: "50%",
              background: "linear-gradient(rgba(255,255,255,.7), rgba(255,255,255,.7), rgba(255,255,255,0))"
-           }} m={2}>
+           }}>
            <CircularProgress/>
          </Box>
        </Fade>}
-       {<Box p={2}>
+       {<Box height={"100%"} p={3} bgcolor={"#fff"}>
          {error &&
-         <Alert color="error" style={{fontSize: "14px"}}>
+         <Alert color="error" severity="error" style={{fontSize: "14px"}}>
            {error}
          </Alert>}
          {graphData && period && (displayType === "chart") &&
-         <GraphView data={graphData} timePresets={period}></GraphView>}
+         <GraphView data={graphData}/>}
          {liveData && (displayType === "code") && <JsonView data={liveData}/>}
          {liveData && (displayType === "table") && <TableView data={liveData}/>}
        </Box>}
@@ -1,126 +1,16 @@
-import React, {FC, useEffect, useMemo, useState} from "react";
-import {MetricResult} from "../../../api/types";
-
-import {schemeCategory10, scaleOrdinal, interpolateRainbow, range as d3Range} from "d3";
-
-import {LineChart} from "../../LineChart/LineChart";
-import {DataSeries, TimeParams} from "../../../types";
-import {getNameForMetric} from "../../../utils/metric";
-import {Legend, LegendItem} from "../../Legend/Legend";
-import {useSortedCategories} from "../../../hooks/useSortedCategories";
-import {InlineBtn} from "../../common/InlineBtn";
-
-export interface GraphViewProps {
-  data: MetricResult[];
-  timePresets: TimeParams
-}
-
-const preDefinedScale = schemeCategory10;
-
-const initialMaxAmount = 20;
-const showingIncrement = 20;
-
-const GraphView: FC<GraphViewProps> = ({data, timePresets}) => {
-
-  const [showN, setShowN] = useState(initialMaxAmount);
-
-  const series: DataSeries[] = useMemo(() => {
-    return data?.map(d => ({
-      metadata: {
-        name: getNameForMetric(d)
-      },
-      metric: d.metric,
-      // VM metrics are tuples - much simpler to work with objects in chart
-      values: d.values.map(v => ({
-        key: v[0],
-        value: +v[1]
-      }))
-    }));
-  }, [data]);
-
-  const showingSeries = useMemo(() => series.slice(0 ,showN), [series, showN]);
-
-  const sortedCategories = useSortedCategories(data);
-
-  const seriesNames = useMemo(() => showingSeries.map(s => s.metadata.name), [showingSeries]);
-
-  // should not change as often as array of series names (for instance between executions of same query) to
-  // keep related state (like selection of a labels)
-  const [seriesNamesStable, setSeriesNamesStable] = useState(seriesNames);
-
-  useEffect(() => {
-    // primitive way to check the fact that array contents are identical
-    if (seriesNamesStable.join(",") !== seriesNames.join(",")) {
-      setSeriesNamesStable(seriesNames);
-    }
-  }, [seriesNames, setSeriesNamesStable, seriesNamesStable]);
-
-  const amountOfSeries = useMemo(() => series.length, [series]);
-
-  const color = useMemo(() => {
-    const len = seriesNamesStable.length;
-    const scheme = len <= preDefinedScale.length
-      ? preDefinedScale
-      : d3Range(len).map(d => d / len).map(interpolateRainbow); // dynamically generate n colors
-    return scaleOrdinal<string>()
-      .domain(seriesNamesStable) // associate series names with colors
-      .range(scheme);
-  }, [seriesNamesStable]);
-
-
-  // changes only if names of series are different
-  const initLabels = useMemo(() => {
-    return seriesNamesStable.map(name => ({
-      color: color(name),
-      seriesName: name,
-      labelData: showingSeries.find(s => s.metadata.name === name)?.metric, // find is O(n) - can do faster
-      checked: true // init with checked always
-    } as LegendItem));
-  }, [color, seriesNamesStable]);
-
-  const [labels, setLabels] = useState(initLabels);
-
-  useEffect(() => {
-    setLabels(initLabels);
-  }, [initLabels]);
-
-  const visibleNames = useMemo(() => labels.filter(l => l.checked).map(l => l.seriesName), [labels]);
-
-  const visibleSeries = useMemo(() => showingSeries.filter(s => visibleNames.includes(s.metadata.name)), [showingSeries, visibleNames]);
-
-  const onLegendChange = (index: number) => {
-    setLabels(prevState => {
-      if (prevState) {
-        const newState = [...prevState];
-        newState[index] = {...newState[index], checked: !newState[index].checked};
-        return newState;
-      }
-      return prevState;
-    });
-  };
-
-  return <>
-    {(amountOfSeries > 0)
-      ? <>
-        {amountOfSeries > initialMaxAmount && <div style={{textAlign: "center"}}>
-          {amountOfSeries > showN
-            ? <span style={{fontStyle: "italic"}}>Showing only first {showN} of {amountOfSeries} series.
-              {showN + showingIncrement >= amountOfSeries
-                ?
-                <InlineBtn handler={() => setShowN(amountOfSeries)} text="Show all"/>
-                :
-                <>
-                  <InlineBtn handler={() => setShowN(prev => Math.min(prev + showingIncrement, amountOfSeries))} text={`Show ${showingIncrement} more`}/>,
-                  <InlineBtn handler={() => setShowN(amountOfSeries)} text="show all"/>.
-                </>}
-            </span>
-            : <span style={{fontStyle: "italic"}}>Showing all series.
-              <InlineBtn handler={() => setShowN(initialMaxAmount)} text={`Show only ${initialMaxAmount}`}/>.
-            </span>}
-        </div>}
-        <LineChart height={400} series={visibleSeries} color={color} timePresets={timePresets} categories={sortedCategories}></LineChart>
-        <Legend labels={labels} onChange={onLegendChange} categories={sortedCategories}></Legend>
-      </>
-      : <div style={{textAlign: "center"}}>No data to show</div>}
-  </>;
-};
+import React, {FC} from "react";
+import {MetricResult} from "../../../api/types";
+import LineChart from "../../LineChart/LineChart";
+import "../../../utils/chartjs-register-plugins";
+
+export interface GraphViewProps {
+  data?: MetricResult[];
+}
+
+const GraphView: FC<GraphViewProps> = ({data = []}) => {
+  return <>
+    {(data.length > 0)
+      ? <LineChart data={data} />
+      : <div style={{textAlign: "center"}}>No data to show</div>}
+  </>;
+};
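GraphView now defers all rendering to the uPlot-based LineChart shown further below. For orientation, uPlot consumes "aligned data": one shared x-array of timestamps plus one y-array per series, aligned by index, with null marking missing points; this is the shape the new `setDataChart([times, ...values])` call in LineChart builds. A hand-written instance, with invented numbers purely for illustration:

```ts
// uPlot AlignedData: x-values first, then one aligned y-array per series.
const aligned: (number | null)[][] = [
  [1632833600, 1632833620, 1632833640], // shared timestamps (unix seconds)
  [0.5, 0.7, null],                     // series A, missing point at t3
  [1.2, null, 1.4],                     // series B, missing point at t2
];
```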
@@ -1,18 +0,0 @@
-import React, {useEffect, useRef} from "react";
-import {axisBottom, ScaleTime, select as d3Select} from "d3";
-
-interface AxisBottomI {
-  xScale: ScaleTime<number, number>;
-  height: number;
-}
-
-export const AxisBottom: React.FC<AxisBottomI> = ({xScale, height}) => {
-  //eslint-disable-next-line @typescript-eslint/no-explicit-any
-  const ref = useRef<SVGSVGElement | any>(null);
-
-  useEffect(() => {
-    d3Select(ref.current)
-      .call(axisBottom<Date>(xScale));
-  }, [xScale]);
-  return <g ref={ref} className="x axis" transform={`translate(0, ${height})`} />;
-};
@@ -1,39 +0,0 @@
-import React, {useEffect, useRef} from "react";
-import {axisLeft, ScaleLinear, select as d3Select} from "d3";
-import {format as d3Format} from "d3-format";
-
-interface AxisLeftI {
-  yScale: ScaleLinear<number, number>;
-  label: string;
-}
-
-const yFormatter = (val: number): string => {
-  const v = Math.abs(val); // helps to handle negatives the same way
-  const DECIMAL_THRESHOLD = 0.001;
-  let format = ".2~s"; // 21K tilde means that it won't be 2.0K but just 2K
-  if (v > 0 && v < DECIMAL_THRESHOLD) {
-    format = ".0e"; // 1E-3 for values below DECIMAL_THRESHOLD
-  }
-  if (v >= DECIMAL_THRESHOLD && v < 1) {
-    format = ".3~f"; // just plain 0.932
-  }
-  return d3Format(format)(val);
-};
-
-export const AxisLeft: React.FC<AxisLeftI> = ({yScale, label}) => {
-  //eslint-disable-next-line @typescript-eslint/no-explicit-any
-  const ref = useRef<SVGSVGElement | any>(null);
-  useEffect(() => {
-    yScale && d3Select(ref.current).call(axisLeft<number>(yScale).tickFormat(yFormatter));
-  }, [yScale]);
-  return (
-    <>
-      <g className="y axis" ref={ref} />
-      {label && (
-        <text style={{fontSize: "0.6rem"}} transform="translate(0,-2)">
-          {label}
-        </text>
-      )}
-    </>
-  );
-};
@@ -1,48 +0,0 @@
-import React from "react";
-import {Box, makeStyles, Typography} from "@material-ui/core";
-
-export interface ChartTooltipData {
-  value: number;
-  metrics: {
-    key: string;
-    value: string;
-  }[];
-  color?: string;
-}
-
-export interface ChartTooltipProps {
-  data: ChartTooltipData;
-  time?: Date;
-}
-
-const useStyle = makeStyles(() => ({
-  wrapper: {
-    maxWidth: "40vw"
-  }
-}));
-
-export const ChartTooltip: React.FC<ChartTooltipProps> = ({data, time}) => {
-  const classes = useStyle();
-
-  return (
-    <Box px={1} className={classes.wrapper}>
-      <Box fontStyle="italic" mb={.5}>
-        <Typography variant="subtitle1">{`${time?.toLocaleDateString()} ${time?.toLocaleTimeString()}`}</Typography>
-      </Box>
-      <Box mb={.5} my={1}>
-        <Typography variant="subtitle2">{`Value: ${new Intl.NumberFormat(undefined, {
-          maximumFractionDigits: 10
-        }).format(data.value)}`}</Typography>
-      </Box>
-      <Box>
-        <Typography variant="body2">
-          {data.metrics.map(({key, value}) =>
-            <Box component="span" mb={.25} key={key} display="flex" flexDirection="row" alignItems="center">
-              <span>{key}: </span>
-              <span style={{fontWeight: "bold"}}>{value}</span>
-            </Box>)}
-        </Typography>
-      </Box>
-    </Box>
-  );
-};
@@ -1,124 +0,0 @@
-/* eslint max-lines: ["error", {"max": 200}] */ // Complex D3 logic here - file can be larger
-import React, {useEffect, useMemo, useRef, useState} from "react";
-import {bisector, brushX, pointer as d3Pointer, ScaleLinear, ScaleTime, select as d3Select} from "d3";
-
-interface LineI {
-  yScale: ScaleLinear<number, number>;
-  xScale: ScaleTime<number, number>;
-  datesInChart: Date[];
-  setSelection: (from: Date, to: Date) => void;
-  onInteraction: (index: number | undefined, y: number | undefined) => void; // key is index. undefined means no interaction
-}
-
-export const InteractionArea: React.FC<LineI> = ({yScale, xScale, datesInChart, onInteraction, setSelection}) => {
-  const refBrush = useRef<SVGGElement>(null);
-
-  const [currentActivePoint, setCurrentActivePoint] = useState<number>();
-  const [currentY, setCurrentY] = useState<number>();
-  const [isBrushed, setIsBrushed] = useState(false);
-
-  // eslint-disable-next-line @typescript-eslint/no-explicit-any,@typescript-eslint/explicit-function-return-type
-  function brushEnded(this: any, event: any) {
-    const selection = event.selection;
-    if (selection) {
-      if (!event.sourceEvent) return; // see comment in brushstarted
-      setIsBrushed(true);
-      const [from, to]: [Date, Date] = selection.map((s: number) => xScale.invert(s));
-      setSelection(from, to);
-      // eslint-disable-next-line @typescript-eslint/no-explicit-any
-      d3Select(refBrush.current).call(brush.move as any, null); // clean brush
-    } else {
-      // end event with empty selection means that we're cancelling brush
-      setIsBrushed(false);
-    }
-  }
-
-  // eslint-disable-next-line @typescript-eslint/no-explicit-any
-  const brushStarted = (event: any): void => {
-    // first of all: event is a d3 global value that stores current event (sort of).
-    // This is weird but this is how d3 works with events.
-    //This check is important:
-    // Inside brushended - we have .call(brush.move, ...) in order to snap selected range to dates
-    // that internally calls brushstarted again. But in this case sourceEvent is null, since the call
-    // is programmatic. If we do not need to adjust selected are - no need to have this check (probably)
-    if (event.sourceEvent) {
-      setCurrentActivePoint(undefined);
-    }
-  };
-
-  const brush = useMemo(
-    () =>
-      brushX()
-        .extent([
-          [0, 0],
-          [xScale.range()[1], yScale.range()[0]]
-        ])
-        .on("end", brushEnded)
-        .on("start", brushStarted),
-    [brushEnded, xScale, yScale]
-  );
-
-  // Needed to clean brush if we need to keep it
-
-  // const resetBrushHandler = useCallback(
-  //   (e) => {
-  //     const el = e.target as HTMLElement;
-  //     if (
-  //       el &&
-  //       el.tagName !== "rect" &&
-  //       e.target.classList.length &&
-  //       !e.target.classList.contains("selection")
-  //     ) {
-  //       // eslint-disable-next-line @typescript-eslint/no-explicit-any
-  //       d3Select(refBrush.current).call(brush.move as any, null);
-  //     }
-  //   },
-  //   [brush.move]
-  // );
-
-  // useEffect(() => {
-  //   window.addEventListener("click", resetBrushHandler);
-  //   return () => {
-  //     window.removeEventListener("click", resetBrushHandler);
-  //   };
-  // }, [resetBrushHandler]);
-
-  useEffect(() => {
-    const bisect = bisector((d: Date) => d).center;
-    const defineActivePoint = (mx: number): void => {
-      const date = xScale.invert(mx); // date is a Date object
-      const index = bisect(datesInChart, date, 1);
-      setCurrentActivePoint(index);
-    };
-
-    d3Select(refBrush.current)
-      .on("touchmove mousemove", (event) => {
-        const coords: [number, number] = d3Pointer(event);
-        if (!isBrushed) {
-          defineActivePoint(coords[0]);
-          setCurrentY(coords[1]);
-        }
-      })
-      .on("mouseout", () => {
-        if (!isBrushed) {
-          setCurrentActivePoint(undefined);
-        }
-      });
-  }, [xScale, datesInChart, isBrushed]);
-
-  useEffect(() => {
-    onInteraction(currentActivePoint, currentY);
-  }, [currentActivePoint, currentY, onInteraction]);
-
-  useEffect(() => {
-    // eslint-disable-next-line @typescript-eslint/ban-ts-comment
-    // @ts-ignore
-    brush && xScale && d3Select(refBrush.current).call(brush);
-  }, [xScale, brush]);
-
-  return (
-    <>
-      <g ref={refBrush} />
-    </>
-  );
-};
@@ -1,10 +0,0 @@
-import React from "react";
-
-interface LineI {
-  height: number;
-  x: number | undefined;
-}
-
-export const InteractionLine: React.FC<LineI> = ({height, x}) => {
-  return <>{x && <line x1={x} y1="0" x2={x} y2={height} stroke="black" strokeDasharray="4" />}</>;
-};
@@ -1,196 +1,147 @@
-/* eslint max-lines: ["error", {"max": 300}] */
-import React, {useCallback, useMemo, useRef, useState} from "react";
-import {line as d3Line, max as d3Max, min as d3Min, scaleLinear, ScaleOrdinal, scaleTime} from "d3";
-import "./line-chart.css";
-import Measure from "react-measure";
-import {AxisBottom} from "./AxisBottom";
-import {AxisLeft} from "./AxisLeft";
-import {DataSeries, DataValue, TimeParams} from "../../types";
-import {InteractionLine} from "./InteractionLine";
-import {InteractionArea} from "./InteractionArea";
-import {Box, Popover} from "@material-ui/core";
-import {ChartTooltip, ChartTooltipData} from "./ChartTooltip";
-import {useAppDispatch} from "../../state/common/StateContext";
-import {dateFromSeconds} from "../../utils/time";
-import {MetricCategory} from "../../hooks/useSortedCategories";
-
-interface LineChartProps {
-  series: DataSeries[];
-  timePresets: TimeParams;
-  height: number;
-  color: ScaleOrdinal<string, string>; // maps name to color hex code
-  categories: MetricCategory[];
-}
-
-interface TooltipState {
-  xCoord: number;
-  date: Date;
-  index: number;
-  leftPart: boolean;
-  activeSeries: number;
-}
-
-const TOOLTIP_MARGIN = 20;
-
-export const LineChart: React.FC<LineChartProps> = ({series, timePresets, height, color, categories}) => {
-  const [screenWidth, setScreenWidth] = useState<number>(window.innerWidth);
-
-  const dispatch = useAppDispatch();
-
-  const margin = {top: 10, right: 20, bottom: 40, left: 50};
-  const svgWidth = useMemo(() => screenWidth - margin.left - margin.right, [screenWidth, margin.left, margin.right]);
-  const svgHeight = useMemo(() => height - margin.top - margin.bottom, [margin.top, margin.bottom]);
-  const xScale = useMemo(() => scaleTime().domain([timePresets.start,timePresets.end].map(dateFromSeconds)).range([0, svgWidth]), [
-    svgWidth,
-    timePresets
-  ]);
-
-  const [showTooltip, setShowTooltip] = useState(false);
-
-  const [tooltipState, setTooltipState] = useState<TooltipState>();
-
-  const yAxisLabel = ""; // TODO: label
-
-  const yScale = useMemo(
-    () => {
-      const seriesValues = series.reduce((acc: DataValue[], next: DataSeries) => [...acc, ...next.values], []).map(_ => _.value);
-      const max = d3Max(seriesValues) ?? 1; // || 1 will cause one additional tick if max is 0
-      const min = d3Min(seriesValues) || 0;
-      return scaleLinear()
-        .domain([min > 0 ? 0 : min, max < 0 ? 0 : max]) // input
-        .range([svgHeight, 0])
-        .nice();
-    },
-    [series, svgHeight]
-  );
-
-  const line = useMemo(
-    () =>
-      d3Line<DataValue>()
-        .x((d) => xScale(dateFromSeconds(d.key)))
-        .y((d) => yScale(d.value || 0)),
-    [xScale, yScale]
-  );
-  const getDataLine = (series: DataSeries) => line(series.values);
-
-  const handleChartInteraction = useCallback(
-    async (key: number | undefined, y: number | undefined) => {
-      if (typeof key === "number") {
-        if (y && series && series[0]) {
-
-          // define closest series in chart
-          const hoveringOverValue = yScale.invert(y);
-          const closestPoint = series.map(s => s.values[key]?.value).reduce((acc, nextValue, index) => {
-            const delta = Math.abs(hoveringOverValue - nextValue);
-            if (delta < acc.delta) {
-              acc = {delta, index};
-            }
-            return acc;
-          }, {delta: Infinity, index: 0});
-
-          const date = dateFromSeconds(series[0].values[key].key);
-          // popover orientation should be defined based on the scale domain middle, not data, since
-          // data may not be present for the whole range
-          const leftPart = date.valueOf() < (xScale.domain()[1].valueOf() + xScale.domain()[0].valueOf()) / 2;
-          setTooltipState({
-            date,
-            xCoord: xScale(date),
-            index: key,
-            activeSeries: closestPoint.index,
-            leftPart
-          });
-          setShowTooltip(true);
-        }
-      } else {
-        setShowTooltip(false);
-        setTooltipState(undefined);
-      }
-    },
-    [xScale, yScale, series]
-  );
-
-  const tooltipData: ChartTooltipData | undefined = useMemo(() => {
-    if (tooltipState?.activeSeries) {
-      return {
-        value: series[tooltipState.activeSeries].values[tooltipState.index].value,
-        metrics: categories.map(c => ({ key: c.key, value: series[tooltipState.activeSeries].metric[c.key]}))
-      };
-    } else {
-      return undefined;
-    }
-  }, [tooltipState, series]);
-
-  const tooltipAnchor = useRef<SVGGElement>(null);
-
-  const seriesDates = useMemo(() => {
-    if (series && series[0]) {
-      return series[0].values.map(v => v.key).map(dateFromSeconds);
-    } else {
-      return [];
-    }
-  }, [series]);
-
-  const setSelection = (from: Date, to: Date) => {
-    dispatch({type: "SET_PERIOD", payload: {from, to}});
-  };
-
-  return (
-    <Measure bounds onResize={({bounds}) => bounds && setScreenWidth(bounds?.width)}>
-      {({measureRef}) => (
-        <div ref={measureRef} style={{width: "100%"}}>
-          {tooltipAnchor && tooltipData && (
-            <Popover
-              disableScrollLock={true}
-              style={{pointerEvents: "none"}} // IMPORTANT in order to allow interactions through popover's backdrop
-              id="chart-tooltip-popover"
-              open={showTooltip}
-              anchorEl={tooltipAnchor.current}
-              anchorOrigin={{
-                vertical: "top",
-                horizontal: tooltipState?.leftPart ? TOOLTIP_MARGIN : -TOOLTIP_MARGIN
-              }}
-              transformOrigin={{
-                vertical: "top",
-                horizontal: tooltipState?.leftPart ? "left" : "right"
-              }}
-              disableRestoreFocus>
-              <Box m={1}>
-                <ChartTooltip data={tooltipData} time={tooltipState?.date}/>
-              </Box>
-            </Popover>
-          )}
-          <svg width="100%" height={height}>
-            <g transform={`translate(${margin.left}, ${margin.top})`}>
-              <defs>
-                {/*Clip path helps to clip the line*/}
-                <clipPath id="clip-line">
-                  {/*Transforming and adding size to clip-path in order to avoid clipping of chart elements*/}
-                  <rect transform={"translate(0, -2)"} width={xScale.range()[1] + 4} height={yScale.range()[0] + 2} />
-                </clipPath>
-              </defs>
-              <AxisBottom xScale={xScale} height={svgHeight} />
-              <AxisLeft yScale={yScale} label={yAxisLabel} />
-              {series.map((s, i) =>
-                <path stroke={color(s.metadata.name)}
-                  key={i} className="line"
-                  style={{opacity: tooltipState?.activeSeries !== undefined ? (i === tooltipState?.activeSeries ? 1 : .2) : 1 }}
-                  d={getDataLine(s) as string}
-                  clipPath="url(#clip-line)"/>)}
-              <g ref={tooltipAnchor}>
-                <InteractionLine height={svgHeight} x={tooltipState?.xCoord} />
-              </g>
-              {/*NOTE: in SVG last element wins - so since we want mouseover to work in all area this should be last*/}
-              <InteractionArea
-                xScale={xScale}
-                yScale={yScale}
-                datesInChart={seriesDates}
-                onInteraction={handleChartInteraction}
-                setSelection={setSelection}
-              />
-            </g>
-          </svg>
-        </div>
-      )}
-    </Measure>
-  );
-};
+import React, {FC, useEffect, useMemo, useRef, useState} from "react";
+import {getNameForMetric} from "../../utils/metric";
+import "chartjs-adapter-date-fns";
+import {useAppDispatch, useAppState} from "../../state/common/StateContext";
+import {GraphViewProps} from "../Home/Views/GraphView";
+import uPlot, {AlignedData as uPlotData, Options as uPlotOptions, Series as uPlotSeries} from "uplot";
+import UplotReact from "uplot-react";
+import "uplot/dist/uPlot.min.css";
+import numeral from "numeral";
+import "./legend.css";
+
+const LineChart: FC<GraphViewProps> = ({data = []}) => {
+
+  const dispatch = useAppDispatch();
+  const {time: {period}} = useAppState();
+  const [dataChart, setDataChart] = useState<uPlotData>();
+  const [series, setSeries] = useState<uPlotSeries[]>([]);
+  const [scale, setScale] = useState({min: period.start, max: period.end});
+  const refContainer = useRef<HTMLDivElement>(null);
+
+  const getColorByName = (str: string): string => {
+    let hash = 0;
+    for (let i = 0; i < str.length; i++) {
+      hash = str.charCodeAt(i) + ((hash << 5) - hash);
+    }
+    let colour = "#";
+    for (let i = 0; i < 3; i++) {
+      const value = (hash >> (i * 8)) & 0xFF;
+      colour += ("00" + value.toString(16)).substr(-2);
+    }
+    return colour;
+  };
+
+  const times = useMemo(() => {
+    const allTimes = data.map(d => d.values.map(v => v[0])).flat();
+    const start = Math.min(...allTimes);
+    const end = Math.max(...allTimes);
+    const output = [];
+    for (let i = start; i < end; i += period.step || 1) {
+      output.push(i);
+    }
+    return output;
+  }, [data]);
+
+  useEffect(() => {
+    const values = data.map(d => times.map(t => {
+      const v = d.values.find(v => v[0] === t);
+      return v ? +v[1] : null;
+    }));
+    const seriesValues = data.map(d => ({
+      label: getNameForMetric(d),
+      width: 1,
+      font: "11px Arial",
+      stroke: getColorByName(getNameForMetric(d))}));
+    setSeries([{}, ...seriesValues]);
+    setDataChart([times, ...values]);
+  }, [data]);
+
+  const onReadyChart = (u: uPlot) => {
+    const factor = 0.85;
+
+    // wheel drag pan
+    u.over.addEventListener("mousedown", e => {
+      if (e.button !== 0) return;
+      e.preventDefault();
+      const left0 = e.clientX;
+      const scXMin0 = u.scales.x.min || 1;
+      const scXMax0 = u.scales.x.max || 1;
+      const xUnitsPerPx = u.posToVal(1, "x") - u.posToVal(0, "x");
+
+      const onmove = (e: MouseEvent) => {
+        e.preventDefault();
+        const dx = xUnitsPerPx * (e.clientX - left0);
+        const min = scXMin0 - dx;
+        const max = scXMax0 - dx;
+        u.setScale("x", {min, max});
+        setScale({min, max});
+      };
+
+      const onup = () => {
+        document.removeEventListener("mousemove", onmove);
+        document.removeEventListener("mouseup", onup);
+      };
+
+      document.addEventListener("mousemove", onmove);
+      document.addEventListener("mouseup", onup);
+    });
+
+    // wheel scroll zoom
+    u.over.addEventListener("wheel", e => {
+      if (!e.ctrlKey && !e.metaKey) return;
+      e.preventDefault();
+      const {width} = u.over.getBoundingClientRect();
+      const {left = width/2} = u.cursor;
+      const leftPct = left/width;
+      const xVal = u.posToVal(left, "x");
+      const oxRange = (u.scales.x.max || 0) - (u.scales.x.min || 0);
+      const nxRange = e.deltaY < 0 ? oxRange * factor : oxRange / factor;
+      const min = xVal - leftPct * nxRange;
+      const max = min + nxRange;
+      u.batch(() => {
+        u.setScale("x", {min, max});
+        setScale({min, max});
+      });
+    });
+  };
+
+  useEffect(() => {setScale({min: period.start, max: period.end});}, [period]);
+
+  useEffect(() => {
+    const duration = (period.end - period.start)/3;
+    const factor = duration / (scale.max - scale.min);
+    if (scale.max > period.end + duration || scale.min < period.start - duration || factor >= 0.7) {
+      dispatch({type: "SET_PERIOD", payload: {from: new Date(scale.min * 1000), to: new Date(scale.max * 1000)}});
+    }
+  }, [scale]);
+
+  const options: uPlotOptions = {
+    width: refContainer.current ? refContainer.current.offsetWidth : 400,
+    height: 500,
+    series: series,
+    plugins: [{
+      hooks: {
+        ready: onReadyChart
+      }
+    }],
+    cursor: {drag: {x: false, y: false}},
+    axes: [
+      {space: 80},
+      {
+        show: true,
+        font: "10px Arial",
+        values: (self, ticks) => ticks.map(n => n > 1000 ? numeral(n).format("0.0a") : n)
+      }
+    ],
+    scales: {x: {range: () => [scale.min, scale.max]}}
+  };
+
+  return <div ref={refContainer}>
+    {dataChart && <UplotReact
+      options={options}
+      data={dataChart}
+    />}
+  </div>;
+};
+
+export default LineChart;
11
app/vmui/packages/vmui/src/components/LineChart/legend.css
Normal file
|
@ -0,0 +1,11 @@
|
|||
.uplot .u-legend {
|
||||
display: grid;
|
||||
align-items: center;
|
||||
justify-content: start;
|
||||
text-align: left;
|
||||
margin-top: 25px;
|
||||
}
|
||||
|
||||
.uplot .u-legend .u-series {
|
||||
font-size: 12px;
|
||||
}
|
|
@ -1,15 +0,0 @@
|
|||
.line {
|
||||
fill: none;
|
||||
stroke-width: 2;
|
||||
}
|
||||
|
||||
.overlay {
|
||||
fill: none;
|
||||
pointer-events: all;
|
||||
}
|
||||
|
||||
/* Style the dots by assigning a fill and stroke */
|
||||
.dot {
|
||||
fill: #621773;
|
||||
stroke: #fff;
|
||||
}
|
|
@ -1,5 +0,0 @@
|
|||
export type AggregatedDataSet = {
|
||||
key: number;
|
||||
value: aggregatedDataValue;
|
||||
};
|
||||
export type aggregatedDataValue = {[key: string]: number};
|
|
@ -1,6 +1,12 @@
|
|||
import {DisplayType} from "../../components/Home/Configurator/DisplayTypeSwitch";
|
||||
import {TimeParams, TimePeriod} from "../../types";
|
||||
import {dateFromSeconds, getDurationFromPeriod, getTimeperiodForDuration} from "../../utils/time";
|
||||
import {
|
||||
dateFromSeconds,
|
||||
formatDateToLocal,
|
||||
getDateNowUTC,
|
||||
getDurationFromPeriod,
|
||||
getTimeperiodForDuration
|
||||
} from "../../utils/time";
|
||||
import {getFromStorage} from "../../utils/storage";
|
||||
import {getDefaultServer} from "../../utils/default-server-url";
|
||||
import {getQueryStringValue} from "../../utils/query-string";
|
||||
|
@ -33,16 +39,16 @@ export type Action =
|
|||
| { type: "TOGGLE_AUTOREFRESH"}
|
||||
| { type: "TOGGLE_AUTOCOMPLETE"}
|
||||
|
||||
const duration = getQueryStringValue("g0.range_input", "1h");
|
||||
const endInput = getQueryStringValue("g0.end_input", undefined);
|
||||
const duration = getQueryStringValue("g0.range_input", "1h") as string;
|
||||
const endInput = formatDateToLocal(getQueryStringValue("g0.end_input", getDateNowUTC()) as Date);
|
||||
|
||||
export const initialState: AppState = {
|
||||
serverUrl: getDefaultServer(),
|
||||
displayType: "chart",
|
||||
query: getQueryStringValue("g0.expr", getFromStorage("LAST_QUERY") as string || "\n"), // demo_memory_usage_bytes
|
||||
query: getQueryStringValue("g0.expr", getFromStorage("LAST_QUERY") as string || "\n") as string, // demo_memory_usage_bytes
|
||||
time: {
|
||||
duration,
|
||||
period: getTimeperiodForDuration(duration, endInput && new Date(endInput))
|
||||
period: getTimeperiodForDuration(duration, new Date(endInput))
|
||||
},
|
||||
queryControls: {
|
||||
autoRefresh: false,
|
||||
|
|
|
@ -0,0 +1,4 @@
|
|||
import {Chart} from "chart.js";
|
||||
import zoomPlugin from "chartjs-plugin-zoom";
|
||||
|
||||
Chart.register(zoomPlugin);
|
|
@ -53,9 +53,9 @@ export const setQueryStringValue = (newValue: Record<string, unknown>): void =>
|
|||
|
||||
export const getQueryStringValue = (
|
||||
key: string,
|
||||
defaultValue?: any,
|
||||
defaultValue?: unknown,
|
||||
queryString = window.location.search
|
||||
) => {
|
||||
): unknown => {
|
||||
const values = qs.parse(queryString, { ignoreQueryPrefix: true });
|
||||
return get(values, key, defaultValue || "");
|
||||
};
|
||||
|
|
|
@ -2,11 +2,17 @@ import {TimeParams, TimePeriod} from "../types";
|
|||
|
||||
import dayjs, {UnitTypeShort} from "dayjs";
|
||||
import duration from "dayjs/plugin/duration";
|
||||
import utc from "dayjs/plugin/utc";
|
||||
|
||||
dayjs.extend(duration);
|
||||
dayjs.extend(utc);
|
||||
|
||||
const MAX_ITEMS_PER_CHART = window.screen.availWidth / 2;
|
||||
|
||||
export const limitsDurations = {min: 1000, max: 1.578e+11}; // min: 1 second, max: 5 years
|
||||
|
||||
export const dateIsoFormat = "YYYY-MM-DD[T]HH:mm:ss";
|
||||
|
||||
export const supportedDurations = [
|
||||
{long: "days", short: "d", possible: "day"},
|
||||
{long: "weeks", short: "w", possible: "week"},
|
||||
|
@ -57,20 +63,45 @@ export const getTimeperiodForDuration = (dur: string, date?: Date): TimeParams =
|
|||
start: n - delta,
|
||||
end: n,
|
||||
step: step,
|
||||
date: formatDateForNativeInput((date || new Date()))
|
||||
date: formatDateToUTC(date || new Date())
|
||||
};
|
||||
};
|
||||
|
||||
export const formatDateForNativeInput = (date: Date): string => dayjs(date).format("YYYY-MM-DD[T]HH:mm:ss");
|
||||
export const formatDateToLocal = (date: Date): string => dayjs(date).utcOffset(0, true).local().format(dateIsoFormat);
|
||||
export const formatDateToUTC = (date: Date): string => dayjs(date).utc().format(dateIsoFormat);
|
||||
export const formatDateForNativeInput = (date: Date): string => dayjs(date).format(dateIsoFormat);
|
||||
|
||||
export const getDateNowUTC = (): Date => new Date(dayjs().utc().format(dateIsoFormat));
|
||||
|
||||
const getDurationFromMilliseconds = (ms: number): string => {
|
||||
const milliseconds = Math.floor(ms % 1000);
|
||||
const seconds = Math.floor((ms / 1000) % 60);
|
||||
const minutes = Math.floor((ms / 1000 / 60) % 60);
|
||||
const hours = Math.floor((ms / 1000 / 3600 ) % 24);
|
||||
const days = Math.floor(ms / (1000 * 60 * 60 * 24));
|
||||
const durs: UnitTypeShort[] = ["d", "h", "m", "s", "ms"];
|
||||
const values = [days, hours, minutes, seconds, milliseconds].map((t, i) => t ? `${t}${durs[i]}` : "");
|
||||
return values.filter(t => t).join(" ");
|
||||
};
|
||||
|
||||
export const getDurationFromPeriod = (p: TimePeriod): string => {
|
||||
const dur = dayjs.duration(p.to.valueOf() - p.from.valueOf());
|
||||
const durs: UnitTypeShort[] = ["d", "h", "m", "s"];
|
||||
return durs
|
||||
.map(d => ({val: dur.get(d), str: d}))
|
||||
.filter(obj => obj.val !== 0)
|
||||
.map(obj => `${obj.val}${obj.str}`)
|
||||
.join(" ");
|
||||
const ms = p.to.valueOf() - p.from.valueOf();
|
||||
return getDurationFromMilliseconds(ms);
|
||||
};
|
||||
|
||||
export const checkDurationLimit = (dur: string): string => {
|
||||
const durItems = dur.trim().split(" ");
|
||||
|
||||
const durObject = durItems.reduce((prev, curr) => {
|
||||
const dur = isSupportedDuration(curr);
|
||||
return dur ? {...prev, ...dur} : {...prev};
|
||||
}, {});
|
||||
|
||||
const delta = dayjs.duration(durObject).asMilliseconds();
|
||||
|
||||
if (delta < limitsDurations.min) return getDurationFromMilliseconds(limitsDurations.min);
|
||||
if (delta > limitsDurations.max) return getDurationFromMilliseconds(limitsDurations.max);
|
||||
return dur;
|
||||
};
|
||||
|
||||
export const dateFromSeconds = (epochTimeInSeconds: number): Date =>
|
||||
|
|
|
@ -4,7 +4,7 @@ DOCKER_NAMESPACE := victoriametrics
|
|||
|
||||
ROOT_IMAGE ?= alpine:3.14.2
|
||||
CERTS_IMAGE := alpine:3.14.2
|
||||
GO_BUILDER_IMAGE := golang:1.17.1
|
||||
GO_BUILDER_IMAGE := golang:1.17.1-alpine
|
||||
BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr : _)
|
||||
BASE_IMAGE := local/base:1.1.3-$(shell echo $(ROOT_IMAGE) | tr : _)-$(shell echo $(CERTS_IMAGE) | tr : _)
|
||||
|
||||
|
@ -36,7 +36,7 @@ app-via-docker: package-builder
|
|||
$(BUILDER_IMAGE) \
|
||||
go build $(RACE) -mod=vendor -trimpath \
|
||||
-ldflags "-extldflags '-static' $(GO_BUILDINFO)" \
|
||||
-tags 'netgo osusergo nethttpomithttp2' \
|
||||
-tags 'netgo osusergo nethttpomithttp2 musl' \
|
||||
-o bin/$(APP_NAME)$(APP_SUFFIX)-prod $(PKG_PREFIX)/app/$(APP_NAME)
|
||||
|
||||
app-via-docker-windows: package-builder
|
||||
|
|
|
@ -1,3 +1,4 @@
|
|||
ARG go_builder_image
|
||||
FROM $go_builder_image
|
||||
STOPSIGNAL SIGINT
|
||||
RUN apk add gcc musl-dev make --no-cache
|
|
@ -1,66 +1,65 @@
|
|||
---
|
||||
sort: 19
|
||||
---
|
||||
|
||||
# VM best practices
|
||||
|
||||
VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and time series database. It can be used as long-term remote storage for Prometheus, which allows gathering metrics from different systems and storing them in a single location or separating them by purpose (short-term, long-term, responsibility zones, etc.).
|
||||
|
||||
## Install Recommendation
|
||||
There is no need to tune VictoriaMetrics because it uses reasonable defaults for command-line flags. These flags are automatically adjusted for the available CPU and RAM resources. There is no need for Operating System tuning because VictoriaMetrics is optimized for default OS settings. The only option is to increase the limit on the [number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a), so Prometheus instances can establish more connections to VictoriaMetrics (65535 is a standard production value).
|
||||
## Filesystem Considerations
|
||||
|
||||
The recommended filesystem is ext4. If you plan to store more than 1TB of data on ext4 partition or plan to extend it to more than 16TB, then the following options are recommended to pass to mkfs.ext4:
|
||||
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
|
||||
|
||||
## Operating System
|
||||
When configuring VictoriaMetrics, the best practice is to use the latest Ubuntu OS version.
|
||||
|
||||
## VictoriaMetrics Versions
|
||||
Always update VictoriaMetrics instances in the environment to avoid version and build mismatch that will result in differences in performance and operational features. It is strongly recommended that you keep VictoriaMetrics in the environment up-to-date and install all VictoriaMetrics updates as soon as they are available. The best place to find the most recent updates as soon as they are available is to follow [this link](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
|
||||
|
||||
## Upgrade
|
||||
It is safe to upgrade VictoriaMetrics to new versions unless the [release notes](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) say otherwise. It is safe to skip multiple versions during the upgrade unless release notes say otherwise. It is recommended to perform regular upgrades to the latest version, since it may contain important bug fixes, performance optimizations or new features.
|
||||
It is also safe to downgrade to the previous version unless release notes say otherwise.
|
||||
The following steps must be performed during the upgrade / downgrade process:
|
||||
|
||||
* Send SIGINT signal to VictoriaMetrics process so that it is stopped gracefully.
|
||||
* Wait until the process stops. This can take a few seconds.
|
||||
* Start the upgraded VictoriaMetrics.
|
||||
|
||||
Prometheus doesn't drop data during the VictoriaMetrics restart. See [this article](https://grafana.com/blog/2019/03/25/whats-new-in-prometheus-2.8-wal-based-remote-write/) for details.
|
||||
|
||||
## Security
|
||||
Do not forget to protect sensitive endpoints in VictoriaMetrics when exposing them to untrusted networks such as the internet. Please consider setting the following command-line flags:
|
||||
|
||||
* `-tls`, `-tlsCertFile` and `-tlsKeyFile` for switching from HTTP to HTTPS.
|
||||
* `-httpAuth.username` and `-httpAuth.password` for protecting all the HTTP endpoints with [HTTP Basic Authentication](https://en.wikipedia.org/wiki/Basic_access_authentication).
|
||||
* `-deleteAuthKey` for protecting the `/api/v1/admin/tsdb/delete_series` endpoint. See how to [delete time series](https://docs.victoriametrics.com/#how-to-delete-time-series).
|
||||
* `-snapshotAuthKey` for protecting `/snapshot*` endpoints. See [how to work with snapshots](https://docs.victoriametrics.com/#how-to-work-with-snapshots).
|
||||
* `-forceMergeAuthKey` for protecting the `/internal/force_merge` endpoint. See [force merge docs](https://docs.victoriametrics.com/#forced-merge).
|
||||
* `-search.resetCacheAuthKey` for protecting the `/internal/resetRollupResultCache` endpoint. See [backfilling](https://docs.victoriametrics.com/vmbackup.html#backfilling) for more details.
|
||||
|
||||
Explicitly bind TCP and UDP ports used for data ingestion in Graphite and OpenTSDB formats to an internal network interface. For example, substitute `-graphiteListenAddr=:2003` with `-graphiteListenAddr=<internal_iface_ip>:2003`.
|
||||
It is preferable to authorize all incoming requests from untrusted networks with [vmauth](https://docs.victoriametrics.com/vmauth.html) or a similar auth proxy.
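For illustration, a minimal sketch of a hardened single-node startup combining these flags (the binary path, file paths and secrets below are placeholders, not recommendations):

```bash
/path/to/victoria-metrics-prod \
  -tls -tlsCertFile=/etc/victoriametrics/cert.pem -tlsKeyFile=/etc/victoriametrics/key.pem \
  -httpAuth.username=admin -httpAuth.password=change-me \
  -deleteAuthKey=delete-key -snapshotAuthKey=snapshot-key \
  -graphiteListenAddr=10.0.0.5:2003
```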
|
||||
|
||||
## Backup Recommendations
|
||||
VictoriaMetrics supports backups via [vmbackup](https://docs.victoriametrics.com/vmbackup.html) and [vmrestore](https://docs.victoriametrics.com/vmrestore.html) tools. We also provide the [vmbackupmanager tool](https://docs.victoriametrics.com/vmbackupmanager.html) in the [enterprise package](https://victoriametrics.com/enterprise.html).
|
||||
|
||||
## Networking
|
||||
Network usage: outbound traffic is negligible. Ingress traffic is ~100 bytes per ingested data point via [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). The actual ingress bandwidth usage depends on the average number of labels per ingested metric and the average size of label values. A higher number of per-metric labels and longer label values result in higher ingress bandwidth.
|
||||
|
||||
## Storage Considerations
|
||||
Storage space: VictoriaMetrics needs less than a byte per data point on average. So, ~260GB is required to store a month-long insert stream of 100K data points per second. The actual storage size depends largely on data randomness (entropy). Higher randomness means higher storage size requirements. Read [this article](https://medium.com/faun/victoriametrics-achieving-better-compression-for-time-series-data-than-gorilla-317bc1f95932) for details.
|
||||
|
||||
## RAM
|
||||
RAM size: VictoriaMetrics needs less than 1KB per [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-active-time-series). Therefore, ~1GB of RAM is required for 1M active time series. Time series are considered active if new data points have been added recently or if they have been recently queried. The number of active time series may be obtained from vm_cache_entries{type="storage/hour_metric_ids"} metric exported on the /metrics page. VictoriaMetrics stores various caches in RAM. Memory size for these caches may be limited with -memory.allowedPercent or -memory.allowedBytes flags.
|
||||
|
||||
## CPU
|
||||
CPU cores: VictoriaMetrics needs one CPU core per 300K inserted data points per second. So, ~4 CPU cores are required for processing the insert stream of 1M data points per second. The ingestion rate may be lower for [high cardinality data](https://docs.victoriametrics.com/FAQ.html#what-is-high-cardinality) or for time series with a high number of labels. See [this article](https://valyala.medium.com/insert-benchmarks-with-inch-influxdb-vs-victoriametrics-e31a41ae2893) for details. If you see lower numbers per CPU core, it is likely that the [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-active-time-series) info doesn't fit in your caches and you will need more RAM to lower CPU usage.
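As a rough worked example combining the figures above: an insert stream of 500K data points per second needs about 2 CPU cores (500K / 300K), roughly 2GB of RAM for 2M active time series, and on the order of 1.3TB of disk space per month (500K points/s × ~2.6M seconds/month × <1 byte/point). Actual numbers depend on data entropy and series churn.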
|
||||
|
||||
## Technical Support and Services
|
||||
If you have questions about installing or using this software, please check this and other documents first. Answers to the most frequently asked questions can be found on the Technical Papers webpage or in VictoriaMetrics community channels. If you need further assistance with VictoriaMetrics, please contact us at info@victoriametrics.com - we’ll be happy to help.
|
||||
|
||||
Following VictoriaMetrics best practices allows for the optimal configuration of our fast and scalable monitoring solution and time series database while minimizing or avoiding downtime or performance issues during installation and software usage. Our best practices also allow you to quickly troubleshoot any issues that might arise.
|
||||
|
||||
|
||||
---
|
||||
sort: 19
|
||||
---
|
||||
|
||||
# VictoriaMetrics best practices
|
||||
|
||||
|
||||
## Install Recommendation
|
||||
|
||||
It is recommended to run the latest available release of VictoriaMetrics from [this page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases), since it contains all the bugfixes and enhancements.
|
||||
|
||||
There is no need to tune VictoriaMetrics because it uses reasonable defaults for command-line flags. These flags are automatically adjusted for the available CPU and RAM resources. There is no need for Operating System tuning because VictoriaMetrics is optimized for default OS settings. The only option is to increase the limit on the [number of open files in the OS](https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a), so VictoriaMetrics can accept more incoming connections and keep more data files open.
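For example, a sketch of raising the limit for the current shell before starting the single-node binary (the persistent setting belongs in `/etc/security/limits.conf` or the systemd unit instead; the binary and data paths below are placeholders):

```bash
ulimit -n 65535
/path/to/victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics
```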
|
||||
|
||||
|
||||
## Filesystem
|
||||
|
||||
The recommended filesystem for VictoriaMetrics is [ext4](https://en.wikipedia.org/wiki/Ext4). If you plan to store more than 1TB of data on ext4 partition or plan to extend it to more than 16TB, then the following options are recommended to pass to mkfs.ext4:
|
||||
|
||||
```
|
||||
mkfs.ext4 ... -O 64bit,huge_file,extent -T huge
|
||||
```
|
||||
|
||||
VictoriaMetrics should work OK with other filesystems, including network filesystems such as [NFS](https://en.wikipedia.org/wiki/Network_File_System), [Amazon EFS](https://aws.amazon.com/efs/) and [Google Filestore](https://cloud.google.com/filestore).
|
||||
|
||||
|
||||
## Operating System
|
||||
|
||||
VictoriaMetrics is production-ready for the following operating systems:
|
||||
|
||||
* Linux (Alpine, Ubuntu, Debian, RedHat, etc.)
|
||||
* FreeBSD
|
||||
* OpenBSD
|
||||
|
||||
Some VictoriaMetrics components ([vmagent](https://docs.victoriametrics.com/vmagent.html), [vmalert](https://docs.victoriametrics.com/vmalert.html) and [vmauth](https://docs.victoriametrics.com/vmauth.html)) can run on Windows.
|
||||
|
||||
VictoriaMetrics can also run on MacOS for testing and development purposes.
|
||||
|
||||
|
||||
## Upgrade procedure
|
||||
|
||||
It is safe to upgrade VictoriaMetrics to new versions unless the [release notes](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) say otherwise. It is safe to skip multiple versions during the upgrade unless release notes say otherwise. It is recommended to perform regular upgrades to the latest version, since it may contain important bug fixes, performance optimizations or new features.
|
||||
|
||||
It is also safe to downgrade to the previous version unless release notes say otherwise.
|
||||
|
||||
The following steps must be performed during the upgrade / downgrade procedure:
|
||||
|
||||
* Send SIGINT signal to VictoriaMetrics process so that it is stopped gracefully.
|
||||
* Wait until the process stops. This can take a few seconds.
|
||||
* Start the upgraded VictoriaMetrics.
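A sketch of this sequence on Linux, assuming the process and binary names below:

```bash
kill -INT "$(pidof victoria-metrics-prod)"  # graceful stop
# wait until the process exits, then start the new binary
/path/to/new/victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics
```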
|
||||
|
||||
|
||||
## Backup Recommendations
|
||||
|
||||
VictoriaMetrics supports backups via [vmbackup](https://docs.victoriametrics.com/vmbackup.html) and [vmrestore](https://docs.victoriametrics.com/vmrestore.html) tools. There is also [vmbackupmanager](https://docs.victoriametrics.com/vmbackupmanager.html), which simplifies backup automation.
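For example, a sketch of a one-off backup with `vmbackup`, assuming a snapshot was already created via the `/snapshot/create` endpoint and using a local filesystem destination:

```bash
vmbackup -storageDataPath=/var/lib/victoria-metrics \
  -snapshotName=<snapshot-name> \
  -dst=fs:///backups/victoria-metrics/latest
```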
|
||||
|
||||
|
||||
## Technical Support and Services
|
||||
|
||||
Technical support for VictoriaMetrics is provided via the following channels:
|
||||
|
||||
* [Github issues](https://github.com/VictoriaMetrics/VictoriaMetrics/issues)
|
||||
* [Slack channel](https://slack.victoriametrics.com/)
|
||||
* [Telegram channel](https://t.me/VictoriaMetrics_en)
|
||||
|
||||
We also provide [Enterprise support](https://victoriametrics.com/enterprise.html).
|
||||
|
|
|
@ -7,6 +7,22 @@ sort: 15
|
|||
## tip
|
||||
|
||||
|
||||
## [v1.67.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.67.0)
|
||||
|
||||
* FEATURE: add ability to accept metrics from [DataDog agent](https://docs.datadoghq.com/agent/) and [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/). See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent). This option simplifies the migration path from DataDog to VictoriaMetrics. See also [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/206).
|
||||
* FEATURE: vmagent [enterprise](https://victoriametrics.com/enterprise.html): add support for data reading and writing from/to [Apache Kafka](https://kafka.apache.org/). See [these docs](https://docs.victoriametrics.com/vmagent.html#kafka-integration).
|
||||
* FEATURE: vmui: switch to [μPlot](https://github.com/leeoniya/uPlot) and add ability to naturally scroll and zoom graphs. See [these docs](https://docs.victoriametrics.com/#vmui). Thanks to @Loori-R.
|
||||
* FEATURE: vmstorage: stop accepting new data if `-storageDataPath` directory contains less than `-storage.minFreeDiskSpaceBytes` of free space. This should prevent `out of disk space` crashes. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/269).
|
||||
* FEATURE: calculate quantiles in the same way as Prometheus does in such functions as [quantile_over_time](https://docs.victoriametrics.com/MetricsQL.html#quantile_over_time) and [quantile](https://docs.victoriametrics.com/MetricsQL.html#quantile). Previously results from VictoriaMetrics could be slightly different from results from Prometheus. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1625) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1612).
|
||||
* FEATURE: add `rollup_scrape_interval(m[d])` function to [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html), which returns `min`, `max` and `avg` values for the interval between samples for `m` on the given lookbehind window `d`.
|
||||
* FEATURE: add `topk_last(k, q)` and `bottomk_last(k, q)` functions to [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html), which return up to `k` time series from `q` with the maximum / minimum last value on the graph.
|
||||
|
||||
* BUGFIX: align behavior of the queries `a or on (labels) b`, `a and on (labels) b` and `a unless on (labels) b` where `b` has multiple time series with the given `labels` to Prometheus behavior. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1643).
|
||||
* BUGFIX: vmagent: fix `openstack_sd_config` service discovery when both `domain_name` and `project_id` config options are set. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1655).
|
||||
* BUGFIX: return proper values (zeroes) from [stddev_over_time](https://docs.victoriametrics.com/MetricsQL.html#stddev_over_time) and [stdvar_over_time](https://docs.victoriametrics.com/MetricsQL.html#stdvar_over_time) functions when the lookbehind window in square brackets contains only a single sample. Previously the sample value was incorrectly returned in this case.
|
||||
* BUGFIX: vminsert: fix uneven distribution of time series among storage nodes in [multi-level cluster setup](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multi-level-cluster-setup). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1672).
|
||||
|
||||
|
||||
## [v1.66.2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.66.2)
|
||||
|
||||
* FEATURE: vmagent: add `vm_promscrape_max_scrape_size_exceeded_errors_total` metric for counting of the failed scrapes due to the exceeded response size (the response size limit can be configured via `-promscrape.maxScrapeSize` command-line flag). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1639).
|
||||
|
|
|
@ -183,6 +183,10 @@ or [an alternative dashboard for VictoriaMetrics cluster](https://grafana.com/gr
|
|||
|
||||
It is recommended to set up alerts in [vmalert](https://docs.victoriametrics.com/vmalert.html) or in Prometheus from [this config](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/cluster/deployment/docker/alerts.yml).
|
||||
|
||||
## Readonly mode
|
||||
|
||||
`vmstorage` nodes automatically switch to readonly mode when the directory pointed by `-storageDataPath` contains less than `-storage.minFreeDiskSpaceBytes` of free space. `vminsert` nodes stop sending data to such nodes and start re-routing the data to the remaining `vmstorage` nodes.
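For example, a sketch of launching `vmstorage` with a 10GiB free-space threshold (paths are placeholders):

```bash
/path/to/vmstorage-prod -storageDataPath=/var/lib/vmstorage \
  -storage.minFreeDiskSpaceBytes=10GiB
```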
|
||||
|
||||
|
||||
|
||||
## URL format
|
||||
|
@ -191,12 +195,11 @@ It is recommended setting up alerts in [vmalert](https://docs.victoriametrics.co
|
|||
- `<accountID>` is an arbitrary 32-bit integer identifying a namespace for data ingestion (aka tenant). It is possible to set it as `accountID:projectID`,
|
||||
where `projectID` is also an arbitrary 32-bit integer. If `projectID` isn't set, then it equals `0`.
|
||||
- `<suffix>` may have the following values:
|
||||
- `prometheus` and `prometheus/api/v1/write` - for inserting data with [Prometheus remote write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write)
|
||||
- `influx/write` and `influx/api/v2/write` - for inserting data with [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/).
|
||||
- `opentsdb/api/put` - for accepting [OpenTSDB HTTP /api/put requests](http://opentsdb.net/docs/build/html/api_http/put.html).
|
||||
This handler is disabled by default. It is exposed on a distinct TCP address set via `-opentsdbHTTPListenAddr` command-line flag.
|
||||
See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#sending-opentsdb-data-via-http-apiput-requests) for details.
|
||||
- `prometheus/api/v1/import` - for importing data obtained via `api/v1/export` on `vmselect` (see below).
|
||||
- `prometheus` and `prometheus/api/v1/write` - for inserting data with [Prometheus remote write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write).
|
||||
- `datadog/api/v1/series` - for inserting data with [DataDog submit metrics API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics). See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent) for details.
|
||||
- `influx/write` and `influx/api/v2/write` - for inserting data with [InfluxDB line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/). See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for details.
|
||||
- `opentsdb/api/put` - for accepting [OpenTSDB HTTP /api/put requests](http://opentsdb.net/docs/build/html/api_http/put.html). This handler is disabled by default. It is exposed on a distinct TCP address set via `-opentsdbHTTPListenAddr` command-line flag. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#sending-opentsdb-data-via-http-apiput-requests) for details.
|
||||
- `prometheus/api/v1/import` - for importing data obtained via `api/v1/export` at `vmselect` (see below).
|
||||
- `prometheus/api/v1/import/native` - for importing data obtained via `api/v1/export/native` on `vmselect` (see below).
|
||||
- `prometheus/api/v1/import/csv` - for importing arbitrary CSV data. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-csv-data) for details.
|
||||
- `prometheus/api/v1/import/prometheus` - for importing data in [Prometheus text exposition format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) and in [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md). See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-data-in-prometheus-exposition-format) for details.
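For example, a sketch of writing a single InfluxDB line into tenant `0` via `vminsert` and exporting it back via `vmselect`, assuming the default HTTP ports `8480` and `8481`:

```bash
curl -d 'foo,tag1=value1 field1=123' http://vminsert:8480/insert/0/influx/write
curl http://vmselect:8481/select/0/prometheus/api/v1/export -d 'match[]=foo_field1'
```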
|
||||
|
@ -470,6 +473,9 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
TCP address to listen for data from other vminsert nodes in multi-level cluster setup. See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#multi-level-cluster-setup . Usually :8400 must be set. Doesn't work if empty
|
||||
-csvTrimTimestamp duration
|
||||
Trim timestamps when importing csv data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
|
||||
-datadog.maxInsertRequestSize size
|
||||
The maximum size in bytes of a single DataDog POST request to /api/v1/series
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 67108864)
|
||||
-disableRerouting
|
||||
Whether to disable re-routing when some of vmstorage nodes accept incoming data at slower speed compared to other storage nodes. Disabled re-routing limits the ingestion rate by the slowest vmstorage node. On the other side, disabled re-routing minimizes the number of active time series in the cluster during rolling restarts and during spikes in series churn rate (default true)
|
||||
-enableTCP6
|
||||
|
@ -557,7 +563,7 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
-opentsdbhttpTrimTimestamp duration
|
||||
Trim timestamps for OpenTSDB HTTP data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
|
||||
-relabelConfig string
|
||||
Optional path to a file with relabeling rules, which are applied to all the ingested metrics. See https://docs.victoriametrics.com/#relabeling for details
|
||||
Optional path to a file with relabeling rules, which are applied to all the ingested metrics. See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal
|
||||
-relabelDebug
|
||||
Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs
|
||||
-replicationFactor int
|
||||
|
@ -779,6 +785,9 @@ Below is the output for `/path/to/vmstorage -help`:
|
|||
The maximum number of unique series can be added to the storage during the last 24 hours. Excess series are logged and dropped. This can be useful for limiting series churn rate. See also -storage.maxHourlySeries
|
||||
-storage.maxHourlySeries int
|
||||
The maximum number of unique series can be added to the storage during the last hour. Excess series are logged and dropped. This can be useful for limiting series cardinality. See also -storage.maxDailySeries
|
||||
-storage.minFreeDiskSpaceBytes size
|
||||
The minimum free disk space at -storageDataPath after which the storage stops accepting new data
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 10000000)
|
||||
-storageDataPath string
|
||||
Path to storage data (default "vmstorage-data")
|
||||
-tls
|
||||
|
|
|
@ -274,9 +274,13 @@ See also [implicit query conversions](#implicit-query-conversions).
|
|||
|
||||
`rollup_rate(series_selector[d])` calculates per-second change rates for adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated per-second change rates. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups.
|
||||
|
||||
#### rollup_scrape_interval
|
||||
|
||||
`rollup_scrape_interval(series_selector[d])` calculates the interval in seconds between adjacent raw samples on the given lookbehind window `d` and returns `min`, `max` and `avg` values for the calculated interval. The calculations are performed individually per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. See also [scrape_interval](#scrape_interval).
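For example, a sketch of evaluating it via the instant query endpoint, assuming a single-node instance at `localhost:8428` with an existing `up` series:

```bash
curl http://localhost:8428/api/v1/query -d 'query=rollup_scrape_interval(up[5m])'
```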
|
||||
|
||||
#### scrape_interval
|
||||
|
||||
`scrape_interval(series_selector[d])` calculates the average interval in seconds between raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups.
|
||||
`scrape_interval(series_selector[d])` calculates the average interval in seconds between raw samples on the given lookbehind window `d` per each time series returned from the given [series_selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). Metric names are stripped from the resulting rollups. See also [rollup_scrape_interval](#rollup_scrape_interval).
|
||||
|
||||
#### share_gt_over_time
|
||||
|
||||
|
@ -714,19 +718,23 @@ See also [implicit query conversions](#implicit-query-conversions).
|
|||
|
||||
#### bottomk_avg
|
||||
|
||||
`bottomk_avg(k, q, "other_label=other_value")` returns up to `k` time series with the smallest averages. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `bottomk_avg(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the smallest averages plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [topk_avg](#topk_avg).
|
||||
`bottomk_avg(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the smallest averages. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `bottomk_avg(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the smallest averages plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [topk_avg](#topk_avg).
|
||||
|
||||
#### bottomk_last
|
||||
|
||||
`bottomk_last(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the smallest last values. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `bottomk_last(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the smallest last values plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [topk_last](#topk_last).
|
||||
|
||||
#### bottomk_max
|
||||
|
||||
`bottomk_max(k, q, "other_label=other_value")` returns up to `k` time series with the smallest maximums. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `bottomk_max(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the smallest maximums plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [topk_max](#topk_max).
|
||||
`bottomk_max(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the smallest maximums. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `bottomk_max(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the smallest maximums plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [topk_max](#topk_max).
|
||||
|
||||
#### bottomk_median
|
||||
|
||||
`bottomk_median(k, q, "other_label=other_value")` returns up to `k` time series with the smallest medians. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `bottomk_median(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the smallest medians plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [topk_median](#topk_median).
|
||||
`bottomk_median(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the smallest medians. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `bottomk_median(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the smallest medians plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [topk_median](#topk_median).
|
||||
|
||||
#### bottomk_min
|
||||
|
||||
`bottomk_min(k, q, "other_label=other_value")` returns up to `k` time series with the smallest minimums. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `bottomk_min(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the smallest minimums plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [topk_min](#topk_min).
|
||||
`bottomk_min(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the smallest minimums. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `bottomk_min(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the smallest minimums plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [topk_min](#topk_min).
|
||||
|
||||
#### count
|
||||
|
||||
|
@ -814,19 +822,23 @@ See also [implicit query conversions](#implicit-query-conversions).
|
|||
|
||||
#### topk_avg
|
||||
|
||||
`topk_avg(k, q, "other_label=other_value")` returns up to `k` time series with the biggest averages. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `topk_avg(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the biggest averages plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [bottomk_avg](#bottomk_avg).
|
||||
`topk_avg(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the biggest averages. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `topk_avg(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the biggest averages plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [bottomk_avg](#bottomk_avg).
|
||||
|
||||
#### topk_last
|
||||
|
||||
`topk_last(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the biggest last values. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `topk_last(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the biggest last values plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [bottomk_last](#bottomk_last).
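For example, a sketch of evaluating it via the instant query endpoint on a single-node instance (the metric name is taken from the example above):

```bash
curl http://localhost:8428/api/v1/query \
  -d 'query=topk_last(3, sum(process_resident_memory_bytes) by (job))'
```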
|
||||
|
||||
#### topk_max
|
||||
|
||||
`topk_max(k, q, "other_label=other_value")` returns up to `k` time series with the biggest maximums. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `topk_max(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the biggest amaximums plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [bottomk_max](#bottomk_max).
|
||||
`topk_max(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the biggest maximums. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `topk_max(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the biggest maximums plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [bottomk_max](#bottomk_max).
|
||||
|
||||
#### topk_median
|
||||
|
||||
`topk_median(k, q, "other_label=other_value")` returns up to `k` time series with the biggest medians. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `topk_median(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the biggest medians plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [bottomk_median](#bottomk_median).
|
||||
`topk_median(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the biggest medians. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `topk_median(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the biggest medians plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [bottomk_median](#bottomk_median).
|
||||
|
||||
#### topk_min
|
||||
|
||||
`topk_min(k, q, "other_label=other_value")` returns up to `k` time series with the biggest minimums. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `topk_min(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the biggest minimums plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [bottomk_min](#bottomk_min).
|
||||
`topk_min(k, q, "other_label=other_value")` returns up to `k` time series from `q` with the biggest minimums. If an optional `other_label=other_value` arg is set, then the sum of the remaining time series is returned with the given label. For example, `topk_min(3, sum(process_resident_memory_bytes) by (job), "job=other")` would return up to 3 time series with the biggest minimums plus a time series with `{job="other"}` label with the sum of the remaining series if any. See also [bottomk_min](#bottomk_min).
|
||||
|
||||
#### zscore
|
||||
|
||||
|
|
|
@ -265,6 +265,52 @@ VictoriaMetrics also supports [importing data in Prometheus exposition format](#
|
|||
See also [vmagent](https://docs.victoriametrics.com/vmagent.html), which can be used as drop-in replacement for Prometheus.
|
||||
|
||||
|
||||
## How to send data from DataDog agent
|
||||
|
||||
VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/) or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/) via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v1/series` path.
|
||||
|
||||
Run DataDog agent with the `DD_DD_URL=http://victoriametrics-host:8428/datadog` environment variable in order to write data to VictoriaMetrics at the `victoriametrics-host` host. Another option is to set the `dd_url` param in the [DataDog agent configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files/) to `http://victoriametrics-host:8428/datadog`.
|
||||
|
||||
An example of how to send data to VictoriaMetrics via the DataDog "submit metrics" API from the command line:
|
||||
|
||||
```bash
|
||||
echo '
|
||||
{
|
||||
"series": [
|
||||
{
|
||||
"host": "test.example.com",
|
||||
"interval": 20,
|
||||
"metric": "system.load.1",
|
||||
"points": [[
|
||||
0,
|
||||
0.5
|
||||
]],
|
||||
"tags": [
|
||||
"environment:test"
|
||||
],
|
||||
"type": "rate"
|
||||
}
|
||||
]
|
||||
}
|
||||
' | curl -X POST --data-binary @- http://localhost:8428/datadog/api/v1/series
|
||||
```
|
||||
|
||||
The imported data can be read via [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format):
|
||||
|
||||
```bash
|
||||
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
|
||||
```
|
||||
|
||||
This command should return the following output if everything is OK:
|
||||
|
||||
```
|
||||
{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
|
||||
```
|
||||
|
||||
Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
|
||||
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
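For example, a sketch reusing the payload shape from above with an `extra_label` query arg (note the quoting, so the shell doesn't mangle the URL):

```bash
echo '{"series":[{"host":"test.example.com","metric":"system.load.1","points":[[0,0.5]]}]}' | \
  curl -X POST --data-binary @- \
  'http://localhost:8428/datadog/api/v1/series?extra_label=foo=bar'
```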
|
||||
|
||||
|
||||
## How to send data from InfluxDB-compatible agents such as [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/)
|
||||
|
||||
Use `http://<victoriametric-addr>:8428` url instead of InfluxDB url in agents' configs.
|
||||
|
@ -479,7 +525,7 @@ By default, VictoriaMetrics returns time series for the last 5 minutes from `/ap
|
|||
|
||||
Additionally VictoriaMetrics provides the following handlers:
|
||||
|
||||
* `/vmui` - Basic Web UI
|
||||
* `/vmui` - Basic Web UI. See [these docs](#vmui).
|
||||
* `/api/v1/series/count` - returns the total number of time series in the database. Some notes:
|
||||
* the handler scans all the inverted index, so it can be slow if the database contains tens of millions of time series;
|
||||
* the handler may count [deleted time series](#how-to-delete-time-series) additionally to normal time series due to internal implementation restrictions;
|
||||
|
@ -550,6 +596,15 @@ VictoriaMetrics supports the following handlers from [Graphite Tags API](https:/
|
|||
* [/tags/delSeries](https://graphite.readthedocs.io/en/stable/tags.html#removing-series-from-the-tagdb)
|
||||
|
||||
|
||||
## vmui
|
||||
|
||||
VictoriaMetrics provides a UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
|
||||
The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
|
||||
|
||||
* Drag the graph to the left / right in order to move the displayed time range into the past / future.
|
||||
* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
|
||||
|
||||
|
||||
## How to build from sources
|
||||
|
||||
We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or
|
||||
|
@ -790,6 +845,7 @@ The exported CSV data can be imported to VictoriaMetrics via [/api/v1/import/csv
|
|||
Time series data can be imported via any supported ingestion protocol:
|
||||
|
||||
* [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). See [these docs](#prometheus-setup) for details.
|
||||
* DataDog `submit metrics` API. See [these docs](#how-to-send-data-from-datadog-agent) for details.
|
||||
* InfluxDB line protocol. See [these docs](#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for details.
|
||||
* Graphite plaintext protocol. See [these docs](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) for details.
|
||||
* OpenTSDB telnet put protocol. See [these docs](#sending-data-via-telnet-put-protocol) for details.
|
||||
|
@ -1650,7 +1706,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
The maximum size of scrape response in bytes to process from Prometheus targets. Bigger responses are rejected
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 16777216)
|
||||
-promscrape.noStaleMarkers
|
||||
Whether to disable seding Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. See also https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode
|
||||
Whether to disable sending Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. See also https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode
|
||||
-promscrape.openstackSDCheckInterval duration
|
||||
Interval for checking for changes in openstack API server. This works only if openstack_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config for details (default 30s)
|
||||
-promscrape.seriesLimitPerTarget int
|
||||
|
|
|
@ -269,6 +269,52 @@ VictoriaMetrics also supports [importing data in Prometheus exposition format](#
|
|||
See also [vmagent](https://docs.victoriametrics.com/vmagent.html), which can be used as drop-in replacement for Prometheus.
|
||||
|
||||
|
||||
## How to send data from DataDog agent
|
||||
|
||||
VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/agent/) or [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/) via ["submit metrics" API](https://docs.datadoghq.com/api/latest/metrics/#submit-metrics) at `/datadog/api/v1/series` path.
|
||||
|
||||
Run DataDog agent with the `DD_DD_URL=http://victoriametrics-host:8428/datadog` environment variable in order to write data to VictoriaMetrics at the `victoriametrics-host` host. Another option is to set the `dd_url` param in the [DataDog agent configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files/) to `http://victoriametrics-host:8428/datadog`.
|
||||
|
||||
An example of how to send data to VictoriaMetrics via the DataDog "submit metrics" API from the command line:
|
||||
|
||||
```bash
|
||||
echo '
|
||||
{
|
||||
"series": [
|
||||
{
|
||||
"host": "test.example.com",
|
||||
"interval": 20,
|
||||
"metric": "system.load.1",
|
||||
"points": [[
|
||||
0,
|
||||
0.5
|
||||
]],
|
||||
"tags": [
|
||||
"environment:test"
|
||||
],
|
||||
"type": "rate"
|
||||
}
|
||||
]
|
||||
}
|
||||
' | curl -X POST --data-binary @- http://localhost:8428/datadog/api/v1/series
|
||||
```
|
||||
|
||||
The imported data can be read via [export API](https://docs.victoriametrics.com/#how-to-export-data-in-json-line-format):
|
||||
|
||||
```bash
|
||||
curl http://localhost:8428/api/v1/export -d 'match[]=system.load.1'
|
||||
```
|
||||
|
||||
This command should return the following output if everything is OK:
|
||||
|
||||
```
|
||||
{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
|
||||
```
|
||||
|
||||
Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
|
||||
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
|
||||
|
||||
|
||||
## How to send data from InfluxDB-compatible agents such as [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/)
|
||||
|
||||
Use `http://<victoriametric-addr>:8428` url instead of InfluxDB url in agents' configs.
|
||||
|
@ -483,7 +529,7 @@ By default, VictoriaMetrics returns time series for the last 5 minutes from `/ap
|
|||
|
||||
Additionally VictoriaMetrics provides the following handlers:
|
||||
|
||||
* `/vmui` - Basic Web UI
|
||||
* `/vmui` - Basic Web UI. See [these docs](#vmui).
|
||||
* `/api/v1/series/count` - returns the total number of time series in the database. Some notes:
|
||||
* the handler scans all the inverted index, so it can be slow if the database contains tens of millions of time series;
|
||||
* the handler may count [deleted time series](#how-to-delete-time-series) additionally to normal time series due to internal implementation restrictions;
|
||||
|
@ -554,6 +600,15 @@ VictoriaMetrics supports the following handlers from [Graphite Tags API](https:/
|
|||
* [/tags/delSeries](https://graphite.readthedocs.io/en/stable/tags.html#removing-series-from-the-tagdb)
|
||||
|
||||
|
||||
## vmui
|
||||
|
||||
VictoriaMetrics provides a UI for query troubleshooting and exploration. The UI is available at `http://victoriametrics:8428/vmui`.
|
||||
The UI allows exploring query results via graphs and tables. Graphs support scrolling and zooming:
|
||||
|
||||
* Drag the graph to the left / right in order to move the displayed time range into the past / future.
|
||||
* Hold `Ctrl` (or `Cmd` on MacOS) and scroll up / down in order to zoom in / out the graph.
|
||||
|
||||
|
||||
## How to build from sources
|
||||
|
||||
We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or
|
||||
|
@ -794,6 +849,7 @@ The exported CSV data can be imported to VictoriaMetrics via [/api/v1/import/csv
|
|||
Time series data can be imported via any supported ingestion protocol:
|
||||
|
||||
* [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). See [these docs](#prometheus-setup) for details.
|
||||
* DataDog `submit metrics` API. See [these docs](#how-to-send-data-from-datadog-agent) for details.
|
||||
* InfluxDB line protocol. See [these docs](#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for details.
|
||||
* Graphite plaintext protocol. See [these docs](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) for details.
|
||||
* OpenTSDB telnet put protocol. See [these docs](#sending-data-via-telnet-put-protocol) for details.
|
||||
|
@ -1654,7 +1710,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
The maximum size of scrape response in bytes to process from Prometheus targets. Bigger responses are rejected
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 16777216)
|
||||
-promscrape.noStaleMarkers
|
||||
Whether to disable seding Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. See also https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode
|
||||
Whether to disable sending Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. See also https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode
|
||||
-promscrape.openstackSDCheckInterval duration
|
||||
Interval for checking for changes in openstack API server. This works only if openstack_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config for details (default 30s)
|
||||
-promscrape.seriesLimitPerTarget int
|
||||
|
|
|
@ -234,7 +234,7 @@ scrape_configs:
|
|||
```
|
||||
|
||||
* `remoteWriteUrls: - http://vmcluster-victoria-metrics-cluster-vminsert.default.svc.cluster.local:8480/insert/0/prometheus/` configures `vmagent` to write scraped metrics to the `vminsert` service.
|
||||
* The `metric_ralabel_configs` section allows you to process Kubernetes metrics for the Grafana dashboard.
|
||||
* The `metric_relabel_configs` section allows you to process Kubernetes metrics for the Grafana dashboard.
|
||||
|
||||
|
||||
Verify that `vmagent`'s pod is up and running by executing the following command:
|
||||
|
|
|
@ -415,7 +415,7 @@ config:
|
|||
```
|
||||
|
||||
* By adding `remoteWriteUrls: - http://vmcluster-victoria-metrics-cluster-vminsert.default.svc.cluster.local:8480/insert/0/prometheus/` we configure [vmagent](https://docs.victoriametrics.com/vmagent.html) to write scraped metrics to the `vminsert` service.
|
||||
* The second part of this yaml file is needed to add the `metric_ralabel_configs` section that helps us to show Kubernetes metrics on the Grafana dashboard.
|
||||
* The second part of this yaml file is needed to add the `metric_relabel_configs` section that helps us to show Kubernetes metrics on the Grafana dashboard.
|
||||
|
||||
|
||||
Verify that `vmagent`'s pod is up and running by executing the following command:
|
||||
|
@ -548,4 +548,4 @@ vmagent has its own dashboard:

* We set up a time series database for your Kubernetes cluster.
* We collected metrics from all running pods, nodes, … and stored them in a VictoriaMetrics database.
* We visualized resources used in the Kubernetes cluster by using Grafana dashboards.
@ -170,7 +170,7 @@ server:

* By running `helm install vmsingle vm/victoria-metrics-single` we install [VictoriaMetrics Single](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html) to the default [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) inside your cluster.
* By adding `scrape: enable: true` we enable autodiscovery scraping from the Kubernetes cluster for [VictoriaMetrics Single](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html).
* On line 166 of [https://docs.victoriametrics.com/guides/guide-vmsingle-values.yaml](https://docs.victoriametrics.com/guides/guide-vmsingle-values.yaml) we added the `metric_relabel_configs` section that will help us to show Kubernetes metrics on the Grafana dashboard.

As a result of the command you will see the following output:
108
docs/vmagent.md
@ -21,10 +21,12 @@ to `vmagent` such as the ability to push metrics instead of pulling them. We did

## Features

* Can be used as a drop-in replacement for Prometheus for scraping targets such as [node_exporter](https://github.com/prometheus/node_exporter). See [Quick Start](#quick-start) for details.
* Can read data from Kafka. See [these docs](#reading-metrics-from-kafka).
* Can write data to Kafka. See [these docs](#writing-metrics-to-kafka).
* Can add, remove and modify labels (aka tags) via Prometheus relabeling. Can filter data before sending it to remote storage. See [these docs](#relabeling) for details.
* Accepts data via all ingestion protocols supported by VictoriaMetrics:
  * DataDog "submit metrics" API. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-datadog-agent).
  * InfluxDB line protocol via `http://<vmagent>:8429/write`. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf).
  * Graphite plaintext protocol if `-graphiteListenAddr` command-line flag is set. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-graphite-compatible-agents-such-as-statsd).
  * OpenTSDB telnet and http protocols if `-opentsdbListenAddr` command-line flag is set. See [these docs](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-opentsdb-compatible-agents).
@ -524,6 +526,108 @@ It may be useful to perform `vmagent` rolling update without any scrape loss.

    regex: true
```
## Kafka integration

[Enterprise version](https://victoriametrics.com/enterprise.html) of `vmagent` can read and write metrics from / to Kafka:

* [Reading metrics from Kafka](#reading-metrics-from-kafka)
* [Writing metrics to Kafka](#writing-metrics-to-kafka)

The enterprise version of vmagent is available for evaluation on the [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) page in `vmutils-*-enterprise.tar.gz` archives and in [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags) with tags containing the `enterprise` suffix.
### Reading metrics from Kafka

[Enterprise version](https://victoriametrics.com/enterprise.html) of `vmagent` can read metrics in various formats from Kafka messages. These formats can be configured with the `-kafka.consumer.topic.defaultFormat` or `-kafka.consumer.topic.format` command-line options. The following formats are supported:

* `promremotewrite` - [Prometheus remote_write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). Messages in this format can be sent by vmagent - see [these docs](#writing-metrics-to-kafka).
* `influx` - [InfluxDB line protocol format](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_tutorial/).
* `prometheus` - [Prometheus text exposition format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format) and [OpenMetrics format](https://github.com/OpenObservability/OpenMetrics/blob/master/specification/OpenMetrics.md).
* `graphite` - [Graphite plaintext format](https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol).
* `jsonline` - [JSON line format](https://docs.victoriametrics.com/#how-to-import-data-in-json-line-format).

Every Kafka message may contain multiple lines in `influx`, `prometheus`, `graphite` and `jsonline` format delimited by `\n`.
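A hedged sketch of publishing such test data into a topic (assumes the [kcat](https://github.com/edenhill/kcat) CLI is installed; broker and topic names match the consume example further below):

```bash
# Publish two InfluxDB line protocol samples (one Kafka message per line)
printf 'cpu,host=a usage=0.5\nmem,host=a used=123\n' | \
  kcat -P -b localhost:9092 -t metrics-by-telegraf
```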
`vmagent` consumes messages from Kafka topics specified by the `-kafka.consumer.topic` command-line flag. Multiple topics can be specified by passing multiple `-kafka.consumer.topic` command-line flags to `vmagent`.

`vmagent` consumes messages from Kafka brokers specified by the `-kafka.consumer.topic.brokers` command-line flag. Multiple brokers can be specified for each `-kafka.consumer.topic` by passing a list of brokers delimited by `;`. For example, `-kafka.consumer.topic.brokers=host1:9092;host2:9092`.
The following command starts `vmagent`, which reads metrics in InfluxDB line protocol format from the Kafka broker at `localhost:9092` from the topic `metrics-by-telegraf` and sends them to remote storage at `http://localhost:8428/api/v1/write`:

```bash
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
  -kafka.consumer.topic.brokers=localhost:9092 \
  -kafka.consumer.topic.format=influx \
  -kafka.consumer.topic=metrics-by-telegraf \
  -kafka.consumer.topic.groupID=some-id
```
It is expected that [Telegraf](https://github.com/influxdata/telegraf) sends metrics to the `metrics-by-telegraf` topic with the following config:

```toml
[[outputs.kafka]]
  brokers = ["localhost:9092"]
  topic = "metrics-by-telegraf"
  data_format = "influx"
```
#### Command-line flags for Kafka consumer

These command-line flags are available only in the [enterprise](https://victoriametrics.com/enterprise.html) version of `vmagent`, which can be downloaded for evaluation from the [releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) page (see `vmutils-*-enterprise.tar.gz` archives) and from [docker images](https://hub.docker.com/r/victoriametrics/vmagent/tags) with tags containing the `enterprise` suffix.
```
-kafka.consumer.topic array
   Kafka topic names for data consumption.
   Supports an array of values separated by comma or specified via multiple flags.
-kafka.consumer.topic.basicAuth.password array
   Optional basic auth password for -kafka.consumer.topic. Must be used in conjunction with any supported auth methods for kafka client, specified by flag -kafka.consumer.topic.options='security.protocol=SASL_SSL;sasl.mechanisms=PLAIN'
   Supports an array of values separated by comma or specified via multiple flags.
-kafka.consumer.topic.basicAuth.username array
   Optional basic auth username for -kafka.consumer.topic. Must be used in conjunction with any supported auth methods for kafka client, specified by flag -kafka.consumer.topic.options='security.protocol=SASL_SSL;sasl.mechanisms=PLAIN'
   Supports an array of values separated by comma or specified via multiple flags.
-kafka.consumer.topic.brokers array
   List of brokers to connect for given topic, e.g. -kafka.consumer.topic.brokers=host-1:9092;host-2:9092
   Supports an array of values separated by comma or specified via multiple flags.
-kafka.consumer.topic.defaultFormat string
   Expected data format in the topic if -kafka.consumer.topic.format is skipped. (default "promremotewrite")
-kafka.consumer.topic.format array
   Data format for the corresponding kafka topic. Valid formats: influx, prometheus, promremotewrite, graphite, jsonline
   Supports an array of values separated by comma or specified via multiple flags.
-kafka.consumer.topic.groupID array
   Defines group.id for topic
   Supports an array of values separated by comma or specified via multiple flags.
-kafka.consumer.topic.isGzipped array
   Enables gzip setting for topic messages payload. Only prometheus, jsonline and influx formats accept gzipped messages.
   Supports an array of values separated by comma or specified via multiple flags.
-kafka.consumer.topic.options array
   Optional key=value;key1=value2 settings for topic consumer. See full configuration options at https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md.
   Supports an array of values separated by comma or specified via multiple flags.
```
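Since all the `-kafka.consumer.topic*` flags are arrays, per-topic values can be repeated. A hedged sketch of consuming two topics with different formats (the topic names, and the positional pairing of array-flag values, are assumptions based on the flag descriptions above):

```bash
# First topic carries InfluxDB lines, second carries Prometheus text format
./bin/vmagent -remoteWrite.url=http://localhost:8428/api/v1/write \
  -kafka.consumer.topic.brokers=localhost:9092 \
  -kafka.consumer.topic=metrics-influx -kafka.consumer.topic.format=influx \
  -kafka.consumer.topic=metrics-prom -kafka.consumer.topic.format=prometheus
```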
### Writing metrics to Kafka

[Enterprise version](https://victoriametrics.com/enterprise.html) of `vmagent` writes data to Kafka with `at-least-once` semantics if `-remoteWrite.url` points to a Kafka URL. For example, if `vmagent` is started with `-remoteWrite.url=kafka://localhost:9092/?topic=prom-rw`, then it would send Prometheus remote_write messages to the Kafka bootstrap server at `localhost:9092` with the topic `prom-rw`. These messages can be read later from Kafka by another `vmagent` - see [these docs](#reading-metrics-from-kafka) for details.

Additional Kafka options can be passed as query params to `-remoteWrite.url`. For instance, `kafka://localhost:9092/?topic=prom-rw&client.id=my-favorite-id` sets the `client.id` Kafka option to `my-favorite-id`. The full list of Kafka options is available [here](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).
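Putting it together, a hedged end-to-end sketch (the config path is an assumption; note the URL is quoted so the shell does not interpret `&`):

```bash
# Scrape targets from prometheus.yml and replicate the stream into Kafka
./bin/vmagent -promscrape.config=prometheus.yml \
  -remoteWrite.url='kafka://localhost:9092/?topic=prom-rw&client.id=my-favorite-id'
```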
#### Kafka broker authorization and authentication

Two types of auth are supported:

* sasl with username and password:

```bash
./bin/vmagent -remoteWrite.url='kafka://localhost:9092/?topic=prom-rw&security.protocol=SASL_SSL&sasl.mechanisms=PLAIN' \
  -remoteWrite.basicAuth.username=user -remoteWrite.basicAuth.password=password
```

* tls certificates:

```bash
./bin/vmagent -remoteWrite.url='kafka://localhost:9092/?topic=prom-rw&security.protocol=SSL' \
  -remoteWrite.tlsCAFile=/opt/ca.pem -remoteWrite.tlsCertFile=/opt/cert.pem -remoteWrite.tlsKeyFile=/opt/key.pem
```
## How to build from sources
@ -8,7 +8,7 @@ sort: 6

Supported storage systems for backups:

* [GCS](https://cloud.google.com/storage/). Example: `gs://<bucket>/<path/to/backup>`
* [S3](https://aws.amazon.com/s3/). Example: `s3://<bucket>/<path/to/backup>`
* Any S3-compatible storage such as [MinIO](https://github.com/minio/minio), [Ceph](https://docs.ceph.com/docs/mimic/radosgw/s3/) or [Swift](https://www.swiftstack.com/docs/admin/middleware/s3_middleware.html). See [these docs](#advanced-usage) for details.
* Local filesystem. Example: `fs://</absolute/path/to/backup>`
@ -34,7 +34,7 @@ creation of hourly, daily, weekly and monthly backups.

Regular backup can be performed with the following command:

```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gs://<bucket>/<path/to/new/backup>
```

* `</path/to/victoria-metrics-data>` - path to VictoriaMetrics data pointed by `-storageDataPath` command-line flag in single-node VictoriaMetrics or in cluster `vmstorage`.
@ -50,7 +50,7 @@ If the destination GCS bucket already contains the previous backup at `-origin`

with the following command:

```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gs://<bucket>/<path/to/new/backup> -origin=gs://<bucket>/<path/to/existing/backup>
```

It saves time and network bandwidth costs by performing server-side copy of the shared data from `-origin` to `-dst`.
@ -62,7 +62,7 @@ Incremental backups performed if `-dst` points to an already existing backup. In

It saves time and network bandwidth costs when working with big backups:

```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gs://<bucket>/<path/to/existing/backup>
```
@ -73,16 +73,16 @@ Smart backups mean storing full daily backups into `YYYYMMDD` folders and creati

* Run the following command every hour:

```
vmbackup -snapshotName=<latest-snapshot> -dst=gs://<bucket>/latest
```

Where `<latest-snapshot>` is the latest [snapshot](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-work-with-snapshots).
The command will upload only changed data to `gs://<bucket>/latest`.

* Run the following command once a day:

```
vmbackup -snapshotName=<daily-snapshot> -dst=gs://<bucket>/<YYYYMMDD> -origin=gs://<bucket>/latest
```

Where `<daily-snapshot>` is the snapshot for the last day `<YYYYMMDD>`.
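A hedged sketch of wiring the hourly step into cron (the script path and bucket name are assumptions, and `jq` is assumed to be installed for parsing the snapshot-creation response):

```bash
#!/bin/sh
# hourly-backup.sh - create a fresh snapshot, then upload changed data.
# Crontab entry (assumption): 0 * * * * /usr/local/bin/hourly-backup.sh
SNAPSHOT=$(curl -s http://localhost:8428/snapshot/create | jq -r .snapshot)
vmbackup -storageDataPath=/victoria-metrics-data \
  -snapshotName="$SNAPSHOT" -dst=gs://my-bucket/latest
```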
@ -187,7 +187,7 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-

  -customS3Endpoint string
     Custom S3 endpoint for use with S3-compatible storages (e.g. MinIO). S3 is used if not set
  -dst string
     Where to put the backup on the remote storage. Example: gs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir
     -dst can point to the previous backup. In this case incremental backup is performed, i.e. only changed data is uploaded
  -envflag.enable
     Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set. See https://docs.victoriametrics.com/#environment-variables for more details
@ -80,7 +80,7 @@ Backup manager launched with the following configuration:

```console
export NODE_IP=192.168.0.10
export VMSTORAGE_ENDPOINT=http://127.0.0.1:8428
./vmbackupmanager -dst=gs://vmstorage-data/$NODE_IP -credsFilePath=credentials.json -storageDataPath=/vmstorage-data -snapshot.createURL=$VMSTORAGE_ENDPOINT/snapshot/create -eula
```

Expected logs in vmbackupmanager:
@ -129,7 +129,7 @@ We enable backup retention policy for backup manager by using following configur

```console
export NODE_IP=192.168.0.10
export VMSTORAGE_ENDPOINT=http://127.0.0.1:8428
./vmbackupmanager -dst=gs://vmstorage-data/$NODE_IP -credsFilePath=credentials.json -storageDataPath=/vmstorage-data -snapshot.createURL=$VMSTORAGE_ENDPOINT/snapshot/create \
  -keepLastDaily=3 -eula
```
@ -16,7 +16,7 @@ when restarting `vmrestore` with the same args.

VictoriaMetrics must be stopped during the restore process.

```
vmrestore -src=gs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>
```
@ -120,7 +120,7 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q

  -skipBackupCompleteCheck
     Whether to skip checking for 'backup complete' file in -src. This may be useful for restoring from old backups, which were created without 'backup complete' file
  -src string
     Source path with backup on the remote storage. Example: gs://bucket/path/to/backup/dir, s3://bucket/path/to/backup/dir or fs:///path/to/local/backup/dir
  -storageDataPath string
     Destination path where backup must be restored. VictoriaMetrics must be stopped when restoring from backup. -storageDataPath dir can be non-empty. In this case the contents of -storageDataPath dir is synchronized with -src contents, i.e. it works like 'rsync --delete' (default "victoria-metrics-data")
  -version
32
go.mod
@ -1,46 +1,42 @@

module github.com/VictoriaMetrics/VictoriaMetrics

require (
	cloud.google.com/go v0.95.0 // indirect
	cloud.google.com/go/storage v1.16.1
	cloud.google.com/go v0.97.0 // indirect
	cloud.google.com/go/storage v1.17.0
	github.com/VictoriaMetrics/fastcache v1.7.0

	// Do not use the original github.com/valyala/fasthttp because of issues
	// like https://github.com/valyala/fasthttp/commit/996610f021ff45fdc98c2ce7884d5fa4e7f9199b
	github.com/VictoriaMetrics/fasthttp v1.1.0
	github.com/VictoriaMetrics/metrics v1.18.0
	github.com/VictoriaMetrics/metricsql v0.24.0
	github.com/VictoriaMetrics/metricsql v0.26.0
	github.com/VividCortex/ewma v1.2.0 // indirect
	github.com/aws/aws-sdk-go v1.40.47
	github.com/aws/aws-sdk-go v1.40.58
	github.com/cespare/xxhash/v2 v2.1.2
	github.com/cheggaaa/pb/v3 v3.0.8
	github.com/cpuguy83/go-md2man/v2 v2.0.1 // indirect
	github.com/fatih/color v1.13.0 // indirect
	github.com/go-kit/kit v0.11.0
	github.com/go-logfmt/logfmt v0.5.1 // indirect
	github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
	github.com/go-kit/kit v0.12.0
	github.com/golang/snappy v0.0.4
	github.com/googleapis/gax-go/v2 v2.1.1 // indirect
	github.com/influxdata/influxdb v1.9.4
	github.com/influxdata/influxdb v1.9.5
	github.com/klauspost/compress v1.13.6
	github.com/mattn/go-colorable v0.1.11 // indirect
	github.com/mattn/go-runewidth v0.0.13 // indirect
	github.com/oklog/ulid v1.3.1
	github.com/prometheus/common v0.30.0 // indirect
	github.com/prometheus/procfs v0.7.3 // indirect
	github.com/prometheus/common v0.31.1 // indirect
	github.com/prometheus/prometheus v1.8.2-0.20201119142752-3ad25a6dc3d9
	github.com/urfave/cli/v2 v2.3.0
	github.com/valyala/fastjson v1.6.3
	github.com/valyala/fastrand v1.1.0
	github.com/valyala/fasttemplate v1.2.1
	github.com/valyala/gozstd v1.13.0
	github.com/valyala/histogram v1.2.0
	github.com/valyala/quicktemplate v1.7.0
	go.uber.org/atomic v1.9.0 // indirect
	golang.org/x/net v0.0.0-20210917221730-978cfadd31cf
	golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f
	golang.org/x/sys v0.0.0-20210923061019-b8560ed6a9b7
	golang.org/x/text v0.3.7 // indirect
	google.golang.org/api v0.57.0
	golang.org/x/net v0.0.0-20211007125505-59d4e928ea9d
	golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1
	golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac
	google.golang.org/api v0.58.0
	google.golang.org/genproto v0.0.0-20211007155348-82e027067bd4 // indirect
	google.golang.org/grpc v1.41.0 // indirect
	gopkg.in/yaml.v2 v2.4.0
)
147
go.sum
@ -25,8 +25,8 @@ cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSU

cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY=
cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ=
cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI=
cloud.google.com/go v0.95.0 h1:JVWssQIj9cLwHmLjqWLptFa83o7HgqUictM6eyvGWJE=
cloud.google.com/go v0.95.0/go.mod h1:MzZUAH870Y7E+c14j23Ir66FC1+PK8WLG7OG4SjP+0k=
cloud.google.com/go v0.97.0 h1:3DXvAyifywvq64LfkKaMOmkWPS1CikIQdMe2lY9vxU8=
cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=

@ -46,14 +46,12 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo

cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.16.1 h1:sMEIc4wxvoY3NXG7Rn9iP7jb/2buJgWR1vNXCR/UPfs=
cloud.google.com/go/storage v1.16.1/go.mod h1:LaNorbty3ehnU3rEjXSNV/NRgQA0O8Y+uh6bPe5UOk4=
cloud.google.com/go/storage v1.17.0 h1:CDpe3jS3EiD5nGlbtvyA4EUfkF6k9GMrxLR8+hLmoec=
cloud.google.com/go/storage v1.17.0/go.mod h1:0wRtHSM3Npk/QJYdwcpRNVRVJlH2OxyWF9Dws3J+MtE=
collectd.org v0.3.0/go.mod h1:A/8DzQBkF6abtvrT2j/AU/4tiBgJWYyh0y/oB/4MlWE=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-pipeline-go v0.2.3/go.mod h1:x841ezTBIMG6O3lAcl8ATHnsOPVl2bqk7S3ta6S6u4k=
github.com/Azure/azure-sdk-for-go v41.3.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v48.2.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-storage-blob-go v0.13.0/go.mod h1:pA9kNqtjUeQF2zOSu4s//nUdBD+e64lEuc4sVnuOfNs=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=

@ -64,7 +62,6 @@ github.com/Azure/go-autorest/autorest v0.11.11/go.mod h1:eipySxLmqSyC5s5k1CLupqe

github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0=
github.com/Azure/go-autorest/autorest/adal v0.8.2/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q=
github.com/Azure/go-autorest/autorest/adal v0.8.3/go.mod h1:ZjhuQClTqx435SRJ2iMlOxPYt3d2C/T/7TiQCVZSn3Q=
github.com/Azure/go-autorest/autorest/adal v0.9.2/go.mod h1:/3SMAM86bP6wC9Ev35peQDUeqFZBMH07vvUOmg4z/fE=
github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/azure/auth v0.5.3/go.mod h1:4bJZhUhcq8LB20TruwHbAQsmUs2Xh+QR7utuJpLXX3A=
github.com/Azure/go-autorest/autorest/azure/cli v0.4.2/go.mod h1:7qkJkT+j6b+hIpzMOwPChJhTqS8VbsqqgULzMNRugoM=

@ -88,6 +85,7 @@ github.com/DATA-DOG/go-sqlmock v1.4.1/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q

github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/HdrHistogram/hdrhistogram-go v0.9.0/go.mod h1:nxrse8/Tzg2tg3DZcZjm6qEclQKK70g0KxO61gFFZD4=
github.com/HdrHistogram/hdrhistogram-go v1.1.0/go.mod h1:yDgFjdqOqDEKOvasDdhWNXYg9BVp4O+o5f6V/ehm6Oo=
github.com/HdrHistogram/hdrhistogram-go v1.1.2/go.mod h1:yDgFjdqOqDEKOvasDdhWNXYg9BVp4O+o5f6V/ehm6Oo=
github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
github.com/Masterminds/semver v1.4.2/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/sprig v2.16.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=

@ -109,8 +107,8 @@ github.com/VictoriaMetrics/fasthttp v1.1.0/go.mod h1:/7DMcogqd+aaD3G3Hg5kFgoFwlR

github.com/VictoriaMetrics/metrics v1.12.2/go.mod h1:Z1tSfPfngDn12bTfZSCqArT3OPY3u88J12hSoOhuiRE=
github.com/VictoriaMetrics/metrics v1.18.0 h1:vov5NxDHRSXFbdiH4dYLYEjKLoAXXSQ7hcnG8TSD9JQ=
github.com/VictoriaMetrics/metrics v1.18.0/go.mod h1:ArjwVz7WpgpegX/JpB0zpNF2h2232kErkEnzH1sxMmA=
github.com/VictoriaMetrics/metricsql v0.24.0 h1:1SOiuEaSgfS2CiQyCAlYQs3WPHzXNMPUSXtE1Zx6qDw=
github.com/VictoriaMetrics/metricsql v0.24.0/go.mod h1:ylO7YITho/Iw6P71oEaGyHbO94bGoGtzWfLGqFhMIg8=
github.com/VictoriaMetrics/metricsql v0.26.0 h1:lJBRn9vn9kst7hfNzSsQorulzNYQtX7JxWWWxh/udfI=
github.com/VictoriaMetrics/metricsql v0.26.0/go.mod h1:ylO7YITho/Iw6P71oEaGyHbO94bGoGtzWfLGqFhMIg8=
github.com/VividCortex/ewma v1.1.1/go.mod h1:2Tkkvm3sRDVXaiyucHiACn4cqf7DpdyLvmxzcbUokwA=
github.com/VividCortex/ewma v1.2.0 h1:f58SaIzcDXrSy3kWaHNvuJgJ3Nmz59Zji6XoJR/q1ow=
github.com/VividCortex/ewma v1.2.0/go.mod h1:nz4BbCtbLyFDeC9SUHbtcT5644juEuWfUAUnGx7j5l4=

@ -139,6 +137,7 @@ github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb

github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-metrics v0.3.3/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb4QAOwNTFc=
github.com/armon/go-metrics v0.3.9/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb4QAOwNTFc=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/aryann/difflib v0.0.0-20170710044230-e206f873d14a/go.mod h1:DAHtR1m6lCRdSC2Tm3DSWRPvIPr6xNKyeHdqDQSQT+A=

@ -153,25 +152,14 @@ github.com/aws/aws-sdk-go v1.29.16/go.mod h1:1KvfttTE3SPKMpo8g2c6jL3ZKfXtFvKscTg

github.com/aws/aws-sdk-go v1.30.12/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.38.68/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.40.47 h1:ZXayMFfPtODnSZRlMSmMYpiygac6PORNCPMjdGltcFg=
github.com/aws/aws-sdk-go v1.40.47/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
github.com/aws/aws-sdk-go v1.40.45/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
github.com/aws/aws-sdk-go v1.40.58 h1:SFa94nBsXyaS+cXluXlvqLwsQdeD7A/unJcWEld1xZ0=
github.com/aws/aws-sdk-go v1.40.58/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/aws/aws-sdk-go-v2 v1.3.2/go.mod h1:7OaACgj2SX3XGWnrIjGlJM22h6yD6MEWKvm7levnnM8=
github.com/aws/aws-sdk-go-v2 v1.7.0/go.mod h1:tb9wi5s61kTDA5qCkcDbt3KRVV74GGslQkl/DRdX/P4=
github.com/aws/aws-sdk-go-v2/config v1.1.5/go.mod h1:P3F1hku7qzC81txjwXnwOM6Ex6ezkU6+/557Teyb64E=
github.com/aws/aws-sdk-go-v2/credentials v1.1.5/go.mod h1:Ir1R6tPiR1/2y1hes8yOijFMz54hzSmgcmCDo6F45Qc=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.0.6/go.mod h1:0+fWMitrmIpENiY8/1DyhdYPUCAPvd9UNz9mtCsEoLQ=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.1.2/go.mod h1:Azf567f5wBUfUbwpyJJnLM/geFFIzEulGR30L+nQZOE=
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.5.0/go.mod h1:acH3+MQoiMzozT/ivU+DbRg7Ooo2298RdRaWcOv+4vM=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.0.4/go.mod h1:BCfU3Uo2fhKcMZFp9zU5QQGQxqWCOYmZ/27Dju3S/do=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.0.6/go.mod h1:L0KWr0ASo83PRZu9NaZaDsw3koS6PspKv137DMDZjHo=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.2.2/go.mod h1:nnutjMLuna0s3GVY/MAkpLX03thyNER06gXvnMAPj5g=
github.com/aws/aws-sdk-go-v2/service/s3 v1.5.0/go.mod h1:uwA7gs93Qcss43astPUb1eq4RyceNmYWAQjZFDOAMLo=
github.com/aws/aws-sdk-go-v2/service/sso v1.1.5/go.mod h1:bpGz0tidC4y39sZkQSkpO/J0tzWCMXHbw6FZ0j1GkWM=
github.com/aws/aws-sdk-go-v2/service/sts v1.2.2/go.mod h1:ssRzzJ2RZOVuKj2Vx1YE7ypfil/BIlgmQnCSW4DistU=
github.com/aws/smithy-go v1.3.1/go.mod h1:SObp3lf9smib00L/v3U2eAKG8FyQ7iLrJnQiAmR5n+E=
github.com/aws/smithy-go v1.5.0/go.mod h1:SObp3lf9smib00L/v3U2eAKG8FyQ7iLrJnQiAmR5n+E=
github.com/aws/aws-sdk-go-v2 v1.9.1/go.mod h1:cK/D0BBs0b/oWPIcX/Z/obahJK1TT7IPVjy53i/mX/4=
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.8.1/go.mod h1:CM+19rL1+4dFWnOQKwDc7H1KwXTz+h61oUSHyhV0b3o=
github.com/aws/smithy-go v1.8.0/go.mod h1:SObp3lf9smib00L/v3U2eAKG8FyQ7iLrJnQiAmR5n+E=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/benbjohnson/immutable v0.2.1/go.mod h1:uc6OHo6PN2++n98KHLxW8ef4W42ylHiQSENghE1ezxI=
github.com/benbjohnson/tmpl v1.0.0/go.mod h1:igT620JFIi44B6awvU9IsDhR77IXWtFigTLil/RPdps=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=

@ -185,10 +173,11 @@ github.com/bonitoo-io/go-sql-bigquery v0.3.4-1.4.0/go.mod h1:J4Y6YJm0qTWB9aFziB7

github.com/c-bata/go-prompt v0.2.2/go.mod h1:VzqtzE2ksDBcdln8G7mk2RX9QyGjH+OVqOCSiVIqS34=
github.com/cactus/go-statsd-client/statsd v0.0.0-20191106001114-12b4e2b38748/go.mod h1:l/bIBLeOl9eX+wxJAzxS4TveKRtAqlyDpHjhkfO0MEI=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/casbin/casbin/v2 v2.31.6/go.mod h1:vByNa/Fchek0KZUgG5wEsl7iFsiviAYKRtgrQfcJqHg=
github.com/casbin/casbin/v2 v2.37.0/go.mod h1:vByNa/Fchek0KZUgG5wEsl7iFsiviAYKRtgrQfcJqHg=
github.com/cenkalti/backoff v0.0.0-20181003080854-62661b46c409/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/cenkalti/backoff/v4 v4.0.2/go.mod h1:eEew/i+1Q6OrCDZh3WiXYv3+nJwBASZ8Bog/87DQnVg=
github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=

@ -203,12 +192,14 @@ github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5P

github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/clbanning/mxj v1.8.4/go.mod h1:BVjHeAH+rl9rs6f+QIpeRl0tfu10SXn1pUSa5PVGJng=
github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/containerd/containerd v1.3.4/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=

@ -260,18 +251,21 @@ github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5y

github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fatih/color v1.10.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/fatih/color v1.12.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/foxcpp/go-mockdns v0.0.0-20201212160233-ede2f9158d15/go.mod h1:tPg4cp4nseejPd+UKxtCVQ2hUxNTZ7qQZJa7CLriIeo=
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
github.com/franela/goblin v0.0.0-20210519012713-85d372ac71e2/go.mod h1:VzmDKDJVZI3aJmnRI9VjAn9nJ8qPPsN1fqzr9dqInIo=
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=

@ -290,9 +284,11 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2

github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.10.0/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o=
github.com/go-kit/kit v0.11.0 h1:IGmIEl7aHTYh6E2HlT+ptILBotjo4xl8PMDl852etiI=
github.com/go-kit/kit v0.11.0/go.mod h1:73/6Ixaufkvb5Osvkls8C79vuQ49Ba1rUEUYNSf+FUw=
github.com/go-kit/kit v0.12.0 h1:e4o3o3IsBfAKQh5Qbbiqyfu97Ku7jrO/JbohvztANh4=
github.com/go-kit/kit v0.12.0/go.mod h1:lHd+EkCZPIwYItmGDDRdhinkzX2A1sj+M9biaEaizzs=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-kit/log v0.2.0 h1:7i2K3eKTos3Vc0enKCfnVcgHh2olr/MyfboYq7cAcFw=
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=

@ -378,8 +374,8 @@ github.com/go-openapi/validate v0.19.14/go.mod h1:PdGrHe0rp6MG3A1SrAY/rIHATqzJEE

github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/go-zookeeper/zk v1.0.2/go.mod h1:nOB03cncLtlp4t+UAkGSV+9beXP/akpekBwL+UX1Qcw=
github.com/gobuffalo/attrs v0.0.0-20190224210810-a9411de4debd/go.mod h1:4duuawTqi2wkkpB4ePgWMaai6/Kc6WEz83bhFwpHzj0=
github.com/gobuffalo/depgen v0.0.0-20190329151759-d478694a28d3/go.mod h1:3STtPUQYuzV0gBVOY3vy6CfMm/ljR4pABfrTeHNLHUY=

@ -416,6 +412,7 @@ github.com/gogo/protobuf v1.2.2-0.20190730201129-28a6bbf47e48/go.mod h1:SlYgWuQ5

github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt v3.2.1+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGwJL78qG/PmXZO1EjYhfJinVAhrmmHX6Z8B9k=
github.com/golang/geo v0.0.0-20190916061304-5b978397cfec/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI=

@ -533,18 +530,21 @@ github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFb

github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
github.com/hashicorp/consul/api v1.4.0/go.mod h1:xc8u05kyMa3Wjr9eEAsIAo3dg8+LywT5E/Cl7cNS5nU=
github.com/hashicorp/consul/api v1.7.0/go.mod h1:1NSuaUUkFaJzMasbfq/11wKYWSR67Xn6r2DXKhuDNFg=
github.com/hashicorp/consul/api v1.8.1/go.mod h1:sDjTOq0yUyv5G4h+BqSea7Fn6BU+XbolEz1952UB+mk=
github.com/hashicorp/consul/api v1.10.1/go.mod h1:XjsvQN+RJGWI2TWy1/kqaE16HrR2J/FWgkYjdZQsX9M=
github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/consul/sdk v0.4.0/go.mod h1:fY08Y9z5SvJqevyZNy6WWPXiG3KwBPAvlcdx16zZ0fM=
github.com/hashicorp/consul/sdk v0.6.0/go.mod h1:fY08Y9z5SvJqevyZNy6WWPXiG3KwBPAvlcdx16zZ0fM=
github.com/hashicorp/consul/sdk v0.7.0/go.mod h1:fY08Y9z5SvJqevyZNy6WWPXiG3KwBPAvlcdx16zZ0fM=
github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-hclog v0.12.2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-hclog v0.16.2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-immutable-radix v1.2.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-immutable-radix v1.3.1/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=

@ -576,6 +576,7 @@ github.com/hetznercloud/hcloud-go v1.23.1/go.mod h1:xng8lbDUg+xM1dgc0yGHX5EeqbwI

github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huandu/xstrings v1.0.0/go.mod h1:4qWG/gcEcfX4z/mBDHJ++3ReCw9ibxbsNJbcucJdbSo=
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
github.com/hudl/fargo v1.4.0/go.mod h1:9Ai6uvFy5fQNq6VPKtg+Ceq1+eTY4nKUlR2JElEOcDo=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.4/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=

@ -583,12 +584,12 @@ github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJ

github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/flux v0.65.0/go.mod h1:BwN2XG2lMszOoquQaFdPET8FRQfrXiZsWmcMO9rkaVY=
github.com/influxdata/flux v0.65.1/go.mod h1:J754/zds0vvpfwuq7Gc2wRdVwEodfpCFM7mYlOw2LqY=
github.com/influxdata/flux v0.127.3/go.mod h1:Zc0P/HNnJnhBlm4QsmsBbAeAdtccKo4Eu0OfkP3RCk0=
github.com/influxdata/flux v0.131.0/go.mod h1:CKvnYe6FHpTj/E0YGI7TcOZdGiYHoToOPSnoa12RtKI=
github.com/influxdata/httprouter v1.3.1-0.20191122104820-ee83e2772f69/go.mod h1:pwymjR6SrP3gD3pRj9RJwdl1j5s3doEEV8gS4X9qSzA=
github.com/influxdata/influxdb v1.8.0/go.mod h1:SIzcnsjaHRFpmlxpJ4S3NT64qtEKYweNTUMb/vh0OMQ=
github.com/influxdata/influxdb v1.8.3/go.mod h1:JugdFhsvvI8gadxOI6noqNeeBHvWNTbfYGtiAn+2jhI=
github.com/influxdata/influxdb v1.9.4 h1:hZMq5fd4enVnruYHd7qCHsqG7kWQ/msA6x+kCvGFsRY=
github.com/influxdata/influxdb v1.9.4/go.mod h1:dR0WCHqaHPpJLaqWnRSl/QHsbXJR+QpofbZXyTc8ccw=
github.com/influxdata/influxdb v1.9.5 h1:4O7AC5jOA9RoqtDuD2rysXbumcEwaqWlWXmwuyK+a2s=
github.com/influxdata/influxdb v1.9.5/go.mod h1:4uPVvcry9KWQVWLxyT9641qpkRXUBN+xa0MJFFNNLKo=
github.com/influxdata/influxdb-client-go/v2 v2.3.1-0.20210518120617-5d1fff431040/go.mod h1:vLNHdxTJkIf2mSLvGrpj8TCcISApPoXkaxP8g9uRlW8=
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
github.com/influxdata/influxdb1-client v0.0.0-20200827194710-b269163b24ab/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=

@ -620,6 +621,7 @@ github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/u

github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jsternberg/zap-logfmt v1.0.0/go.mod h1:uvPs/4X51zdkcm5jXl5SYoN+4RK21K8mysFmDaM/h+o=

@ -637,7 +639,6 @@ github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI

github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.11.12/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.13.4/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/klauspost/compress v1.13.5/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.13.6 h1:P76CopJELS0TiO2mebmnzgWaajssP/EszplttgQxcgc=

@ -682,9 +683,9 @@ github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVc

github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.7/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.9 h1:sqDoxXbdeALODt0DAeJCVp38ps9ZogZEAXjus69YV3U=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-ieproxy v0.0.1/go.mod h1:pYabZ6IHcRpFh7vIaLfK7rdcWgFEb3SFJ6/gNWuh88E=
github.com/mattn/go-colorable v0.1.11 h1:nQ+aFkoE2TMGc0b68U2OKSexC+eq46+XwZzWXHRmPYs=
github.com/mattn/go-colorable v0.1.11/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=

@ -708,8 +709,10 @@ github.com/miekg/dns v1.1.22/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKju

github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
github.com/miekg/dns v1.1.29/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7xM=
github.com/miekg/dns v1.1.35/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7xM=
github.com/miekg/dns v1.1.43/go.mod h1:+evo5L0630/F6ca/Z9+GAqzhjGyn8/c+TBaOyfEl0V4=
github.com/mileusna/useragent v0.0.0-20190129205925-3e331f0949a5/go.mod h1:JWhYAp2EXqUtsxTKdeGlY8Wp44M7VxThC9FEoNGi2IE=
github.com/minio/highwayhash v1.0.1/go.mod h1:BQskDq+xkJ12lmlUUi7U0M5Swg3EWR+dLTk+kldvVxY=
github.com/minio/highwayhash v1.0.2/go.mod h1:BQskDq+xkJ12lmlUUi7U0M5Swg3EWR+dLTk+kldvVxY=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=

@ -723,10 +726,12 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh

github.com/mitchellh/mapstructure v1.2.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.3.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.3.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.4.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/mschoch/smat v0.0.0-20160514031455-90eadee771ae/go.mod h1:qAyveg+e4CE+eKJXWVjKXM4ck2QobLqTDytGJbLLhJg=

@ -737,11 +742,11 @@ github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+

github.com/nats-io/jwt v0.3.0/go.mod h1:fRYCDE99xlTsqUzISS1Bi75UBJ6ljOJQOAAu5VglpSg=
github.com/nats-io/jwt v0.3.2/go.mod h1:/euKqTS1ZD+zzjYrY7pseZrTtWQSjujC7xjPc8wL6eU=
github.com/nats-io/jwt v1.2.2/go.mod h1:/xX356yQA6LuXI9xWW7mZNpxgF2mBmGecH+Fj34sP5Q=
github.com/nats-io/jwt/v2 v2.0.2/go.mod h1:VRP+deawSXyhNjXmxPCHskrR6Mq50BqpEI5SEcNiGlY=
github.com/nats-io/jwt/v2 v2.0.3/go.mod h1:VRP+deawSXyhNjXmxPCHskrR6Mq50BqpEI5SEcNiGlY=
github.com/nats-io/nats-server/v2 v2.1.2/go.mod h1:Afk+wRZqkMQs/p45uXdrVLuab3gwv3Z8C4HTBu8GD/k=
github.com/nats-io/nats-server/v2 v2.2.6/go.mod h1:sEnFaxqe09cDmfMgACxZbziXnhQFhwk+aKkZjBBRYrI=
github.com/nats-io/nats-server/v2 v2.5.0/go.mod h1:Kj86UtrXAL6LwYRA6H4RqzkHhK0Vcv2ZnKD5WbQ1t3g=
github.com/nats-io/nats.go v1.9.1/go.mod h1:ZjDU1L/7fJ09jvUSRVBR2e7+RnLiiIQyqyzEE/Zbp4w=
github.com/nats-io/nats.go v1.11.0/go.mod h1:BPko4oXsySz4aSWeFgOHLZs3G4Jq4ZAyE6/zMCxRT6w=
github.com/nats-io/nats.go v1.12.1/go.mod h1:BPko4oXsySz4aSWeFgOHLZs3G4Jq4ZAyE6/zMCxRT6w=
github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nkeys v0.1.3/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nkeys v0.2.0/go.mod h1:XdZpAbhgyyODYqjTawOnIOI7VlbKSarI9Gfy1tqEu/s=

@ -749,6 +754,8 @@ github.com/nats-io/nkeys v0.3.0/go.mod h1:gvUNGjVcM2IPr5rCsRsC6Wb3Hr2CQAm08dsxtV

github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/oklog/oklog v0.3.2/go.mod h1:FCV+B7mhrz4o+ueLpx+KqkyXRGMWOYEvfiXtdGtbWGs=
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU=

@ -760,9 +767,14 @@ github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W

github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.2/go.mod h1:CObGmKUOKaSC0RjmoAK7tKyn4Azo5P2IWuoMnvwxz1E=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.13.0/go.mod h1:lRk9szgn8TxENtWd0Tp4c3wjlRfMTMH27I+3Je41yGY=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=

@ -787,6 +799,7 @@ github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtP

github.com/pelletier/go-toml v1.4.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo=
github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE=
github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9iaPbIdPPGyKcA8hKdoy6hAWba7Yac=
github.com/performancecopilot/speed/v4 v4.0.0/go.mod h1:qxrSyuDGrTOWfV+uKRFhfxw6h/4HXRGUiZiufxo49BM=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/peterh/liner v1.0.1-0.20180619022028-8c1271fcf47f/go.mod h1:xIteQHvHuaLYG9IFj6mSxM0fCKrs34IrEQUhOYuGPHc=
github.com/philhofer/fwd v1.0.0/go.mod h1:gk3iGcWd9+svBvR0sR+KPcfE+RNWozjowpeBVG3ZVNU=

@ -833,8 +846,9 @@ github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB8

github.com/prometheus/common v0.14.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.15.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.30.0 h1:JEkYlQnpzrzQFxi6gnukFPdQ+ac82oRhzMcIduJu/Ug=
github.com/prometheus/common v0.30.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.31.1 h1:d18hG4PkHnNAKNMOmFuXFaiY8Us0nird/2m60uS1AMs=
github.com/prometheus/common v0.31.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=

@ -888,7 +902,7 @@ github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic

github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/snowflakedb/gosnowflake v1.6.1/go.mod h1:1kyg2XEduwti88V11PKRHImhXLK5WpGiayY6lFNYb98=
github.com/snowflakedb/gosnowflake v1.3.13/go.mod h1:6nfka9aTXkUNha1p1cjeeyjDvcyh7jfjp0l8kGpDBok=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/sony/gobreaker v0.4.1/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJOjmxWY=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=

@ -996,13 +1010,15 @@ go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=

go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.10 h1:z+mqJhf6ss6BSfSM671tgKyZBFPTTJM+HLxnhPC3wu0=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723 h1:sHOAIxRGBp443oHZIPB+HsUGaksVCXVQENPxwTfQdH4=
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.4.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.7.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95ak=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=

@ -1010,6 +1026,7 @@ go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=

go.uber.org/zap v1.14.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
go.uber.org/zap v1.14.1/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.19.1/go.mod h1:j3DNczoxDZroyBnOT1L/Q79cfUMGZxlv/9dzN7SM1rI=
golang.org/x/crypto v0.0.0-20180505025534-4ec37c66abab/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=

@ -1031,12 +1048,15 @@ golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPh
@ -1031,12 +1048,15 @@ golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPh
|
|||
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200422194213-44a606286825/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200510223506-06a226fb4e37/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
|
||||
golang.org/x/crypto v0.0.0-20210314154223-e6e6c4f2bb5b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
|
||||
golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a/go.mod h1:P+XmwS30IXTQdn5tA2iutPOUgjI07+tq3H3K9MVA1s8=
|
||||
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.0.0-20210915214749-c084706c2272/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
|
@ -1064,7 +1084,6 @@ golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRu
|
|||
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
|
||||
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
|
||||
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
|
||||
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 h1:VLliZ0d+/avPrXXH+OakdXhpJuEoBZuwh1m2j7U6Iug=
|
||||
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
|
||||
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
|
||||
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
|
||||
|
@ -1104,7 +1123,6 @@ golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLL
|
|||
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191002035440-2ec189313ef0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191112182307-2180aed22343/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191126235420-ef20fe5d7933/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
|
@ -1117,6 +1135,7 @@ golang.org/x/net v0.0.0-20200421231249-e086a090c8fd/go.mod h1:qpuaurCH72eLCgpAm/
|
|||
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
|
||||
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
|
||||
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
|
||||
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
|
||||
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
|
||||
golang.org/x/net v0.0.0-20200602114024-627f9648deb9/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
|
||||
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
|
||||
|
@ -1130,12 +1149,14 @@ golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v
|
|||
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
|
||||
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
|
||||
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
|
||||
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210510120150-4163338589ed/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210917221730-978cfadd31cf h1:R150MpwJIv1MpS0N/pc+NhTM8ajzvlmxlY5OYsrevXQ=
|
||||
golang.org/x/net v0.0.0-20210917221730-978cfadd31cf/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20211007125505-59d4e928ea9d h1:QWMn1lFvU/nZ58ssWqiFJMd3DKIII8NYc4sn708XgKs=
|
||||
golang.org/x/net v0.0.0-20211007125505-59d4e928ea9d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
|
@ -1150,8 +1171,9 @@ golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ
|
|||
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f h1:Qmd2pbz05z7z6lm0DrgQVVPuBm92jqujBKMHMOlOQEw=
|
||||
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1 h1:B333XXssMuKQeBwiNODx4TupZy7bf4sxFZnN2ZOcvUE=
|
||||
golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
|
@ -1193,6 +1215,7 @@ golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7w
|
|||
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
|
@ -1200,7 +1223,7 @@ golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7w
|
|||
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191112214154-59a1497f0cea/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191128015809-6d18c012aee9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
|
@ -1227,13 +1250,13 @@ golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7w
|
|||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200826173525-f9321e4c35a6/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200828194041-157a740278f4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20201015000850-e3ed0017c211/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
|
@ -1248,14 +1271,17 @@ golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBc
|
|||
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210910150752-751e447fb3d0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210923061019-b8560ed6a9b7 h1:c20P3CcPbopVp2f7099WLOqSNKURf30Z0uq66HpijZY=
|
||||
golang.org/x/sys v0.0.0-20210923061019-b8560ed6a9b7/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210917161153-d61c044b1678/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac h1:oN6lz7iLW/YC7un8pq+9bOLyXrprv2+DKfkJY+2LJJw=
|
||||
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
|
@ -1277,6 +1303,7 @@ golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxb
|
|||
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
|
@ -1350,6 +1377,7 @@ golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4f
|
|||
golang.org/x/tools v0.0.0-20201119054027-25dc3e1ccc3c/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
|
||||
|
@ -1357,7 +1385,6 @@ golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
|||
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||
golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||
golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||
golang.org/x/tools v0.1.5 h1:ouewzE6p+/VEB31YYnTbEJdi8pFqKp4P4n85vwo3DHA=
|
||||
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
|
@ -1399,8 +1426,9 @@ google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNe
|
|||
google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU=
|
||||
google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k=
|
||||
google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
|
||||
google.golang.org/api v0.57.0 h1:4t9zuDlHLcIx0ZEhmXEeFVCRsiOgpgn2QOH9N0MNjPI=
|
||||
google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI=
|
||||
google.golang.org/api v0.58.0 h1:MDkAbYIB1JpSgCTOCYYoIec/coMlKK4oVbpnBLLcyT0=
|
||||
google.golang.org/api v0.58.0/go.mod h1:cAbP2FsxoGVNwtgNAmmn3y5G1TWAiVYRmg4yku3lv+E=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
|
@ -1466,12 +1494,14 @@ google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKr
|
|||
google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
|
||||
google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w=
|
||||
google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
|
||||
google.golang.org/genproto v0.0.0-20210825212027-de86158e7fda/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
|
||||
google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
|
||||
google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
|
||||
google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
|
||||
google.golang.org/genproto v0.0.0-20210921142501-181ce0d877f6 h1:2ncG/LajxmrclaZH+ppVi02rQxz4eXYJzGHdFN4Y9UA=
|
||||
google.golang.org/genproto v0.0.0-20210917145530-b395a37504d4/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
|
||||
google.golang.org/genproto v0.0.0-20210921142501-181ce0d877f6/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/genproto v0.0.0-20211007155348-82e027067bd4 h1:YXPV/eKW0ZWRdB5tyI6aPoaa2Wxb4OSlFrTREMdwn64=
|
||||
google.golang.org/genproto v0.0.0-20211007155348-82e027067bd4/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
|
||||
|
@ -1503,8 +1533,9 @@ google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQ
|
|||
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
|
||||
google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
|
||||
google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
|
||||
google.golang.org/grpc v1.40.0 h1:AGJ0Ih4mHjSeibYkFGh1dD9KJ/eOtZ93I6hoHhukQ5Q=
|
||||
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
|
||||
google.golang.org/grpc v1.41.0 h1:f+PlOh7QV4iIJkPrx5NQ7qaNGFQ3OTse67yaDHfju4E=
|
||||
google.golang.org/grpc v1.41.0/go.mod h1:U3l9uK9J0sini8mHphKoXyaqDA/8VyGnDee1zzIUK6k=
|
||||
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
|
||||
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
||||
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
||||
|
|
|
@ -182,7 +182,7 @@ func NewRemoteFS(path string) (common.RemoteFS, error) {
}
n := strings.Index(path, "://")
if n < 0 {
return nil, fmt.Errorf("Missing scheme in path %q. Supported schemes: `gcs://`, `s3://`, `fs://`", path)
return nil, fmt.Errorf("Missing scheme in path %q. Supported schemes: `gs://`, `s3://`, `fs://`", path)
}
scheme := path[:n]
dir := path[n+len("://"):]
@ -195,7 +195,7 @@ func NewRemoteFS(path string) (common.RemoteFS, error) {
Dir: dir,
}
return fs, nil
case "gcs":
case "gcs", "gs":
n := strings.Index(dir, "/")
if n < 0 {
return nil, fmt.Errorf("missing directory on the gcs bucket %q", dir)
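The hunk above extends the backup destination parser so that both `gcs://` and the canonical `gs://` prefix select the GCS backend. A minimal sketch of the same `strings.Index`-based prefix splitting; the helper name `splitScheme` is illustrative and not part of the repository:

```go
package main

import (
	"fmt"
	"strings"
)

// splitScheme mirrors the parsing above: "gs://bucket/dir" -> ("gs", "bucket/dir").
func splitScheme(path string) (scheme, dir string, err error) {
	n := strings.Index(path, "://")
	if n < 0 {
		return "", "", fmt.Errorf("missing scheme in path %q", path)
	}
	return path[:n], path[n+len("://"):], nil
}

func main() {
	for _, p := range []string{"gs://bucket/backups", "s3://bucket/backups"} {
		scheme, dir, err := splitScheme(p)
		fmt.Println(scheme, dir, err)
	}
}
```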
@ -1163,7 +1163,7 @@ func getParamsFromLabels(labels []prompbmarshal.Label, paramsOrig map[string][]s

func mergeLabels(swc *scrapeWorkConfig, target string, extraLabels, metaLabels map[string]string) []prompbmarshal.Label {
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
m := make(map[string]string, 4+len(swc.externalLabels)+len(swc.params)+len(extraLabels)+len(metaLabels))
m := make(map[string]string, 6+len(swc.externalLabels)+len(swc.params)+len(extraLabels)+len(metaLabels))
for k, v := range swc.externalLabels {
m[k] = v
}
@ -232,10 +232,7 @@ func buildScope(sdc *SDConfig) (map[string]interface{}, error) {
// ProjectName provided: either DomainID or DomainName must also be supplied.
// ProjectID may not be supplied.
if len(sdc.DomainID) == 0 && len(sdc.DomainName) == 0 {
return nil, fmt.Errorf("both domain_id and domain_name present")
}
if len(sdc.ProjectID) > 0 {
return nil, fmt.Errorf("both domain_id and domain_name present")
return nil, fmt.Errorf("domain_id or domain_name must present")
}
if len(sdc.DomainID) > 0 {
return map[string]interface{}{
@ -254,13 +251,6 @@ func buildScope(sdc *SDConfig) (map[string]interface{}, error) {
}, nil
}
} else if len(sdc.ProjectID) > 0 {
// ProjectID provided. ProjectName, DomainID, and DomainName may not be provided.
if len(sdc.DomainID) > 0 {
return nil, fmt.Errorf("both project_id and domain_id present")
}
if len(sdc.DomainName) > 0 {
return nil, fmt.Errorf("both project_id and domain_name present")
}
return map[string]interface{}{
"project": map[string]interface{}{
"id": &sdc.ProjectID,
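For context, `buildScope` produces the `scope` section of an OpenStack Keystone v3 auth request. A hedged illustration of the shape such a scope map might take for a domain-scoped request; the field names follow the Keystone API, and the surrounding program is an assumption, not code from this repository:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	domainID := "default"
	// Domain-scoped authorization: analogous to the map returned when
	// only DomainID is set in the SD config.
	scope := map[string]interface{}{
		"domain": map[string]interface{}{
			"id": &domainID,
		},
	}
	b, _ := json.Marshal(scope)
	fmt.Println(string(b)) // {"domain":{"id":"default"}}
}
```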
@ -28,7 +28,7 @@ import (
var (
suppressScrapeErrors = flag.Bool("promscrape.suppressScrapeErrors", false, "Whether to suppress scrape errors logging. "+
"The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed")
noStaleMarkers = flag.Bool("promscrape.noStaleMarkers", false, "Whether to disable seding Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. See also https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode")
noStaleMarkers = flag.Bool("promscrape.noStaleMarkers", false, "Whether to disable sending Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. See also https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode")
seriesLimitPerTarget = flag.Int("promscrape.seriesLimitPerTarget", 0, "Optional limit on the number of unique time series a single scrape target can expose. See https://docs.victoriametrics.com/vmagent.html#cardinality-limiter for more info")
)

@ -5,6 +5,7 @@ import (
"sync"

"github.com/klauspost/compress/gzip"
"github.com/klauspost/compress/zlib"
)

// GetGzipReader returns new gzip reader from the pool.
@ -29,3 +30,24 @@ func PutGzipReader(zr *gzip.Reader) {
}

var gzipReaderPool sync.Pool

// GetZlibReader returns zlib reader.
func GetZlibReader(r io.Reader) (io.ReadCloser, error) {
	v := zlibReaderPool.Get()
	if v == nil {
		return zlib.NewReader(r)
	}
	zr := v.(io.ReadCloser)
	if err := zr.(zlib.Resetter).Reset(r, nil); err != nil {
		return nil, err
	}
	return zr, nil
}

// PutZlibReader returns back zlib reader obtained via GetZlibReader.
func PutZlibReader(zr io.ReadCloser) {
	_ = zr.Close()
	zlibReaderPool.Put(zr)
}

var zlibReaderPool sync.Pool
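A short usage sketch of the new pooled zlib helpers: borrow a reader, drain it, and hand it back so the pool can reuse it. The compressed input is produced here with the standard library writer purely for illustration:

```go
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
	"io"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
)

func main() {
	// Prepare a deflate-compressed payload.
	var buf bytes.Buffer
	zw := zlib.NewWriter(&buf)
	zw.Write([]byte("hello"))
	zw.Close()

	// Borrow a pooled reader, use it, and return it to the pool.
	// PutZlibReader closes the reader itself, so the caller must not.
	zr, err := common.GetZlibReader(&buf)
	if err != nil {
		panic(err)
	}
	data, _ := io.ReadAll(zr)
	common.PutZlibReader(zr)
	fmt.Println(string(data)) // hello
}
```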
73
lib/protoparser/datadog/parser.go
Normal file
@ -0,0 +1,73 @@
package datadog

import (
	"encoding/json"
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
)

// Request represents DataDog POST request to /api/v1/series
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
type Request struct {
	Series []Series `json:"series"`
}

func (req *Request) reset() {
	req.Series = req.Series[:0]
}

// Unmarshal unmarshals DataDog /api/v1/series request body from b to req.
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
//
// b shouldn't be modified when req is in use.
func (req *Request) Unmarshal(b []byte) error {
	req.reset()
	if err := json.Unmarshal(b, req); err != nil {
		return fmt.Errorf("cannot unmarshal %q: %w", b, err)
	}
	// Set missing timestamps to the current time.
	currentTimestamp := float64(fasttime.UnixTimestamp())
	series := req.Series
	for i := range series {
		points := series[i].Points
		for j := range points {
			if points[j][0] <= 0 {
				points[j][0] = currentTimestamp
			}
		}
	}
	return nil
}

// Series represents a series item from DataDog POST request to /api/v1/series
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
type Series struct {
	Host string `json:"host"`

	// Do not decode Interval, since it isn't used by VictoriaMetrics
	// Interval int64 `json:"interval"`

	Metric string   `json:"metric"`
	Points []Point  `json:"points"`
	Tags   []string `json:"tags"`

	// Do not decode Type, since it isn't used by VictoriaMetrics
	// Type string `json:"type"`
}

// Point represents a point from DataDog POST request to /api/v1/series
type Point [2]float64

// Timestamp returns timestamp in milliseconds from the given pt.
func (pt *Point) Timestamp() int64 {
	return int64(pt[0] * 1000)
}

// Value returns value from the given pt.
func (pt *Point) Value() float64 {
	return pt[1]
}
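A small, hypothetical usage sketch of the types above: DataDog points arrive as `[timestamp_seconds, value]` pairs, and `Timestamp()` converts the seconds to the milliseconds used internally (zero or negative timestamps are replaced with the current time by `Unmarshal`):

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog"
)

func main() {
	var req datadog.Request
	body := []byte(`{"series":[{"metric":"system.load.1","points":[[1575317847,0.5]]}]}`)
	if err := req.Unmarshal(body); err != nil {
		panic(err)
	}
	pt := req.Series[0].Points[0]
	fmt.Println(pt.Timestamp(), pt.Value()) // 1575317847000 0.5
}
```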
66
lib/protoparser/datadog/parser_test.go
Normal file
@ -0,0 +1,66 @@
package datadog

import (
	"reflect"
	"testing"
)

func TestRequestUnmarshalFailure(t *testing.T) {
	f := func(s string) {
		t.Helper()
		var req Request
		if err := req.Unmarshal([]byte(s)); err == nil {
			t.Fatalf("expecting non-nil error for Unmarshal(%q)", s)
		}
	}
	f("")
	f("foobar")
	f(`{"series":123`)
	f(`1234`)
	f(`[]`)
}

func TestRequestUnmarshalSuccess(t *testing.T) {
	f := func(s string, reqExpected *Request) {
		t.Helper()
		var req Request
		if err := req.Unmarshal([]byte(s)); err != nil {
			t.Fatalf("unexpected error in Unmarshal(%q): %s", s, err)
		}
		if !reflect.DeepEqual(&req, reqExpected) {
			t.Fatalf("unexpected row;\ngot\n%+v\nwant\n%+v", &req, reqExpected)
		}
	}
	f("{}", &Request{})
	f(`
{
  "series": [
    {
      "host": "test.example.com",
      "interval": 20,
      "metric": "system.load.1",
      "points": [[
        1575317847,
        0.5
      ]],
      "tags": [
        "environment:test"
      ],
      "type": "rate"
    }
  ]
}
`, &Request{
		Series: []Series{{
			Host:   "test.example.com",
			Metric: "system.load.1",
			Points: []Point{{
				1575317847,
				0.5,
			}},
			Tags: []string{
				"environment:test",
			},
		}},
	})
}
39
lib/protoparser/datadog/parser_timing_test.go
Normal file
@ -0,0 +1,39 @@
package datadog

import (
	"fmt"
	"testing"
)

func BenchmarkRequestUnmarshal(b *testing.B) {
	reqBody := []byte(`{
  "series": [
    {
      "host": "test.example.com",
      "interval": 20,
      "metric": "system.load.1",
      "points": [
        1575317847,
        0.5
      ],
      "tags": [
        "environment:test"
      ],
      "type": "rate"
    }
  ]
}`)
	b.SetBytes(int64(len(reqBody)))
	b.ReportAllocs()
	b.RunParallel(func(pb *testing.PB) {
		var req Request
		for pb.Next() {
			if err := req.Unmarshal(reqBody); err != nil {
				panic(fmt.Errorf("unexpected error: %w", err))
			}
			if len(req.Series) != 1 {
				panic(fmt.Errorf("unexpected number of series unmarshaled: got %d; want 4", len(req.Series)))
			}
		}
	})
}
138
lib/protoparser/datadog/streamparser.go
Normal file
@ -0,0 +1,138 @@
package datadog

import (
	"bufio"
	"fmt"
	"io"
	"sync"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/metrics"
)

// The maximum request size is defined at https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
var maxInsertRequestSize = flagutil.NewBytes("datadog.maxInsertRequestSize", 64*1024*1024, "The maximum size in bytes of a single DataDog POST request to /api/v1/series")

// ParseStream parses DataDog POST request for /api/v1/series from reader and calls callback for the parsed request.
//
// callback shouldn't hold series after returning.
func ParseStream(r io.Reader, contentEncoding string, callback func(series []Series) error) error {
	switch contentEncoding {
	case "gzip":
		zr, err := common.GetGzipReader(r)
		if err != nil {
			return fmt.Errorf("cannot read gzipped DataDog data: %w", err)
		}
		defer common.PutGzipReader(zr)
		r = zr
	case "deflate":
		zlr, err := common.GetZlibReader(r)
		if err != nil {
			return fmt.Errorf("cannot read deflated DataDog data: %w", err)
		}
		defer common.PutZlibReader(zlr)
		r = zlr
	}
	ctx := getPushCtx(r)
	defer putPushCtx(ctx)
	if err := ctx.Read(); err != nil {
		return err
	}
	req := getRequest()
	defer putRequest(req)
	if err := req.Unmarshal(ctx.reqBuf.B); err != nil {
		unmarshalErrors.Inc()
		return fmt.Errorf("cannot unmarshal DataDog POST request with size %d bytes: %s", len(ctx.reqBuf.B), err)
	}
	rows := 0
	series := req.Series
	for i := range series {
		rows += len(series[i].Points)
	}
	rowsRead.Add(rows)

	if err := callback(series); err != nil {
		return fmt.Errorf("error when processing imported data: %w", err)
	}
	return nil
}

type pushCtx struct {
	br     *bufio.Reader
	reqBuf bytesutil.ByteBuffer
}

func (ctx *pushCtx) reset() {
	ctx.br.Reset(nil)
	ctx.reqBuf.Reset()
}

func (ctx *pushCtx) Read() error {
	readCalls.Inc()
	lr := io.LimitReader(ctx.br, int64(maxInsertRequestSize.N)+1)
	startTime := fasttime.UnixTimestamp()
	reqLen, err := ctx.reqBuf.ReadFrom(lr)
	if err != nil {
		readErrors.Inc()
		return fmt.Errorf("cannot read compressed request in %d seconds: %w", fasttime.UnixTimestamp()-startTime, err)
	}
	if reqLen > int64(maxInsertRequestSize.N) {
		readErrors.Inc()
		return fmt.Errorf("too big packed request; mustn't exceed `-maxInsertRequestSize=%d` bytes", maxInsertRequestSize.N)
	}
	return nil
}

var (
	readCalls       = metrics.NewCounter(`vm_protoparser_read_calls_total{type="datadog"}`)
	readErrors      = metrics.NewCounter(`vm_protoparser_read_errors_total{type="datadog"}`)
	rowsRead        = metrics.NewCounter(`vm_protoparser_rows_read_total{type="datadog"}`)
	unmarshalErrors = metrics.NewCounter(`vm_protoparser_unmarshal_errors_total{type="datadog"}`)
)

func getPushCtx(r io.Reader) *pushCtx {
	select {
	case ctx := <-pushCtxPoolCh:
		ctx.br.Reset(r)
		return ctx
	default:
		if v := pushCtxPool.Get(); v != nil {
			ctx := v.(*pushCtx)
			ctx.br.Reset(r)
			return ctx
		}
		return &pushCtx{
			br: bufio.NewReaderSize(r, 64*1024),
		}
	}
}

func putPushCtx(ctx *pushCtx) {
	ctx.reset()
	select {
	case pushCtxPoolCh <- ctx:
	default:
		pushCtxPool.Put(ctx)
	}
}

var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, cgroup.AvailableCPUs())

func getRequest() *Request {
	v := requestPool.Get()
	if v == nil {
		return &Request{}
	}
	return v.(*Request)
}

func putRequest(req *Request) {
	requestPool.Put(req)
}

var requestPool sync.Pool
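A sketch of how `ParseStream` might be wired into an HTTP handler. The callback that consumes `[]Series` is hypothetical and stands in for the insert logic that lives elsewhere in the repository; the port and status codes are illustrative assumptions:

```go
package main

import (
	"log"
	"net/http"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadog"
)

func main() {
	http.HandleFunc("/datadog/api/v1/series", func(w http.ResponseWriter, r *http.Request) {
		ce := r.Header.Get("Content-Encoding")
		err := datadog.ParseStream(r.Body, ce, func(series []datadog.Series) error {
			// Hypothetical sink: the real handler converts series to rows
			// and pushes them into the storage pipeline.
			for i := range series {
				log.Printf("metric=%s points=%d", series[i].Metric, len(series[i].Points))
			}
			return nil
		})
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":8428", nil))
}
```

Note the two-tier pooling in `getPushCtx` above: a bounded channel sized to the available CPUs serves as a fast, fixed-capacity cache, with `sync.Pool` as the GC-friendly overflow.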
@ -482,7 +482,8 @@ func AreIdenticalSeriesFast(s1, s2 string) bool {
n := strings.IndexByte(x1, ' ')
if n < 0 {
// Invalid Prometheus line - it must contain at least a single space between metric name and value
return false
// Compare it in full with x2.
n = len(x1) - 1
}
n++
if n > len(x2) || x1[:n] != x2[:n] {
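The change above makes value-less lines such as `foo` compare by the full line instead of being rejected outright, so two scrapes that both expose `foo` are now treated as identical series (the updated tests below confirm this). A hedged sketch of the comparison idea; `seriesKeyLen` is an illustrative helper, not a function from the repository:

```go
package main

import (
	"fmt"
	"strings"
)

// seriesKeyLen returns the number of leading bytes that identify the series
// on a single exposition line: up to and including the first space, or the
// whole line when no value is present (mirroring the fix above).
func seriesKeyLen(line string) int {
	n := strings.IndexByte(line, ' ')
	if n < 0 {
		n = len(line) - 1
	}
	return n + 1
}

func main() {
	x1, x2 := "foo 1", "foo 2"
	n := seriesKeyLen(x1)
	// Same series key "foo ", different values -> considered identical.
	fmt.Println(n <= len(x2) && x1[:n] == x2[:n]) // true
}
```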
@ -33,11 +33,11 @@ func TestAreIdenticalSeriesFast(t *testing.T) {
	}
}
f("", "", true)
f("", "a 1", false)   // different number of metrics
f(" ", " a 1", false) // different number of metrics
f("a 1", "", false)   // different number of metrics
f(" a 1", " ", false) // different number of metrics
f("foo", "foo", false) // missing value
f("", "a 1", false)   // different number of metrics
f(" ", " a 1", false) // different number of metrics
f("a 1", "", false)   // different number of metrics
f(" a 1", " ", false) // different number of metrics
f("foo", "foo", true) // consider series identical if they miss value
f("foo 1", "foo 1", true)
f("foo 1", "foo 2", true)
f("foo 1 ", "foo 2 ", true)
@ -85,6 +85,25 @@ func TestAreIdenticalSeriesFast(t *testing.T) {
f("foo{bar=\"b az\"} +Inf 5", "foo{bar=\"b az\"} NaN 7.43", true)
f("foo{bar=\"b az\"} +Inf 5", "foo{bar=\"b az\"} nan 7.43", true)
f("foo{bar=\"b az\"} +Inf 5", "foo{bar=\"b az\"} nansf 7.43", false) // invalid value

// False positive - whitespace after the numeric char in the label.
f(`foo{bar=" 12.3 "} 1`, `foo{bar=" 13 "} 23`, true)
f(`foo{bar=" 12.3 "} 1 3443`, `foo{bar=" 13 "} 23 4345`, true)
f(`foo{bar=" 12.3 "} 1 3443 # {} 34`, `foo{bar=" 13 "} 23 4345 # {foo=" bar "} 34`, true)

// Metrics and labels with '#' chars
f(`foo{bar="#1"} 1`, `foo{bar="#1"} 1`, true)
f(`foo{bar="a#1"} 1`, `foo{bar="b#1"} 1.4`, false)
f(`foo{bar=" #1"} 1`, `foo{bar=" #1"} 3`, true)
f(`foo{bar=" #1 "} 1`, `foo{bar=" #1 "} -2.34 343.34 # {foo="#bar"} `, true)
f(`foo{bar=" #1"} 1`, `foo{bar="#1"} 1`, false)
f(`foo{b#ar=" #1"} 1`, `foo{b#ar=" #1"} 1.23`, true)
f(`foo{z#ar=" #1"} 1`, `foo{b#ar=" #1"} 1.23`, false)
f(`fo#o{b#ar="#1"} 1`, `fo#o{b#ar="#1"} 1.23`, true)
f(`fo#o{b#ar="#1"} 1`, `fa#o{b#ar="#1"} 1.23`, false)

// False positive - the value after '#' char can be arbitrary
f(`fo#o{b#ar="#1"} 1`, `fo#osdf 1.23`, true)
}

func TestPrevBackslashesCount(t *testing.T) {
@ -269,6 +288,44 @@ cassandra_token_ownership_ratio 78.9`, &Rows{
	}},
})

// `#` char in label value
f(`foo{bar="#1 az"} 24`, &Rows{
	Rows: []Row{{
		Metric: "foo",
		Tags: []Tag{{
			Key:   "bar",
			Value: "#1 az",
		}},
		Value: 24,
	}},
})

// `#` char in label name and label value
f(`foo{bar#2="#1 az"} 24 456`, &Rows{
	Rows: []Row{{
		Metric: "foo",
		Tags: []Tag{{
			Key:   "bar#2",
			Value: "#1 az",
		}},
		Value:     24,
		Timestamp: 456000,
	}},
})

// `#` char in metric name, label name and label value
f(`foo#qw{bar#2="#1 az"} 24 456 # foobar {baz="x"}`, &Rows{
	Rows: []Row{{
		Metric: "foo#qw",
		Tags: []Tag{{
			Key:   "bar#2",
			Value: "#1 az",
		}},
		Value:     24,
		Timestamp: 456000,
	}},
})

// Incorrectly escaped backlash. This is real-world case, which must be supported.
f(`mssql_sql_server_active_transactions_sec{loginname="domain\somelogin",env="develop"} 56`, &Rows{
	Rows: []Row{{
@ -57,6 +57,8 @@ type Storage struct {
hourlySeriesLimitRowsDropped uint64
dailySeriesLimitRowsDropped uint64

isReadOnly uint32

path string
cachePath string
retentionMsecs int64
@ -118,6 +120,7 @@ type Storage struct {
currHourMetricIDsUpdaterWG sync.WaitGroup
nextDayMetricIDsUpdaterWG sync.WaitGroup
retentionWatcherWG sync.WaitGroup
freeDiskSpaceWatcherWG sync.WaitGroup

// The snapshotLock prevents from concurrent creation of snapshots,
// since this may result in snapshots without recently added data,
@ -258,6 +261,7 @@ func OpenStorage(path string, retentionMsecs int64, maxHourlySeries, maxDailySer
s.startCurrHourMetricIDsUpdater()
s.startNextDayMetricIDsUpdater()
s.startRetentionWatcher()
s.startFreeDiskSpaceWatcher()

return s, nil
}
@ -558,6 +562,47 @@ func (s *Storage) UpdateMetrics(m *Metrics) {
	s.tb.UpdateMetrics(&m.TableMetrics)
}

// SetFreeDiskSpaceLimit sets the minimum free disk space size of current storage path
//
// The function must be called before opening or creating any storage.
func SetFreeDiskSpaceLimit(bytes int) {
	freeDiskSpaceLimitBytes = uint64(bytes)
}

var freeDiskSpaceLimitBytes uint64

// IsReadOnly returns information is storage in read only mode
func (s *Storage) IsReadOnly() bool {
	return atomic.LoadUint32(&s.isReadOnly) == 1
}

func (s *Storage) startFreeDiskSpaceWatcher() {
	f := func() {
		freeSpaceBytes := fs.MustGetFreeSpace(s.path)
		if freeSpaceBytes < freeDiskSpaceLimitBytes {
			// Switch the storage to readonly mode if there is no enough free space left at s.path
			atomic.StoreUint32(&s.isReadOnly, 1)
			return
		}
		atomic.StoreUint32(&s.isReadOnly, 0)
	}
	f()
	s.freeDiskSpaceWatcherWG.Add(1)
	go func() {
		defer s.freeDiskSpaceWatcherWG.Done()
		ticker := time.NewTicker(30 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-s.stop:
				return
			case <-ticker.C:
				f()
			}
		}
	}()
}

func (s *Storage) startRetentionWatcher() {
	s.retentionWatcherWG.Add(1)
	go func() {
@ -672,6 +717,7 @@ func (s *Storage) resetAndSaveTSIDCache() {
func (s *Storage) MustClose() {
	close(s.stop)

	s.freeDiskSpaceWatcherWG.Wait()
	s.retentionWatcherWG.Wait()
	s.currHourMetricIDsUpdaterWG.Wait()
	s.nextDayMetricIDsUpdaterWG.Wait()
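The watcher above flips an atomic flag every 30 seconds instead of taking locks, so hot paths can check read-only status cheaply. A hypothetical sketch of how an ingestion path might consult the flag; the `addRows` call site and error text are illustrative, not the repository's actual insert code:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// storage is a stand-in for the Storage struct above.
type storage struct {
	isReadOnly uint32
}

func (s *storage) IsReadOnly() bool {
	return atomic.LoadUint32(&s.isReadOnly) == 1
}

func (s *storage) addRows(n int) error {
	if s.IsReadOnly() {
		// Reject writes while free disk space is below the configured limit.
		return fmt.Errorf("cannot add %d rows: storage is in read-only mode", n)
	}
	// ... write path would go here ...
	return nil
}

func main() {
	var s storage
	atomic.StoreUint32(&s.isReadOnly, 1) // what the watcher does on low disk space
	fmt.Println(s.addRows(10))
}
```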
22
vendor/cloud.google.com/go/CHANGES.md
generated
vendored
@ -1,5 +1,27 @@
# Changes

## [0.97.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.96.0...v0.97.0) (2021-09-29)


### Features

* **internal** add Retry func to testutil from samples repository [#4902](https://github.com/googleapis/google-cloud-go/pull/4902)

## [0.96.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.95.0...v0.96.0) (2021-09-28)


### Features

* **civil:** add IsEmpty function to time, date and datetime ([#4728](https://www.github.com/googleapis/google-cloud-go/issues/4728)) ([88bfa64](https://www.github.com/googleapis/google-cloud-go/commit/88bfa64d6df2f3bb7d41e0b8f56717dd3de790e2)), refs [#4727](https://www.github.com/googleapis/google-cloud-go/issues/4727)
* **internal/godocfx:** detect preview versions ([#4899](https://www.github.com/googleapis/google-cloud-go/issues/4899)) ([9b60844](https://www.github.com/googleapis/google-cloud-go/commit/9b608445ce9ebabbc87a50e85ce6ef89125031d2))
* **internal:** provide wrapping for retried errors ([#4797](https://www.github.com/googleapis/google-cloud-go/issues/4797)) ([ce5f4db](https://www.github.com/googleapis/google-cloud-go/commit/ce5f4dbab884e847a2d9f1f8f3fcfd7df19a505a))


### Bug Fixes

* **internal/gapicgen:** restore fmting proto files ([#4789](https://www.github.com/googleapis/google-cloud-go/issues/4789)) ([5606b54](https://www.github.com/googleapis/google-cloud-go/commit/5606b54b97bb675487c6c138a4081c827218f933))
* **internal/trace:** use xerrors.As for trace ([#4813](https://www.github.com/googleapis/google-cloud-go/issues/4813)) ([05fe61c](https://www.github.com/googleapis/google-cloud-go/commit/05fe61c5aa4860bdebbbe3e91a9afaba16aa6184))

## [0.95.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.94.1...v0.95.0) (2021-09-21)

### Bug Fixes
59
vendor/cloud.google.com/go/README.md
generated
vendored
@ -27,63 +27,8 @@ make backwards-incompatible changes.

## Supported APIs

| Google API | Status | Package |
| ---------- | ------ | ------- |
| [Asset][cloud-asset] | stable | [`cloud.google.com/go/asset/apiv1`](https://pkg.go.dev/cloud.google.com/go/asset/v1beta) |
| [Automl][cloud-automl] | stable | [`cloud.google.com/go/automl/apiv1`](https://pkg.go.dev/cloud.google.com/go/automl/apiv1) |
| [BigQuery][cloud-bigquery] | stable | [`cloud.google.com/go/bigquery`](https://pkg.go.dev/cloud.google.com/go/bigquery) |
| [Bigtable][cloud-bigtable] | stable | [`cloud.google.com/go/bigtable`](https://pkg.go.dev/cloud.google.com/go/bigtable) |
| [Cloudbuild][cloud-build] | stable | [`cloud.google.com/go/cloudbuild/apiv1`](https://pkg.go.dev/cloud.google.com/go/cloudbuild/apiv1) |
| [Cloudtasks][cloud-tasks] | stable | [`cloud.google.com/go/cloudtasks/apiv2`](https://pkg.go.dev/cloud.google.com/go/cloudtasks/apiv2) |
| [Compute Engine][cloud-compute] | alpha | [`cloud.google.com/go/compute/apiv1`](https://pkg.go.dev/cloud.google.com/go/compute/apiv1) |
| [Container][cloud-container] | stable | [`cloud.google.com/go/container/apiv1`](https://pkg.go.dev/cloud.google.com/go/container/apiv1) |
| [ContainerAnalysis][cloud-containeranalysis] | beta | [`cloud.google.com/go/containeranalysis/apiv1`](https://pkg.go.dev/cloud.google.com/go/containeranalysis/apiv1) |
| [Dataproc][cloud-dataproc] | stable | [`cloud.google.com/go/dataproc/apiv1`](https://pkg.go.dev/cloud.google.com/go/dataproc/apiv1) |
| [Datastore][cloud-datastore] | stable | [`cloud.google.com/go/datastore`](https://pkg.go.dev/cloud.google.com/go/datastore) |
| [Debugger][cloud-debugger] | stable | [`cloud.google.com/go/debugger/apiv2`](https://pkg.go.dev/cloud.google.com/go/debugger/apiv2) |
| [Dialogflow][cloud-dialogflow] | stable | [`cloud.google.com/go/dialogflow/apiv2`](https://pkg.go.dev/cloud.google.com/go/dialogflow/apiv2) |
| [Data Loss Prevention][cloud-dlp] | stable | [`cloud.google.com/go/dlp/apiv2`](https://pkg.go.dev/cloud.google.com/go/dlp/apiv2) |
| [ErrorReporting][cloud-errors] | alpha | [`cloud.google.com/go/errorreporting`](https://pkg.go.dev/cloud.google.com/go/errorreporting) |
| [Firestore][cloud-firestore] | stable | [`cloud.google.com/go/firestore`](https://pkg.go.dev/cloud.google.com/go/firestore) |
| [IAM][cloud-iam] | stable | [`cloud.google.com/go/iam`](https://pkg.go.dev/cloud.google.com/go/iam) |
| [IoT][cloud-iot] | stable | [`cloud.google.com/go/iot/apiv1`](https://pkg.go.dev/cloud.google.com/go/iot/apiv1) |
| [IRM][cloud-irm] | alpha | [`cloud.google.com/go/irm/apiv1alpha2`](https://pkg.go.dev/cloud.google.com/go/irm/apiv1alpha2) |
| [KMS][cloud-kms] | stable | [`cloud.google.com/go/kms/apiv1`](https://pkg.go.dev/cloud.google.com/go/kms/apiv1) |
| [Natural Language][cloud-natural-language] | stable | [`cloud.google.com/go/language/apiv1`](https://pkg.go.dev/cloud.google.com/go/language/apiv1) |
| [Logging][cloud-logging] | stable | [`cloud.google.com/go/logging`](https://pkg.go.dev/cloud.google.com/go/logging) |
| [Memorystore][cloud-memorystore] | alpha | [`cloud.google.com/go/redis/apiv1`](https://pkg.go.dev/cloud.google.com/go/redis/apiv1) |
| [Monitoring][cloud-monitoring] | stable | [`cloud.google.com/go/monitoring/apiv3`](https://pkg.go.dev/cloud.google.com/go/monitoring/apiv3) |
| [OS Login][cloud-oslogin] | stable | [`cloud.google.com/go/oslogin/apiv1`](https://pkg.go.dev/cloud.google.com/go/oslogin/apiv1) |
| [Pub/Sub][cloud-pubsub] | stable | [`cloud.google.com/go/pubsub`](https://pkg.go.dev/cloud.google.com/go/pubsub) |
| [Pub/Sub Lite][cloud-pubsublite] | stable | [`cloud.google.com/go/pubsublite`](https://pkg.go.dev/cloud.google.com/go/pubsublite) |
| [Phishing Protection][cloud-phishingprotection] | alpha | [`cloud.google.com/go/phishingprotection/apiv1beta1`](https://pkg.go.dev/cloud.google.com/go/phishingprotection/apiv1beta1) |
| [reCAPTCHA Enterprise][cloud-recaptcha] | alpha | [`cloud.google.com/go/recaptchaenterprise/apiv1beta1`](https://pkg.go.dev/cloud.google.com/go/recaptchaenterprise/apiv1beta1) |
| [Recommender][cloud-recommender] | beta | [`cloud.google.com/go/recommender/apiv1beta1`](https://pkg.go.dev/cloud.google.com/go/recommender/apiv1beta1) |
| [Scheduler][cloud-scheduler] | stable | [`cloud.google.com/go/scheduler/apiv1`](https://pkg.go.dev/cloud.google.com/go/scheduler/apiv1) |
| [Securitycenter][cloud-securitycenter] | stable | [`cloud.google.com/go/securitycenter/apiv1`](https://pkg.go.dev/cloud.google.com/go/securitycenter/apiv1) |
| [Spanner][cloud-spanner] | stable | [`cloud.google.com/go/spanner`](https://pkg.go.dev/cloud.google.com/go/spanner) |
| [Speech][cloud-speech] | stable | [`cloud.google.com/go/speech/apiv1`](https://pkg.go.dev/cloud.google.com/go/speech/apiv1) |
| [Storage][cloud-storage] | stable | [`cloud.google.com/go/storage`](https://pkg.go.dev/cloud.google.com/go/storage) |
| [Talent][cloud-talent] | alpha | [`cloud.google.com/go/talent/apiv4beta1`](https://pkg.go.dev/cloud.google.com/go/talent/apiv4beta1) |
| [Text To Speech][cloud-texttospeech] | stable | [`cloud.google.com/go/texttospeech/apiv1`](https://pkg.go.dev/cloud.google.com/go/texttospeech/apiv1) |
| [Trace][cloud-trace] | stable | [`cloud.google.com/go/trace/apiv2`](https://pkg.go.dev/cloud.google.com/go/trace/apiv2) |
| [Translate][cloud-translate] | stable | [`cloud.google.com/go/translate`](https://pkg.go.dev/cloud.google.com/go/translate) |
| [Video Intelligence][cloud-video] | beta | [`cloud.google.com/go/videointelligence/apiv1beta2`](https://pkg.go.dev/cloud.google.com/go/videointelligence/apiv1beta2) |
| [Vision][cloud-vision] | stable | [`cloud.google.com/go/vision/apiv1`](https://pkg.go.dev/cloud.google.com/go/vision/apiv1) |
| [Webrisk][cloud-webrisk] | alpha | [`cloud.google.com/go/webrisk/apiv1beta1`](https://pkg.go.dev/cloud.google.com/go/webrisk/apiv1beta1) |

> **Alpha status**: the API is still being actively developed. As a
> result, it might change in backward-incompatible ways and is not recommended
> for production use.
>
> **Beta status**: the API is largely complete, but still has outstanding
> features and bugs to be addressed. There may be minor backwards-incompatible
> changes where necessary.
>
> **Stable status**: the API is mature and ready for production use. We will
> continue addressing bugs and feature requests.

Documentation and examples are available at [pkg.go.dev/cloud.google.com/go](https://pkg.go.dev/cloud.google.com/go)
For an updated list of all of our released APIs please see our
[reference docs](https://cloud.google.com/go/docs/reference).

## [Go Versions Supported](#supported-versions)
2
vendor/cloud.google.com/go/go.mod
generated
vendored
@ -12,7 +12,7 @@ require (
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
google.golang.org/api v0.57.0
google.golang.org/genproto v0.0.0-20210921142501-181ce0d877f6
google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0
google.golang.org/grpc v1.40.0
google.golang.org/protobuf v1.27.1
)
4
vendor/cloud.google.com/go/go.sum
generated
vendored
@ -488,8 +488,8 @@ google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEc
google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210921142501-181ce0d877f6 h1:2ncG/LajxmrclaZH+ppVi02rQxz4eXYJzGHdFN4Y9UA=
google.golang.org/genproto v0.0.0-20210921142501-181ce0d877f6/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0 h1:5Tbluzus3QxoAJx4IefGt1W0HQZW4nuMrVk684jI74Q=
google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
27
vendor/cloud.google.com/go/internal/.repo-metadata-full.json
generated
vendored
```diff
@@ -35,15 +35,6 @@
     "release_level": "alpha",
     "library_type": ""
   },
-  "cloud.google.com/go/analytics/data/apiv1alpha": {
-    "distribution_name": "cloud.google.com/go/analytics/data/apiv1alpha",
-    "description": "Google Analytics Data API",
-    "language": "Go",
-    "client_library_type": "generated",
-    "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/analytics/latest/data/apiv1alpha",
-    "release_level": "alpha",
-    "library_type": ""
-  },
   "cloud.google.com/go/apigateway/apiv1": {
     "distribution_name": "cloud.google.com/go/apigateway/apiv1",
     "description": "API Gateway API",
```
```diff
@@ -179,6 +170,15 @@
     "release_level": "ga",
     "library_type": ""
   },
+  "cloud.google.com/go/bigquery/migration/apiv2alpha": {
+    "distribution_name": "cloud.google.com/go/bigquery/migration/apiv2alpha",
+    "description": "BigQuery Migration API",
+    "language": "Go",
+    "client_library_type": "generated",
+    "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/bigquery/latest/migration/apiv2alpha",
+    "release_level": "alpha",
+    "library_type": ""
+  },
   "cloud.google.com/go/bigquery/reservation/apiv1": {
     "distribution_name": "cloud.google.com/go/bigquery/reservation/apiv1",
     "description": "BigQuery Reservation API",
```
```diff
@@ -890,6 +890,15 @@
     "release_level": "beta",
     "library_type": ""
   },
+  "cloud.google.com/go/orchestration/airflow/service/apiv1": {
+    "distribution_name": "cloud.google.com/go/orchestration/airflow/service/apiv1",
+    "description": "Cloud Composer API",
+    "language": "Go",
+    "client_library_type": "generated",
+    "docs_url": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/orchestration/latest/airflow/service/apiv1",
+    "release_level": "beta",
+    "library_type": ""
+  },
   "cloud.google.com/go/orgpolicy/apiv2": {
     "distribution_name": "cloud.google.com/go/orgpolicy/apiv2",
     "description": "Organization Policy API",
```

vendor/cloud.google.com/go/internal/retry.go (37 changes; generated, vendored)

```diff
@@ -16,9 +16,11 @@ package internal
 
 import (
 	"context"
+	"fmt"
 	"time"
 
 	gax "github.com/googleapis/gax-go/v2"
+	"google.golang.org/grpc/status"
 )
 
 // Retry calls the supplied function f repeatedly according to the provided
```
```diff
@@ -44,11 +46,40 @@ func retry(ctx context.Context, bo gax.Backoff, f func() (stop bool, err error),
 			lastErr = err
 		}
 		p := bo.Pause()
-		if cerr := sleep(ctx, p); cerr != nil {
+		if ctxErr := sleep(ctx, p); ctxErr != nil {
 			if lastErr != nil {
-				return Annotatef(lastErr, "retry failed with %v; last error", cerr)
+				return wrappedCallErr{ctxErr: ctxErr, wrappedErr: lastErr}
 			}
-			return cerr
+			return ctxErr
 		}
 	}
 }
+
+// Use this error type to return an error which allows introspection of both
+// the context error and the error from the service.
+type wrappedCallErr struct {
+	ctxErr     error
+	wrappedErr error
+}
+
+func (e wrappedCallErr) Error() string {
+	return fmt.Sprintf("retry failed with %v; last error: %v", e.ctxErr, e.wrappedErr)
+}
+
+func (e wrappedCallErr) Unwrap() error {
+	return e.wrappedErr
+}
+
+// Is allows errors.Is to match the error from the call as well as context
+// sentinel errors.
+func (e wrappedCallErr) Is(err error) bool {
+	return e.ctxErr == err || e.wrappedErr == err
+}
+
+// GRPCStatus allows the wrapped error to be used with status.FromError.
+func (e wrappedCallErr) GRPCStatus() *status.Status {
+	if s, ok := status.FromError(e.wrappedErr); ok {
+		return s
+	}
+	return nil
+}
```
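
The new `wrappedCallErr` keeps both failure causes visible to callers: `errors.Is` can match either the context sentinel or the last error the service returned before the deadline hit, and `GRPCStatus` keeps `status.FromError` working. A minimal, self-contained sketch of that behavior (the type mirrors the diff above; the `errQuota` sentinel and the demo harness are made up for illustration):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// wrappedCallErr mirrors the type added in retry.go above.
type wrappedCallErr struct {
	ctxErr     error
	wrappedErr error
}

func (e wrappedCallErr) Error() string {
	return fmt.Sprintf("retry failed with %v; last error: %v", e.ctxErr, e.wrappedErr)
}

func (e wrappedCallErr) Unwrap() error { return e.wrappedErr }

func (e wrappedCallErr) Is(err error) bool {
	return e.ctxErr == err || e.wrappedErr == err
}

var errQuota = errors.New("quota exhausted") // stand-in for a service error

func main() {
	err := wrappedCallErr{ctxErr: context.DeadlineExceeded, wrappedErr: errQuota}

	// Both checks succeed: callers can detect the timeout and still see the
	// last error the service returned before the context expired.
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
	fmt.Println(errors.Is(err, errQuota))                 // true
}
```

Under the old `Annotatef` call, the context error survived only inside the message string, so it could no longer be matched programmatically.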

vendor/cloud.google.com/go/internal/trace/trace.go (4 changes; generated, vendored)

```diff
@@ -19,6 +19,7 @@ import (
 	"fmt"
 
 	"go.opencensus.io/trace"
+	"golang.org/x/xerrors"
 	"google.golang.org/api/googleapi"
 	"google.golang.org/genproto/googleapis/rpc/code"
 	"google.golang.org/grpc/status"
```
```diff
@@ -42,7 +43,8 @@ func EndSpan(ctx context.Context, err error) {
 // toStatus interrogates an error and converts it to an appropriate
 // OpenCensus status.
 func toStatus(err error) trace.Status {
-	if err2, ok := err.(*googleapi.Error); ok {
+	var err2 *googleapi.Error
+	if ok := xerrors.As(err, &err2); ok {
 		return trace.Status{Code: httpStatusCodeToOCCode(err2.Code), Message: err2.Message}
 	} else if s, ok := status.FromError(err); ok {
 		return trace.Status{Code: int32(s.Code()), Message: s.Message()}
```
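
The switch from a direct type assertion to `xerrors.As` matters once errors arrive wrapped (for example by the `wrappedCallErr` above): an assertion only inspects the outermost value, while `As` walks the `Unwrap` chain. A standalone sketch, with a local `apiError` stub standing in for `*googleapi.Error`:

```go
package main

import (
	"errors"
	"fmt"
)

// apiError is a stub standing in for *googleapi.Error in this sketch.
type apiError struct{ Code int }

func (e *apiError) Error() string { return fmt.Sprintf("HTTP %d", e.Code) }

func main() {
	// A wrapped error, as produced by fmt.Errorf with the %w verb.
	err := fmt.Errorf("fetching object: %w", &apiError{Code: 404})

	// Old pattern: the assertion sees only the outer wrapper, so it misses.
	if _, ok := err.(*apiError); !ok {
		fmt.Println("type assertion: no match")
	}

	// New pattern: errors.As (which xerrors.As matches on Go 1.13+) unwraps.
	var ae *apiError
	if errors.As(err, &ae) {
		fmt.Println("errors.As match, code:", ae.Code) // 404
	}
}
```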

vendor/cloud.google.com/go/storage/CHANGES.md (12 changes; generated, vendored)

```diff
@@ -1,5 +1,17 @@
 # Changes
 
+## [1.17.0](https://www.github.com/googleapis/google-cloud-go/compare/storage/v1.16.1...storage/v1.17.0) (2021-09-28)
+
+
+### Features
+
+* **storage:** add projectNumber field to bucketAttrs. ([#4805](https://www.github.com/googleapis/google-cloud-go/issues/4805)) ([07343af](https://www.github.com/googleapis/google-cloud-go/commit/07343afc15085b164cc41d202d13f9d46f5c0d02))
+
+
+### Bug Fixes
+
+* **storage:** align retry idempotency (part 1) ([#4715](https://www.github.com/googleapis/google-cloud-go/issues/4715)) ([ffa903e](https://www.github.com/googleapis/google-cloud-go/commit/ffa903eeec61aa3869e5220e2f09371127b5c393))
+
 ### [1.16.1](https://www.github.com/googleapis/google-cloud-go/compare/storage/v1.16.0...storage/v1.16.1) (2021-08-30)
```

vendor/cloud.google.com/go/storage/acl.go (44 changes; generated, vendored)

```diff
@@ -133,11 +133,9 @@ func (a *ACLHandle) bucketDefaultList(ctx context.Context) ([]ACLRule, error) {
 }
 
 func (a *ACLHandle) bucketDefaultDelete(ctx context.Context, entity ACLEntity) error {
-	return runWithRetry(ctx, func() error {
-		req := a.c.raw.DefaultObjectAccessControls.Delete(a.bucket, string(entity))
-		a.configureCall(ctx, req)
-		return req.Do()
-	})
+	req := a.c.raw.DefaultObjectAccessControls.Delete(a.bucket, string(entity))
+	a.configureCall(ctx, req)
+	return req.Do()
 }
 
 func (a *ACLHandle) bucketList(ctx context.Context) ([]ACLRule, error) {
```
```diff
@@ -161,24 +159,16 @@ func (a *ACLHandle) bucketSet(ctx context.Context, entity ACLEntity, role ACLRol
 		Entity: string(entity),
 		Role:   string(role),
 	}
-	err := runWithRetry(ctx, func() error {
-		req := a.c.raw.BucketAccessControls.Update(a.bucket, string(entity), acl)
-		a.configureCall(ctx, req)
-		_, err := req.Do()
-		return err
-	})
-	if err != nil {
-		return err
-	}
-	return nil
+	req := a.c.raw.BucketAccessControls.Update(a.bucket, string(entity), acl)
+	a.configureCall(ctx, req)
+	_, err := req.Do()
+	return err
 }
 
 func (a *ACLHandle) bucketDelete(ctx context.Context, entity ACLEntity) error {
-	return runWithRetry(ctx, func() error {
-		req := a.c.raw.BucketAccessControls.Delete(a.bucket, string(entity))
-		a.configureCall(ctx, req)
-		return req.Do()
-	})
+	req := a.c.raw.BucketAccessControls.Delete(a.bucket, string(entity))
+	a.configureCall(ctx, req)
+	return req.Do()
 }
 
 func (a *ACLHandle) objectList(ctx context.Context) ([]ACLRule, error) {
```
```diff
@@ -214,18 +204,14 @@ func (a *ACLHandle) objectSet(ctx context.Context, entity ACLEntity, role ACLRol
 		req = a.c.raw.ObjectAccessControls.Update(a.bucket, a.object, string(entity), acl)
 	}
 	a.configureCall(ctx, req)
-	return runWithRetry(ctx, func() error {
-		_, err := req.Do()
-		return err
-	})
+	_, err := req.Do()
+	return err
 }
 
 func (a *ACLHandle) objectDelete(ctx context.Context, entity ACLEntity) error {
-	return runWithRetry(ctx, func() error {
-		req := a.c.raw.ObjectAccessControls.Delete(a.bucket, a.object, string(entity))
-		a.configureCall(ctx, req)
-		return req.Do()
-	})
+	req := a.c.raw.ObjectAccessControls.Delete(a.bucket, a.object, string(entity))
+	a.configureCall(ctx, req)
+	return req.Do()
 }
 
 func (a *ACLHandle) configureCall(ctx context.Context, call interface{ Header() http.Header }) {
```
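
These hunks implement the changelog entry "align retry idempotency (part 1)": per the changelog, writes that are not guaranteed idempotent should no longer be retried automatically, so the blanket `runWithRetry` wrapper around the ACL mutations (and, further down, the HMAC key creation) is dropped and only the bare call remains. For context, the helper being removed has roughly this shape; the version below is a simplified standalone sketch, not the package's real implementation, which uses gax backoff and service-specific retryability checks:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errTemporary = errors.New("temporary") // stand-in for a retryable failure

// runWithRetry sketches the wrapper being removed above: retry the call with
// exponential backoff until it succeeds, fails permanently, or ctx is done.
func runWithRetry(ctx context.Context, call func() error) error {
	delay := 100 * time.Millisecond
	for {
		err := call()
		if err == nil || !errors.Is(err, errTemporary) {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
			delay *= 2 // simplistic exponential backoff, no jitter or cap
		}
	}
}

func main() {
	attempts := 0
	err := runWithRetry(context.Background(), func() error {
		attempts++
		if attempts < 3 {
			return errTemporary
		}
		return nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```

The risk such a wrapper poses around a non-idempotent write is that a retry after a lost response could reapply a mutation that had already taken effect, which is why these paths stop using it.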

vendor/cloud.google.com/go/storage/bucket.go (5 changes; generated, vendored)

```diff
@@ -337,6 +337,10 @@ type BucketAttrs struct {
 	// Typical values are "multi-region", "region" and "dual-region".
 	// This field is read-only.
 	LocationType string
+
+	// The project number of the project the bucket belongs to.
+	// This field is read-only.
+	ProjectNumber uint64
 }
 
 // BucketPolicyOnly is an alias for UniformBucketLevelAccess.
```
```diff
@@ -597,6 +601,7 @@ func newBucket(b *raw.Bucket) (*BucketAttrs, error) {
 		PublicAccessPrevention: toPublicAccessPrevention(b.IamConfiguration),
 		Etag:                   b.Etag,
 		LocationType:           b.LocationType,
+		ProjectNumber:          b.ProjectNumber,
 	}, nil
 }
 
```
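
The new read-only field surfaces the owning project's number alongside the existing bucket attributes. A hypothetical usage sketch (assumes Application Default Credentials and access to a placeholder bucket named `my-bucket`):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx) // uses Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// "my-bucket" is a placeholder; substitute a bucket you can read.
	attrs, err := client.Bucket("my-bucket").Attrs(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("project number:", attrs.ProjectNumber)
}
```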

vendor/cloud.google.com/go/storage/go.mod (10 changes; generated, vendored)

```diff
@@ -3,14 +3,14 @@ module cloud.google.com/go/storage
 go 1.11
 
 require (
-	cloud.google.com/go v0.93.3
+	cloud.google.com/go v0.94.1
 	github.com/golang/protobuf v1.5.2
 	github.com/google/go-cmp v0.5.6
-	github.com/googleapis/gax-go/v2 v2.0.5
-	golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a
+	github.com/googleapis/gax-go/v2 v2.1.0
+	golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f
 	golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
-	google.golang.org/api v0.54.0
-	google.golang.org/genproto v0.0.0-20210825212027-de86158e7fda
+	google.golang.org/api v0.57.0
+	google.golang.org/genproto v0.0.0-20210921142501-181ce0d877f6
 	google.golang.org/grpc v1.40.0
 	google.golang.org/protobuf v1.27.1
 )
```

vendor/cloud.google.com/go/storage/go.sum (25 changes; generated, vendored)

```diff
@@ -22,8 +22,9 @@ cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAV
 cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSUM=
 cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY=
 cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ=
-cloud.google.com/go v0.93.3 h1:wPBktZFzYBcCZVARvwVKqH1uEj+aLXofJEtrb4oOsio=
 cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI=
+cloud.google.com/go v0.94.1 h1:DwuSvDZ1pTYGbXo8yOJevCTr3BoBlE+OVkHAKiYQUXc=
+cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4=
 cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
 cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
 cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
@@ -141,8 +142,9 @@ github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLe
 github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
 github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
-github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
 github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
+github.com/googleapis/gax-go/v2 v2.1.0 h1:6DWmvNpomjL1+3liNSZbVns3zsYzzCjm6pRBO1tLeso=
+github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
 github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
 github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
 github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
@@ -266,8 +268,9 @@ golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ
 golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
 golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
 golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a h1:4Kd8OPUx1xgUwrHDaviWZO8MsgoZTZYC3g+8m16RBww=
 golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f h1:Qmd2pbz05z7z6lm0DrgQVVPuBm92jqujBKMHMOlOQEw=
+golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
 golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -321,8 +324,10 @@ golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069 h1:siQdpVirKtzPhKl3lZWozZraCFObP8S1v6PRp0bLrtU=
 golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365 h1:6wSTsvPddg9gc/mVEEyk9oOAoxn+bT4Z9q1zx+4RwA4=
+golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -417,8 +422,10 @@ google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59t
 google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
 google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNefaw=
 google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU=
-google.golang.org/api v0.54.0 h1:ECJUVngj71QI6XEm7b1sAf8BljU5inEhMbKPR8Lxhhk=
 google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k=
+google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
+google.golang.org/api v0.57.0 h1:4t9zuDlHLcIx0ZEhmXEeFVCRsiOgpgn2QOH9N0MNjPI=
+google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI=
 google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
 google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
 google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -477,8 +484,12 @@ google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea/go.mod h1:AxrInvYm
 google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
 google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
 google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w=
-google.golang.org/genproto v0.0.0-20210825212027-de86158e7fda h1:iT5uhT54PtbqUsWddv/nnEWdE5e/MTr+Nv3vjxlBP1A=
-google.golang.org/genproto v0.0.0-20210825212027-de86158e7fda/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210921142501-181ce0d877f6 h1:2ncG/LajxmrclaZH+ppVi02rQxz4eXYJzGHdFN4Y9UA=
+google.golang.org/genproto v0.0.0-20210921142501-181ce0d877f6/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
 google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
 google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
 google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
```

vendor/cloud.google.com/go/storage/hmac.go (7 changes; generated, vendored)

```diff
@@ -214,12 +214,7 @@ func (c *Client) CreateHMACKey(ctx context.Context, projectID, serviceAccountEma
 
 	setClientHeader(call.Header())
 
-	var hkPb *raw.HmacKey
-	var err error
-	err = runWithRetry(ctx, func() error {
-		hkPb, err = call.Context(ctx).Do()
-		return err
-	})
+	hkPb, err := call.Context(ctx).Do()
 	if err != nil {
 		return nil, err
 	}
```

vendor/cloud.google.com/go/storage/internal/apiv2/doc.go (2 changes; generated, vendored)

```diff
@@ -68,7 +68,7 @@ import (
 type clientHookParams struct{}
 type clientHook func(context.Context, clientHookParams) ([]option.ClientOption, error)
 
-const versionClient = "20210821"
+const versionClient = "20210921"
 
 func insertMetadata(ctx context.Context, mds ...metadata.MD) context.Context {
 	out, _ := metadata.FromOutgoingContext(ctx)
```

vendor/cloud.google.com/go/storage/notifications.go (4 changes; generated, vendored)

```diff
@@ -184,5 +184,7 @@ func (b *BucketHandle) DeleteNotification(ctx context.Context, id string) (err e
 	if b.userProject != "" {
 		call.UserProject(b.userProject)
 	}
-	return call.Context(ctx).Do()
+	return runWithRetry(ctx, func() error {
+		return call.Context(ctx).Do()
+	})
 }
```
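
This is the other half of the idempotency alignment: deleting a notification is safe to repeat, so the call gains the `runWithRetry` wrapper instead of losing it. A hypothetical call site (client setup as in the earlier sketch; the bucket name and notification ID are placeholders):

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// With this change the delete is retried on transient failures; a second
	// delete of the same notification cannot corrupt state.
	if err := client.Bucket("my-bucket").DeleteNotification(ctx, "notification-id"); err != nil {
		log.Fatal(err)
	}
}
```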

Some files were not shown because too many files have changed in this diff.