Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git (synced 2025-03-01 15:33:35 +00:00)

Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

This commit is contained in: ae64c2db61

109 changed files with 3781 additions and 2736 deletions
Makefile (2 changed lines)
|
@ -283,7 +283,7 @@ golangci-lint: install-golangci-lint
|
|||
golangci-lint run --exclude '(SA4003|SA1019|SA5011):' -D errcheck -D structcheck --timeout 2m
|
||||
|
||||
install-golangci-lint:
|
||||
which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.44.0
|
||||
which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.44.1
|
||||
|
||||
install-wwhrd:
|
||||
which wwhrd || GO111MODULE=off go get github.com/frapposelli/wwhrd
|
||||
|
|
README.md (66 changed lines)
|
@ -274,6 +274,8 @@ VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/age
|
|||
|
||||
Run the DataDog agent with the `DD_DD_URL=http://victoriametrics-host:8428/datadog` environment variable in order to write data to VictoriaMetrics at the `victoriametrics-host` host. Another option is to set the `dd_url` param in the [DataDog agent configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files/) to `http://victoriametrics-host:8428/datadog`.
|
||||
|
||||
VictoriaMetrics doesn't check the `DD_API_KEY` param, so it can be set to an arbitrary value.
|
||||
|
||||
An example of sending data to VictoriaMetrics via the DataDog "submit metrics" API from the command line:
|
||||
|
||||
```bash
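# A sketch of such a request, using the DataDog-compatible endpoint /datadog/api/v1/series.
# The payload below is an assumed minimal "series" body; adjust the host, port, metric and tags as needed.
echo '{"series":[{"metric":"system.load.1","points":[[0,0.5]],"host":"test.example.com","tags":["environment:test"]}]}' \
  | curl -X POST -H 'Content-Type: application/json' --data-binary @- \
  'http://victoriametrics-host:8428/datadog/api/v1/series'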
|
||||
|
@ -330,7 +332,7 @@ and stream plain InfluxDB line protocol data to the configured TCP and/or UDP ad
|
|||
VictoriaMetrics performs the following transformations to the ingested InfluxDB data:
|
||||
|
||||
* [`db` query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
|
||||
unless `db` tag exists in the InfluxDB line.
|
||||
unless the `db` tag exists in the InfluxDB line. The `db` label name can be overridden via the `-influxDBLabel` command-line flag.
|
||||
* Field names are mapped to time series names prefixed with the `{measurement}{separator}` value, where `{separator}` equals `_` by default. It can be changed with the `-influxMeasurementFieldSeparator` command-line flag. See also the `-influxSkipSingleField` command-line flag. If `{measurement}` is empty or if the `-influxSkipMeasurement` command-line flag is set, then time series names correspond to field names (see the example below this list).
|
||||
* Field values are mapped to time series values.
|
||||
* Tags are mapped to Prometheus labels as-is.
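As a rough illustration of these rules (a sketch, assuming the default `_` separator and the default `db` label name), a single InfluxDB line can be written over HTTP like this:

```bash
# Write one InfluxDB line protocol sample to VictoriaMetrics over HTTP
curl -d 'foo,tag1=value1 field1=12,field2=40' -X POST 'http://victoriametrics-host:8428/write?db=test'
```

Given the transformations above, this would be expected to produce the series `foo_field1{tag1="value1",db="test"} 12` and `foo_field2{tag1="value1",db="test"} 40`.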
|
||||
|
@ -867,7 +869,7 @@ The [deduplication](#deduplication) isn't applied for the data exported in nativ
|
|||
|
||||
## How to import time series data
|
||||
|
||||
Time series data can be imported via any supported ingestion protocol:
|
||||
Time series data can be imported into VictoriaMetrics via any supported ingestion protocol:
|
||||
|
||||
* [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). See [these docs](#prometheus-setup) for details.
|
||||
* DataDog `submit metrics` API. See [these docs](#how-to-send-data-from-datadog-agent) for details.
|
||||
|
@ -1426,6 +1428,31 @@ See also more advanced [cardinality limiter in vmagent](https://docs.victoriamet
|
|||
VictoriaMetrics uses various internal caches. These caches are stored to the `<-storageDataPath>/cache` directory during graceful shutdown (e.g. when VictoriaMetrics is stopped by sending a `SIGINT` signal). The caches are read on the next VictoriaMetrics startup. Sometimes such caches need to be removed on the next startup. This can be done by placing a `reset_cache_on_startup` file inside the `<-storageDataPath>/cache` directory before the restart of VictoriaMetrics. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1447) for details.
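For example, assuming VictoriaMetrics runs with `-storageDataPath=/victoria-metrics-data` (a hypothetical path), the marker file can be created like this before the restart:

```bash
# Ask VictoriaMetrics to drop its persisted caches on the next startup
touch /victoria-metrics-data/cache/reset_cache_on_startup
```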
|
||||
|
||||
|
||||
## Cache tuning
|
||||
|
||||
VictoriaMetrics uses various in-memory caches for faster data ingestion and query performance.
|
||||
The following metrics are exported for each type of cache at the [`/metrics` page](#monitoring) (see the query example below this list):
|
||||
- `vm_cache_size_bytes` - the actual cache size
|
||||
- `vm_cache_size_max_bytes` - cache size limit
|
||||
- `vm_cache_requests_total` - the number of requests to the cache
|
||||
- `vm_cache_misses_total` - the number of cache misses
|
||||
- `vm_cache_entries` - the number of entries in the cache
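For example, the hit rate of a particular cache can be derived from these metrics with a Prometheus-style query. A sketch, assuming each cache is identified by a `type` label (as with the `storage/tsid` cache mentioned below):

```bash
# Query the storage/tsid cache hit rate over the last 5 minutes via the Prometheus querying API
curl -s 'http://victoriametrics-host:8428/api/v1/query' \
  --data-urlencode 'query=1 - rate(vm_cache_misses_total{type="storage/tsid"}[5m]) / rate(vm_cache_requests_total{type="storage/tsid"}[5m])'
```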
|
||||
|
||||
Both Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
|
||||
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176)
|
||||
contain a `Caches` section with cache metrics visualized. The panels show the current
|
||||
memory usage by each type of cache, as well as the cache hit rate. If the hit rate is close to 100%,
|
||||
then cache efficiency is already very high and does not need any tuning.
|
||||
The `Cache usage %` panel in the `Troubleshooting` section shows the percentage of the used cache size
|
||||
relative to the allowed size, per cache type. If the percentage is below 100%, then no further tuning is needed.
|
||||
|
||||
Please note that the default cache sizes were carefully adjusted according to the most
|
||||
practical scenarios and workloads. Change the defaults only if you understand the implications.
|
||||
|
||||
To override the default values, see the command-line flags with the `-storage.cacheSize` prefix.
|
||||
See the full description of flags [here](#list-of-command-line-flags).
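For instance, a single-node instance could be started with an increased `storage/tsid` cache limit. A sketch (the binary path and the 1GiB value are arbitrary examples):

```bash
# Start VictoriaMetrics with an explicit storage/tsid cache size limit
/path/to/victoria-metrics-prod -storageDataPath=/victoria-metrics-data -storage.cacheSizeStorageTSID=1GiB
```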
|
||||
|
||||
|
||||
## Data migration
|
||||
|
||||
Use [vmctl](https://docs.victoriametrics.com/vmctl.html) for data migration. It supports the following data migration types:
|
||||
|
@ -1460,7 +1487,7 @@ cache when samples with timestamps older than `now - search.cacheTimestampOffset
|
|||
|
||||
VictoriaMetrics doesn't support updating already existing sample values to new ones. It stores all the ingested data points
|
||||
for the same time series with identical timestamps. While it is possible to substitute old time series with new time series via
|
||||
[removal of old time series](#how-to-delete-timeseries) and then [writing new time series](#backfilling), this approach
|
||||
[removal of old time series](#how-to-delete-time-series) and then [writing new time series](#backfilling), this approach
|
||||
should be used only for one-off updates. It shouldn't be used for frequent updates because of non-zero overhead related to data removal.
|
||||
|
||||
|
||||
|
@ -1498,18 +1525,26 @@ for running ingestion benchmarks based on node_exporter metrics.
|
|||
|
||||
VictoriaMetrics provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
|
||||
|
||||
* Memory profile. It can be collected with the following command:
|
||||
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<victoria-metrics-host>:8428/debug/pprof/heap > mem.pprof
|
||||
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
|
||||
```
|
||||
|
||||
* CPU profile. It can be collected with the following command:
|
||||
</div>
|
||||
|
||||
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<victoria-metrics-host>:8428/debug/pprof/profile > cpu.pprof
|
||||
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
The command for collecting CPU profile waits for 30 seconds before returning.
|
||||
|
||||
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
|
||||
|
@ -1678,6 +1713,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-influx.maxLineSize size
|
||||
The maximum size in bytes for a single InfluxDB line during parsing
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 262144)
|
||||
-influxDBLabel string
|
||||
Default label for the DB name sent over '?db={db_name}' query parameter (default "db")
|
||||
-influxListenAddr string
|
||||
TCP and UDP address to listen for InfluxDB line protocol data. Usually :8189 must be set. Doesn't work if empty. This flag isn't needed when ingesting data over HTTP - just send it to http://<victoriametrics>:8428/write
|
||||
-influxMeasurementFieldSeparator string
|
||||
|
@ -1778,7 +1815,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-promscrape.eurekaSDCheckInterval duration
|
||||
Interval for checking for changes in eureka. This works only if eureka_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config for details (default 30s)
|
||||
-promscrape.fileSDCheckInterval duration
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 30s)
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 5m0s)
|
||||
-promscrape.gceSDCheckInterval duration
|
||||
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
|
||||
-promscrape.httpSDCheckInterval duration
|
||||
|
@ -1888,6 +1925,15 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
authKey, which must be passed in query string to /snapshot* pages
|
||||
-sortLabels
|
||||
Whether to sort labels for incoming samples before writing them to storage. This may be needed for reducing memory usage at storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}. Enabled sorting for labels can slow down ingestion performance a bit
|
||||
-storage.cacheSizeIndexDBDataBlocks size
|
||||
Overrides max size for indexdb/dataBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
|
||||
-storage.cacheSizeIndexDBIndexBlocks size
|
||||
Overrides max size for indexdb/indexBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
|
||||
-storage.cacheSizeStorageTSID size
|
||||
Overrides max size for storage/tsid cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
|
||||
-storage.maxDailySeries int
|
||||
The maximum number of unique series can be added to the storage during the last 24 hours. Excess series are logged and dropped. This can be useful for limiting series churn rate. See also -storage.maxHourlySeries
|
||||
-storage.maxHourlySeries int
|
||||
|
@ -1900,9 +1946,9 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
|
@ -685,18 +685,26 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
|
|||
|
||||
`vmagent` provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
|
||||
|
||||
* Memory profile can be collected with the following command:
|
||||
* Memory profile can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<vmagent-host>:8429/debug/pprof/heap > mem.pprof
|
||||
curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
|
||||
```
|
||||
|
||||
* CPU profile can be collected with the following command:
|
||||
</div>
|
||||
|
||||
* CPU profile can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<vmagent-host>:8429/debug/pprof/profile > cpu.pprof
|
||||
curl http://0.0.0.0:8429/debug/pprof/profile > cpu.pprof
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
The command for collecting CPU profile waits for 30 seconds before returning.
|
||||
|
||||
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
|
||||
|
@ -763,6 +771,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
|
|||
-influx.maxLineSize size
|
||||
The maximum size in bytes for a single InfluxDB line during parsing
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 262144)
|
||||
-influxDBLabel string
|
||||
Default label for the DB name sent over '?db={db_name}' query parameter (default "db")
|
||||
-influxListenAddr string
|
||||
TCP and UDP address to listen for InfluxDB line protocol data. Usually :8189 must be set. Doesn't work if empty. This flag isn't needed when ingesting data over HTTP - just send it to http://<vmagent>:8429/write
|
||||
-influxMeasurementFieldSeparator string
|
||||
|
@ -881,7 +891,7 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
|
|||
-promscrape.eurekaSDCheckInterval duration
|
||||
Interval for checking for changes in eureka. This works only if eureka_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config for details (default 30s)
|
||||
-promscrape.fileSDCheckInterval duration
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 30s)
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 5m0s)
|
||||
-promscrape.gceSDCheckInterval duration
|
||||
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
|
||||
-promscrape.httpSDCheckInterval duration
|
||||
|
@ -1017,9 +1027,9 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
|
@ -24,6 +24,7 @@ var (
|
|||
measurementFieldSeparator = flag.String("influxMeasurementFieldSeparator", "_", "Separator for '{measurement}{separator}{field_name}' metric name when inserted via InfluxDB line protocol")
|
||||
skipSingleField = flag.Bool("influxSkipSingleField", false, "Uses '{measurement}' instead of '{measurement}{separator}{field_name}' for metric name if InfluxDB line contains only a single field")
|
||||
skipMeasurement = flag.Bool("influxSkipMeasurement", false, "Uses '{field_name}' as a metric name while ignoring '{measurement}' and '-influxMeasurementFieldSeparator'")
|
||||
dbLabel = flag.String("influxDBLabel", "db", "Default label for the DB name sent over '?db={db_name}' query parameter")
|
||||
)
|
||||
|
||||
var (
|
||||
|
@ -80,7 +81,7 @@ func insertRows(at *auth.Token, db string, rows []parser.Row, extraLabels []prom
|
|||
hasDBKey := false
|
||||
for j := range r.Tags {
|
||||
tag := &r.Tags[j]
|
||||
if tag.Key == "db" {
|
||||
if tag.Key == *dbLabel {
|
||||
hasDBKey = true
|
||||
}
|
||||
commonLabels = append(commonLabels, prompbmarshal.Label{
|
||||
|
@ -90,7 +91,7 @@ func insertRows(at *auth.Token, db string, rows []parser.Row, extraLabels []prom
|
|||
}
|
||||
if len(db) > 0 && !hasDBKey {
|
||||
commonLabels = append(commonLabels, prompbmarshal.Label{
|
||||
Name: "db",
|
||||
Name: *dbLabel,
|
||||
Value: db,
|
||||
})
|
||||
}
|
||||
|
|
|
@ -15,7 +15,7 @@ implementation and aims to be compatible with its syntax.
|
|||
support and expressions validation;
|
||||
* Prometheus [alerting rules definition format](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#defining-alerting-rules)
|
||||
support;
|
||||
* Integration with [Alertmanager](https://github.com/prometheus/alertmanager);
|
||||
* Integration with [Alertmanager](https://github.com/prometheus/alertmanager) starting from [Alertmanager v0.16.0-alpha](https://github.com/prometheus/alertmanager/releases/tag/v0.16.0-alpha.0);
|
||||
* Keeps the alerts [state on restarts](#alerts-state-on-restarts);
|
||||
* Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite);
|
||||
* Recording and Alerting rules backfilling (aka `replay`). See [these docs](#rules-backfilling);
|
||||
|
@ -728,9 +728,9 @@ The shortlist of configuration flags is the following:
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
|
@ -25,6 +25,7 @@ groups:
|
|||
dynamic: '{{ $x := query "up" | first | value }}{{ if eq 1.0 $x }}one{{ else }}unknown{{ end }}'
|
||||
annotations:
|
||||
description: Job {{ $labels.job }} is up!
|
||||
external: cluster-{{ $externalLabels.cluster }}; replica-{{ $externalLabels.replica }}
|
||||
summary: All instances up {{ range query "up" }}
|
||||
{{ . | label "instance" }}
|
||||
{{ end }}
|
||||
|
|
|
@ -167,7 +167,20 @@ func newManager(ctx context.Context) (*manager, error) {
|
|||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to init datasource: %w", err)
|
||||
}
|
||||
nts, err := notifier.Init(alertURLGeneratorFn)
|
||||
|
||||
labels := make(map[string]string, 0)
|
||||
for _, s := range *externalLabels {
|
||||
if len(s) == 0 {
|
||||
continue
|
||||
}
|
||||
n := strings.IndexByte(s, '=')
|
||||
if n < 0 {
|
||||
return nil, fmt.Errorf("missing '=' in `-label`. It must contain label in the form `Name=value`; got %q", s)
|
||||
}
|
||||
labels[s[:n]] = s[n+1:]
|
||||
}
|
||||
|
||||
nts, err := notifier.Init(alertURLGeneratorFn, labels, *externalURL)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to init notifier: %w", err)
|
||||
}
|
||||
|
@ -175,7 +188,7 @@ func newManager(ctx context.Context) (*manager, error) {
|
|||
groups: make(map[uint64]*Group),
|
||||
querierBuilder: q,
|
||||
notifiers: nts,
|
||||
labels: map[string]string{},
|
||||
labels: labels,
|
||||
}
|
||||
rw, err := remotewrite.Init(ctx)
|
||||
if err != nil {
|
||||
|
@ -189,16 +202,6 @@ func newManager(ctx context.Context) (*manager, error) {
|
|||
}
|
||||
manager.rr = rr
|
||||
|
||||
for _, s := range *externalLabels {
|
||||
if len(s) == 0 {
|
||||
continue
|
||||
}
|
||||
n := strings.IndexByte(s, '=')
|
||||
if n < 0 {
|
||||
return nil, fmt.Errorf("missing '=' in `-label`. It must contain label in the form `Name=value`; got %q", s)
|
||||
}
|
||||
manager.labels[s[:n]] = s[n+1:]
|
||||
}
|
||||
return manager, nil
|
||||
}
|
||||
|
||||
|
|
|
@ -70,7 +70,13 @@ type AlertTplData struct {
|
|||
Expr string
|
||||
}
|
||||
|
||||
const tplHeader = `{{ $value := .Value }}{{ $labels := .Labels }}{{ $expr := .Expr }}`
|
||||
var tplHeaders = []string{
|
||||
"{{ $value := .Value }}",
|
||||
"{{ $labels := .Labels }}",
|
||||
"{{ $expr := .Expr }}",
|
||||
"{{ $externalLabels := .ExternalLabels }}",
|
||||
"{{ $externalURL := .ExternalURL }}",
|
||||
}
|
||||
|
||||
// ExecTemplate executes the Alert template for given
|
||||
// map of annotations.
|
||||
|
@ -100,13 +106,15 @@ func templateAnnotations(annotations map[string]string, data AlertTplData, funcs
|
|||
var buf bytes.Buffer
|
||||
eg := new(utils.ErrGroup)
|
||||
r := make(map[string]string, len(annotations))
|
||||
tData := tplData{data, externalLabels, externalURL}
|
||||
header := strings.Join(tplHeaders, "")
|
||||
for key, text := range annotations {
|
||||
buf.Reset()
|
||||
builder.Reset()
|
||||
builder.Grow(len(tplHeader) + len(text))
|
||||
builder.WriteString(tplHeader)
|
||||
builder.Grow(len(header) + len(text))
|
||||
builder.WriteString(header)
|
||||
builder.WriteString(text)
|
||||
if err := templateAnnotation(&buf, builder.String(), data, funcs); err != nil {
|
||||
if err := templateAnnotation(&buf, builder.String(), tData, funcs); err != nil {
|
||||
r[key] = text
|
||||
eg.Add(fmt.Errorf("key %q, template %q: %w", key, text, err))
|
||||
continue
|
||||
|
@ -116,7 +124,13 @@ func templateAnnotations(annotations map[string]string, data AlertTplData, funcs
|
|||
return r, eg.Err()
|
||||
}
|
||||
|
||||
func templateAnnotation(dst io.Writer, text string, data AlertTplData, funcs template.FuncMap) error {
|
||||
type tplData struct {
|
||||
AlertTplData
|
||||
ExternalLabels map[string]string
|
||||
ExternalURL string
|
||||
}
|
||||
|
||||
func templateAnnotation(dst io.Writer, text string, data tplData, funcs template.FuncMap) error {
|
||||
t := template.New("").Funcs(funcs).Option("missingkey=zero")
|
||||
tpl, err := t.Parse(text)
|
||||
if err != nil {
|
||||
|
|
|
@ -1,12 +1,24 @@
|
|||
package notifier
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
|
||||
)
|
||||
|
||||
func TestAlert_ExecTemplate(t *testing.T) {
|
||||
extLabels := make(map[string]string, 0)
|
||||
const (
|
||||
extCluster = "prod"
|
||||
extDC = "east"
|
||||
extURL = "https://foo.bar"
|
||||
)
|
||||
extLabels["cluster"] = extCluster
|
||||
extLabels["dc"] = extDC
|
||||
_, err := Init(nil, extLabels, extURL)
|
||||
checkErr(t, err)
|
||||
|
||||
testCases := []struct {
|
||||
name string
|
||||
alert *Alert
|
||||
|
@ -74,6 +86,26 @@ func TestAlert_ExecTemplate(t *testing.T) {
|
|||
"desc": "bar 1;garply 2;",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "external",
|
||||
alert: &Alert{
|
||||
Value: 1e4,
|
||||
Labels: map[string]string{
|
||||
"job": "staging",
|
||||
"instance": "localhost",
|
||||
},
|
||||
},
|
||||
annotations: map[string]string{
|
||||
"url": "{{ $externalURL }}",
|
||||
"summary": "Issues with {{$labels.instance}} (dc-{{$externalLabels.dc}}) for job {{$labels.job}}",
|
||||
"description": "It is {{ $value }} connections for {{$labels.instance}} (cluster-{{$externalLabels.cluster}})",
|
||||
},
|
||||
expTpl: map[string]string{
|
||||
"url": extURL,
|
||||
"summary": fmt.Sprintf("Issues with localhost (dc-%s) for job staging", extDC),
|
||||
"description": fmt.Sprintf("It is 10000 connections for localhost (cluster-%s)", extCluster),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
qFn := func(q string) ([]datasource.Metric, error) {
|
||||
|
|
|
@ -73,35 +73,18 @@ func (cw *configWatcher) reload(path string) error {
|
|||
}
|
||||
|
||||
// stop existing discovery
|
||||
close(cw.syncCh)
|
||||
cw.wg.Wait()
|
||||
cw.mustStop()
|
||||
|
||||
// re-start cw with new config
|
||||
cw.syncCh = make(chan struct{})
|
||||
cw.cfg = cfg
|
||||
|
||||
cw.resetTargets()
|
||||
return cw.start()
|
||||
}
|
||||
|
||||
const (
|
||||
addRetryBackoff = time.Millisecond * 100
|
||||
addRetryCount = 2
|
||||
)
|
||||
|
||||
func (cw *configWatcher) add(typeK TargetType, interval time.Duration, labelsFn getLabels) error {
|
||||
var targets []Target
|
||||
var errors []error
|
||||
var count int
|
||||
for { // retry addRetryCount times if first discovery attempts gave no results
|
||||
targets, errors = targetsFromLabels(labelsFn, cw.cfg, cw.genFn)
|
||||
for _, err := range errors {
|
||||
return fmt.Errorf("failed to init notifier for %q: %s", typeK, err)
|
||||
}
|
||||
if len(targets) > 0 || count >= addRetryCount {
|
||||
break
|
||||
}
|
||||
time.Sleep(addRetryBackoff)
|
||||
targets, errors := targetsFromLabels(labelsFn, cw.cfg, cw.genFn)
|
||||
for _, err := range errors {
|
||||
return fmt.Errorf("failed to init notifier for %q: %s", typeK, err)
|
||||
}
|
||||
|
||||
cw.setTargets(typeK, targets)
|
||||
|
@ -215,7 +198,10 @@ func (cw *configWatcher) start() error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func (cw *configWatcher) resetTargets() {
|
||||
func (cw *configWatcher) mustStop() {
|
||||
close(cw.syncCh)
|
||||
cw.wg.Wait()
|
||||
|
||||
cw.targetsMu.Lock()
|
||||
for _, targets := range cw.targets {
|
||||
for _, t := range targets {
|
||||
|
@ -224,6 +210,11 @@ func (cw *configWatcher) resetTargets() {
|
|||
}
|
||||
cw.targets = make(map[TargetType][]Target)
|
||||
cw.targetsMu.Unlock()
|
||||
|
||||
for i := range cw.cfg.ConsulSDConfigs {
|
||||
cw.cfg.ConsulSDConfigs[i].MustStop()
|
||||
}
|
||||
cw.cfg = nil
|
||||
}
|
||||
|
||||
func (cw *configWatcher) setTargets(key TargetType, targets []Target) {
|
||||
|
|
|
@ -9,9 +9,6 @@ import (
|
|||
"os"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape/discovery/consul"
|
||||
)
|
||||
|
||||
func TestConfigWatcherReload(t *testing.T) {
|
||||
|
@ -31,6 +28,7 @@ static_configs:
|
|||
if err != nil {
|
||||
t.Fatalf("failed to start config watcher: %s", err)
|
||||
}
|
||||
defer cw.mustStop()
|
||||
ns := cw.notifiers()
|
||||
if len(ns) != 2 {
|
||||
t.Fatalf("expected to have 2 notifiers; got %d %#v", len(ns), ns)
|
||||
|
@ -78,16 +76,11 @@ consul_sd_configs:
|
|||
- alertmanager
|
||||
`, consulSDServer.URL))
|
||||
|
||||
prevCheckInterval := *consul.SDCheckInterval
|
||||
defer func() { *consul.SDCheckInterval = prevCheckInterval }()
|
||||
|
||||
*consul.SDCheckInterval = time.Millisecond * 100
|
||||
|
||||
cw, err := newWatcher(consulSDFile.Name(), nil)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to start config watcher: %s", err)
|
||||
}
|
||||
time.Sleep(*consul.SDCheckInterval * 2)
|
||||
defer cw.mustStop()
|
||||
|
||||
if len(cw.notifiers()) != 2 {
|
||||
t.Fatalf("expected to get 2 notifiers; got %d", len(cw.notifiers()))
|
||||
|
@ -161,6 +154,7 @@ consul_sd_configs:
|
|||
if err != nil {
|
||||
t.Fatalf("failed to start config watcher: %s", err)
|
||||
}
|
||||
defer cw.mustStop()
|
||||
|
||||
const workers = 500
|
||||
const iterations = 10
|
||||
|
|
|
@ -46,13 +46,29 @@ func Reload() error {
|
|||
|
||||
var staticNotifiersFn func() []Notifier
|
||||
|
||||
var (
|
||||
// externalLabels is a global variable for holding external labels configured via flags
|
||||
// It is supposed to be inited via Init function only.
|
||||
externalLabels map[string]string
|
||||
// externalURL is a global variable for holding external URL value configured via flag
|
||||
// It is supposed to be inited via Init function only.
|
||||
externalURL string
|
||||
)
|
||||
|
||||
// Init returns a function for retrieving actual list of Notifier objects.
|
||||
// Init works in two modes:
|
||||
// * configuration via flags (for backward compatibility). It is always static
|
||||
// and doesn't support live reloads.
|
||||
// * configuration via file. Supports live reloads and service discovery.
|
||||
// Init returns an error if both modes are used.
|
||||
func Init(gen AlertURLGenerator) (func() []Notifier, error) {
|
||||
func Init(gen AlertURLGenerator, extLabels map[string]string, extURL string) (func() []Notifier, error) {
|
||||
if externalLabels != nil || externalURL != "" {
|
||||
return nil, fmt.Errorf("BUG: notifier.Init was called multiple times")
|
||||
}
|
||||
|
||||
externalURL = extURL
|
||||
externalLabels = extLabels
|
||||
|
||||
if *configPath == "" && len(*addrs) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
|
|
@ -188,18 +188,26 @@ ROOT_IMAGE=scratch make package-vmauth
|
|||
|
||||
`vmauth` provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
|
||||
|
||||
* Memory profile. It can be collected with the following command:
|
||||
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<vmauth-host>:8427/debug/pprof/heap > mem.pprof
|
||||
curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
|
||||
```
|
||||
|
||||
* CPU profile. It can be collected with the following command:
|
||||
</div>
|
||||
|
||||
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<vmauth-host>:8427/debug/pprof/profile > cpu.pprof
|
||||
curl http://0.0.0.0:8427/debug/pprof/profile > cpu.pprof
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
The command for collecting CPU profile waits for 30 seconds before returning.
|
||||
|
||||
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
|
||||
|
@ -277,9 +285,9 @@ See the docs at https://docs.victoriametrics.com/vmauth.html .
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
|
@ -23,6 +23,7 @@ var (
|
|||
measurementFieldSeparator = flag.String("influxMeasurementFieldSeparator", "_", "Separator for '{measurement}{separator}{field_name}' metric name when inserted via InfluxDB line protocol")
|
||||
skipSingleField = flag.Bool("influxSkipSingleField", false, "Uses '{measurement}' instead of '{measurement}{separator}{field_name}' for metric name if InfluxDB line contains only a single field")
|
||||
skipMeasurement = flag.Bool("influxSkipMeasurement", false, "Uses '{field_name}' as a metric name while ignoring '{measurement}' and '-influxMeasurementFieldSeparator'")
|
||||
dbLabel = flag.String("influxDBLabel", "db", "Default label for the DB name sent over '?db={db_name}' query parameter")
|
||||
)
|
||||
|
||||
var (
|
||||
|
@ -80,13 +81,13 @@ func insertRows(db string, rows []parser.Row, extraLabels []prompbmarshal.Label)
|
|||
hasDBKey := false
|
||||
for j := range r.Tags {
|
||||
tag := &r.Tags[j]
|
||||
if tag.Key == "db" {
|
||||
if tag.Key == *dbLabel {
|
||||
hasDBKey = true
|
||||
}
|
||||
ic.AddLabel(tag.Key, tag.Value)
|
||||
}
|
||||
if !hasDBKey {
|
||||
ic.AddLabel("db", db)
|
||||
ic.AddLabel(*dbLabel, db)
|
||||
}
|
||||
for j := range extraLabels {
|
||||
label := &extraLabels[j]
|
||||
|
|
|
@ -148,6 +148,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
|
|||
return true
|
||||
case "/influx/write", "/influx/api/v2/write", "/write", "/api/v2/write":
|
||||
influxWriteRequests.Inc()
|
||||
addInfluxResponseHeaders(w)
|
||||
if err := influx.InsertHandlerForHTTP(r); err != nil {
|
||||
influxWriteErrors.Inc()
|
||||
httpserver.Errorf(w, r, "%s", err)
|
||||
|
@ -157,6 +158,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
|
|||
return true
|
||||
case "/influx/query", "/query":
|
||||
influxQueryRequests.Inc()
|
||||
addInfluxResponseHeaders(w)
|
||||
influxutils.WriteDatabaseNames(w)
|
||||
return true
|
||||
case "/datadog/api/v1/series":
|
||||
|
@ -241,6 +243,12 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
|
|||
}
|
||||
}
|
||||
|
||||
func addInfluxResponseHeaders(w http.ResponseWriter) {
|
||||
// This is needed for some clients, which expect InfluxDB version header.
|
||||
// See, for example, https://github.com/ntop/ntopng/issues/5449#issuecomment-1005347597
|
||||
w.Header().Set("X-Influxdb-Version", "1.8.0")
|
||||
}
|
||||
|
||||
var (
|
||||
requestDuration = metrics.NewHistogram(`vminsert_request_duration_seconds`)
|
||||
|
||||
|
|
|
@ -1,12 +1,12 @@
|
|||
{
|
||||
"files": {
|
||||
"main.css": "./static/css/main.098d452b.css",
|
||||
"main.js": "./static/js/main.c945b173.js",
|
||||
"main.js": "./static/js/main.2a9382c2.js",
|
||||
"static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js",
|
||||
"index.html": "./index.html"
|
||||
},
|
||||
"entrypoints": [
|
||||
"static/css/main.098d452b.css",
|
||||
"static/js/main.c945b173.js"
|
||||
"static/js/main.2a9382c2.js"
|
||||
]
|
||||
}
|
|
@ -1 +1 @@
|
|||
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script defer="defer" src="./static/js/main.c945b173.js"></script><link href="./static/css/main.098d452b.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
|
||||
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script defer="defer" src="./static/js/main.2a9382c2.js"></script><link href="./static/css/main.098d452b.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
|
app/vmselect/vmui/static/js/main.2a9382c2.js (new file, 2 changed lines)
File diff suppressed because one or more lines are too long
|
@ -6,7 +6,7 @@
|
|||
* @license MIT
|
||||
*/
|
||||
|
||||
/** @license MUI v5.4.1
|
||||
/** @license MUI v5.4.2
|
||||
*
|
||||
* This source code is licensed under the MIT license found in the
|
||||
* LICENSE file in the root directory of this source tree.
|
File diff suppressed because one or more lines are too long
|
@ -16,6 +16,7 @@ import (
|
|||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/syncwg"
|
||||
"github.com/VictoriaMetrics/metrics"
|
||||
|
@ -49,6 +50,10 @@ var (
|
|||
"Excess series are logged and dropped. This can be useful for limiting series churn rate. See also -storage.maxHourlySeries")
|
||||
|
||||
minFreeDiskSpaceBytes = flagutil.NewBytes("storage.minFreeDiskSpaceBytes", 10e6, "The minimum free disk space at -storageDataPath after which the storage stops accepting new data")
|
||||
|
||||
cacheSizeStorageTSID = flagutil.NewBytes("storage.cacheSizeStorageTSID", 0, "Overrides max size for storage/tsid cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning")
|
||||
cacheSizeIndexDBIndexBlocks = flagutil.NewBytes("storage.cacheSizeIndexDBIndexBlocks", 0, "Overrides max size for indexdb/indexBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning")
|
||||
cacheSizeIndexDBDataBlocks = flagutil.NewBytes("storage.cacheSizeIndexDBDataBlocks", 0, "Overrides max size for indexdb/dataBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning")
|
||||
)
|
||||
|
||||
// CheckTimeRange returns true if the given tr is denied for querying.
|
||||
|
@ -86,6 +91,9 @@ func InitWithoutMetrics(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
|
|||
storage.SetBigMergeWorkersCount(*bigMergeConcurrency)
|
||||
storage.SetSmallMergeWorkersCount(*smallMergeConcurrency)
|
||||
storage.SetFreeDiskSpaceLimit(minFreeDiskSpaceBytes.N)
|
||||
storage.SetTSIDCacheSize(cacheSizeStorageTSID.N)
|
||||
mergeset.SetIndexBlocksCacheSize(cacheSizeIndexDBIndexBlocks.N)
|
||||
mergeset.SetDataBlocksCacheSize(cacheSizeIndexDBDataBlocks.N)
|
||||
|
||||
logger.Infof("opening storage at %q with -retentionPeriod=%s", *DataPath, retentionPeriod)
|
||||
startTime := time.Now()
|
||||
|
|
|
@ -39,8 +39,6 @@ Instead, it will copy all the configuration files and the transitive dependencie
|
|||
|
||||
You don’t ever have to use `eject`. The curated feature set is suitable for small and middle-sized deployments, and you shouldn’t feel obligated to use this feature. However, we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.
|
||||
|
||||
**Note:** this [Dockerfile](https://github.com/VictoriaMetrics/vmui/blob/master/packages/vmui/Dockerfile) uses the static files built by [npm run build](https://github.com/VictoriaMetrics/vmui/tree/master/packages/vmui#npm-run-eject).
|
||||
|
||||
## Learn More
|
||||
|
||||
You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started).
|
||||
|
|
app/vmui/packages/vmui/package-lock.json (generated, 686 changed lines)
File diff suppressed because it is too large
|
@ -4,20 +4,20 @@
|
|||
"private": true,
|
||||
"homepage": "./",
|
||||
"dependencies": {
|
||||
"@date-io/dayjs": "^2.11.0",
|
||||
"@emotion/styled": "^11.6.0",
|
||||
"@mui/icons-material": "^5.4.1",
|
||||
"@mui/lab": "^5.0.0-alpha.68",
|
||||
"@mui/material": "^5.4.1",
|
||||
"@mui/styles": "^5.4.1",
|
||||
"@date-io/dayjs": "^2.13.1",
|
||||
"@emotion/styled": "^11.8.1",
|
||||
"@mui/icons-material": "^5.4.2",
|
||||
"@mui/lab": "^5.0.0-alpha.70",
|
||||
"@mui/material": "^5.4.3",
|
||||
"@mui/styles": "^5.4.2",
|
||||
"@testing-library/jest-dom": "^5.16.2",
|
||||
"@testing-library/react": "^12.1.2",
|
||||
"@testing-library/react": "^12.1.3",
|
||||
"@testing-library/user-event": "^13.5.0",
|
||||
"@types/jest": "^27.4.0",
|
||||
"@types/lodash.debounce": "^4.0.6",
|
||||
"@types/lodash.get": "^4.4.6",
|
||||
"@types/lodash.throttle": "^4.1.6",
|
||||
"@types/node": "^17.0.17",
|
||||
"@types/node": "^17.0.19",
|
||||
"@types/qs": "^6.9.7",
|
||||
"@types/react": "^17.0.39",
|
||||
"@types/react-dom": "^17.0.11",
|
||||
|
@ -26,7 +26,7 @@
|
|||
"lodash.debounce": "^4.0.8",
|
||||
"lodash.get": "^4.4.2",
|
||||
"lodash.throttle": "^4.1.1",
|
||||
"preact": "^10.6.5",
|
||||
"preact": "^10.6.6",
|
||||
"qs": "^6.10.3",
|
||||
"typescript": "~4.5.5",
|
||||
"uplot": "^1.6.19",
|
||||
|
@ -60,10 +60,10 @@
|
|||
},
|
||||
"devDependencies": {
|
||||
"@babel/plugin-proposal-nullish-coalescing-operator": "^7.16.7",
|
||||
"@typescript-eslint/eslint-plugin": "^5.11.0",
|
||||
"@typescript-eslint/parser": "^5.11.0",
|
||||
"@typescript-eslint/eslint-plugin": "^5.12.1",
|
||||
"@typescript-eslint/parser": "^5.12.1",
|
||||
"customize-cra": "^1.0.0",
|
||||
"eslint-plugin-react": "^7.28.0",
|
||||
"react-app-rewired": "^2.1.11"
|
||||
"react-app-rewired": "^2.2.1"
|
||||
}
|
||||
}
|
||||
|
|
|
@ -31,21 +31,23 @@ const QueryEditor: FC<QueryEditorProps> = ({
|
|||
queryOptions
|
||||
}) => {
|
||||
|
||||
const [downMetaKeys, setDownMetaKeys] = useState<string[]>([]);
|
||||
const [focusField, setFocusField] = useState(false);
|
||||
const [focusOption, setFocusOption] = useState(-1);
|
||||
const autocompleteAnchorEl = useRef<HTMLDivElement>(null);
|
||||
const wrapperEl = useRef<HTMLUListElement>(null);
|
||||
|
||||
const openAutocomplete = useMemo(() => {
|
||||
return !(!autocomplete || downMetaKeys.length || query.length < 2 || !focusField);
|
||||
}, [query, downMetaKeys, autocomplete, focusField]);
|
||||
const words = (query.match(/[a-zA-Z_:.][a-zA-Z0-9_:.]*/gm) || []).length;
|
||||
return !(!autocomplete || query.length < 2 || words > 1 || !focusField);
|
||||
}, [query, autocomplete, focusField]);
|
||||
|
||||
const actualOptions = useMemo(() => {
|
||||
setFocusOption(0);
|
||||
if (!openAutocomplete) return [];
|
||||
try {
|
||||
const regexp = new RegExp(String(query), "i");
|
||||
return queryOptions.filter((item) => regexp.test(item) && item !== query);
|
||||
const options = queryOptions.filter((item) => regexp.test(item) && (item !== query));
|
||||
return options.sort((a,b) => (a.match(regexp)?.index || 0) - (b.match(regexp)?.index || 0));
|
||||
} catch (e) {
|
||||
return [];
|
||||
}
|
||||
|
@ -53,31 +55,38 @@ const QueryEditor: FC<QueryEditorProps> = ({
|
|||
|
||||
const handleKeyDown = (e: KeyboardEvent<HTMLDivElement>) => {
|
||||
const {key, ctrlKey, metaKey, shiftKey} = e;
|
||||
if (ctrlKey || metaKey) setDownMetaKeys([...downMetaKeys, e.key]);
|
||||
if (key === "ArrowUp" && openAutocomplete && actualOptions.length) {
|
||||
e.preventDefault();
|
||||
setFocusOption((prev) => prev === 0 ? 0 : prev - 1);
|
||||
} else if (key === "ArrowDown" && openAutocomplete && actualOptions.length) {
|
||||
e.preventDefault();
|
||||
setFocusOption((prev) => prev >= actualOptions.length - 1 ? actualOptions.length - 1 : prev + 1);
|
||||
} else if (key === "Enter" && openAutocomplete && actualOptions.length && !shiftKey) {
|
||||
e.preventDefault();
|
||||
setQuery(actualOptions[focusOption], index);
|
||||
}
|
||||
return true;
|
||||
};
|
||||
|
||||
const handleKeyUp = (e: KeyboardEvent<HTMLDivElement>) => {
|
||||
const {key, ctrlKey, metaKey} = e;
|
||||
if (downMetaKeys.includes(key)) setDownMetaKeys(downMetaKeys.filter(k => k !== key));
|
||||
const ctrlMetaKey = ctrlKey || metaKey;
|
||||
if (key === "Enter" && ctrlMetaKey) {
|
||||
runQuery();
|
||||
} else if (key === "ArrowUp" && ctrlMetaKey) {
|
||||
const arrowUp = key === "ArrowUp";
|
||||
const arrowDown = key === "ArrowDown";
|
||||
const enter = key === "Enter";
|
||||
|
||||
const hasAutocomplete = openAutocomplete && actualOptions.length;
|
||||
|
||||
if ((arrowUp || arrowDown || enter) && (hasAutocomplete || ctrlMetaKey)) {
|
||||
e.preventDefault();
|
||||
}
|
||||
|
||||
// ArrowUp
|
||||
if (arrowUp && hasAutocomplete && !ctrlMetaKey) {
|
||||
setFocusOption((prev) => prev === 0 ? 0 : prev - 1);
|
||||
} else if (arrowUp && ctrlMetaKey) {
|
||||
setHistoryIndex(-1, index);
|
||||
} else if (key === "ArrowDown" && ctrlMetaKey) {
|
||||
}
|
||||
|
||||
// ArrowDown
|
||||
if (arrowDown && hasAutocomplete && !ctrlMetaKey) {
|
||||
setFocusOption((prev) => prev >= actualOptions.length - 1 ? actualOptions.length - 1 : prev + 1);
|
||||
} else if (arrowDown && ctrlMetaKey) {
|
||||
setHistoryIndex(1, index);
|
||||
}
|
||||
|
||||
// Enter
|
||||
if (enter && hasAutocomplete && !shiftKey && !ctrlMetaKey) {
|
||||
setQuery(actualOptions[focusOption], index);
|
||||
} else if (enter && ctrlKey) {
|
||||
runQuery();
|
||||
}
|
||||
};
|
||||
|
||||
useEffect(() => {
|
||||
|
@ -94,8 +103,16 @@ const QueryEditor: FC<QueryEditorProps> = ({
|
|||
multiline
|
||||
error={!!error}
|
||||
onFocus={() => setFocusField(true)}
|
||||
onBlur={() => setFocusField(false)}
|
||||
onKeyUp={handleKeyUp}
|
||||
onBlur={(e) => {
|
||||
const autocompleteItem = e.relatedTarget?.id || "";
|
||||
const itemIndex = actualOptions.indexOf(autocompleteItem.replace("$autocomplete$", ""));
|
||||
if (itemIndex !== -1) {
|
||||
setQuery(actualOptions[itemIndex], index);
|
||||
e.target.focus();
|
||||
} else {
|
||||
setFocusField(false);
|
||||
}
|
||||
}}
|
||||
onKeyDown={handleKeyDown}
|
||||
onChange={(e) => setQuery(e.target.value, index)}
|
||||
/>
|
||||
|
@ -103,7 +120,7 @@ const QueryEditor: FC<QueryEditorProps> = ({
|
|||
<Paper elevation={3} sx={{ maxHeight: 300, overflow: "auto" }}>
|
||||
<MenuList ref={wrapperEl} dense>
|
||||
{actualOptions.map((item, i) =>
|
||||
<MenuItem key={item} sx={{bgcolor: `rgba(0, 0, 0, ${i === focusOption ? 0.12 : 0})`}}>
|
||||
<MenuItem id={`$autocomplete$${item}`} key={item} sx={{bgcolor: `rgba(0, 0, 0, ${i === focusOption ? 0.12 : 0})`}}>
|
||||
{item}
|
||||
</MenuItem>)}
|
||||
</MenuList>
|
||||
|
|
|
@ -28,7 +28,8 @@ const THEME = createTheme({
|
|||
root: {
|
||||
fontSize: "12px",
|
||||
letterSpacing: "normal",
|
||||
lineHeight: "1"
|
||||
lineHeight: "1",
|
||||
zIndex: 0
|
||||
}
|
||||
}
|
||||
},
|
||||
|
|
File diff suppressed because it is too large
|
@ -6,7 +6,7 @@
|
|||
"type": "grafana",
|
||||
"id": "grafana",
|
||||
"name": "Grafana",
|
||||
"version": "8.3.2"
|
||||
"version": "8.3.5"
|
||||
},
|
||||
{
|
||||
"type": "panel",
|
||||
|
@ -58,12 +58,12 @@
|
|||
}
|
||||
]
|
||||
},
|
||||
"description": "Overview for VictoriaMetrics vmagent v1.70.0 or higher",
|
||||
"description": "Overview for VictoriaMetrics vmagent v1.73.0 or higher",
|
||||
"editable": true,
|
||||
"fiscalYearStartMonth": 0,
|
||||
"graphTooltip": 1,
|
||||
"id": null,
|
||||
"iteration": 1639980687827,
|
||||
"iteration": 1644908591152,
|
||||
"links": [
|
||||
{
|
||||
"icon": "doc",
|
||||
|
@ -151,7 +151,7 @@
|
|||
"text": {},
|
||||
"textMode": "auto"
|
||||
},
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(vm_promscrape_targets{job=~\"$job\", instance=~\"$instance\", status=\"up\"})",
|
||||
|
@ -215,7 +215,7 @@
|
|||
"text": {},
|
||||
"textMode": "auto"
|
||||
},
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(vm_promscrape_targets{job=~\"$job\", instance=~\"$instance\", status=\"down\"})",
|
||||
|
@ -282,7 +282,7 @@
|
|||
"text": {},
|
||||
"textMode": "auto"
|
||||
},
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(increase(vm_log_messages_total{job=~\"$job\", instance=~\"$instance\", level!=\"info\"}[30m]))",
|
||||
|
@ -341,7 +341,7 @@
|
|||
"text": {},
|
||||
"textMode": "auto"
|
||||
},
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"targets": [
|
||||
{
|
||||
"expr": "sum(vm_persistentqueue_bytes_pending{job=~\"$job\", instance=~\"$instance\"})",
|
||||
|
@ -487,7 +487,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -583,7 +583,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -687,7 +687,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -785,7 +785,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -906,7 +906,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -999,7 +999,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -1098,7 +1098,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -1196,7 +1196,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -1295,7 +1295,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -3613,6 +3613,7 @@
|
|||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "$ds"
|
||||
},
|
||||
"description": "Shows the CPU usage per vmagent instance. \nIf you think that usage is abnormal or unexpected pls file an issue and attach CPU profile if possible.",
|
||||
|
@ -3628,7 +3629,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 14
|
||||
"y": 45
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 35,
|
||||
|
@ -3658,21 +3659,47 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
"seriesOverrides": [],
|
||||
"seriesOverrides": [
|
||||
{
|
||||
"$$hashKey": "object:77",
|
||||
"alias": "/Limit.*/",
|
||||
"color": "#F2495C"
|
||||
}
|
||||
],
|
||||
"spaceLength": 10,
|
||||
"stack": false,
|
||||
"steppedLine": false,
|
||||
"targets": [
|
||||
{
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "$ds"
|
||||
},
|
||||
"exemplar": false,
|
||||
"expr": "sum(rate(process_cpu_seconds_total{job=~\"$job\", instance=~\"$instance\"}[$__interval])) by(instance)",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "{{instance}}",
|
||||
"refId": "A"
|
||||
},
|
||||
{
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "$ds"
|
||||
},
|
||||
"exemplar": false,
|
||||
"expr": "process_cpu_cores_available{job=~\"$job\", instance=~\"$instance\"}",
|
||||
"format": "time_series",
|
||||
"hide": false,
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "Limit ({{instance}})",
|
||||
"refId": "B"
|
||||
}
|
||||
],
|
||||
"thresholds": [],
|
||||
|
@ -3727,7 +3754,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 14
|
||||
"y": 45
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 37,
|
||||
|
@ -3757,7 +3784,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -3834,7 +3861,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 22
|
||||
"y": 53
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 81,
|
||||
|
@ -3858,7 +3885,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -3943,7 +3970,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 22
|
||||
"y": 53
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 7,
|
||||
|
@ -3967,7 +3994,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -4045,7 +4072,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 30
|
||||
"y": 61
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 83,
|
||||
|
@ -4069,7 +4096,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -4153,7 +4180,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 30
|
||||
"y": 61
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 39,
|
||||
|
@ -4177,7 +4204,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -4247,7 +4274,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 38
|
||||
"y": 69
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 43,
|
||||
|
@ -4271,7 +4298,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -4339,7 +4366,7 @@
|
|||
"h": 8,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 38
|
||||
"y": 69
|
||||
},
|
||||
"hiddenSeries": false,
|
||||
"id": 41,
|
||||
|
@ -4363,7 +4390,7 @@
|
|||
"alertThreshold": true
|
||||
},
|
||||
"percentage": false,
|
||||
"pluginVersion": "8.3.2",
|
||||
"pluginVersion": "8.3.5",
|
||||
"pointradius": 2,
|
||||
"points": false,
|
||||
"renderer": "flot",
|
||||
|
@ -4418,7 +4445,7 @@
|
|||
}
|
||||
],
|
||||
"refresh": false,
|
||||
"schemaVersion": 33,
|
||||
"schemaVersion": 34,
|
||||
"style": "dark",
|
||||
"tags": [
|
||||
"vmagent",
|
||||
|
@ -4428,9 +4455,9 @@
|
|||
"list": [
|
||||
{
|
||||
"current": {
|
||||
"selected": true,
|
||||
"text": "dbaas-test-t3-medium-inst",
|
||||
"value": "dbaas-test-t3-medium-inst"
|
||||
"selected": false,
|
||||
"text": "VictoriaMetrics",
|
||||
"value": "VictoriaMetrics"
|
||||
},
|
||||
"hide": 0,
|
||||
"includeAll": false,
|
||||
|
|
File diff suppressed because it is too large
|
@ -44,6 +44,17 @@ groups:
|
|||
description: "Too high memory usage may result into multiple issues such as OOMs or degraded performance.
|
||||
Consider either increasing the available memory or decreasing the load on the process."
|
||||
|
||||
- alert: TooHighCPUUsage
|
||||
expr: rate(process_cpu_seconds_total[5m]) / process_cpu_cores_available > 0.9
|
||||
for: 5m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "More than 90% of CPU is used by \"{{ $labels.job }}\"(\"{{ $labels.instance }}\") during the last 5m"
|
||||
description: "Too high CPU usage may be a sign of insufficient resources and make process unstable.
|
||||
Consider either increasing the available CPU resources or decreasing the load on the process."
|
||||
|
||||
|
||||
# Alerts group for VM single assumes that Grafana dashboard
|
||||
# https://grafana.com/grafana/dashboards/10229 is installed.
|
||||
# Please update the `dashboard` annotation according to your setup.
|
||||
|
@ -176,13 +187,13 @@ groups:
|
|||
sum(rate(vm_slow_row_inserts_total[5m])) by(instance)
|
||||
/
|
||||
sum(rate(vm_rows_inserted_total[5m])) by (instance)
|
||||
) > 0.5
|
||||
) > 0.05
|
||||
for: 15m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/wNf0q_kZk?viewPanel=68&var-instance={{ $labels.instance }}"
|
||||
summary: "Percentage of slow inserts is more than 50% on \"{{ $labels.instance }}\" for the last 15m"
|
||||
summary: "Percentage of slow inserts is more than 5% on \"{{ $labels.instance }}\" for the last 15m"
|
||||
description: "High rate of slow inserts on \"{{ $labels.instance }}\" may be a sign of resource exhaustion
|
||||
for the current load. It is likely more RAM is needed for optimal handling of the current number of active time series."
|
||||
|
||||
|
|
|
@ -2,11 +2,9 @@
|
|||
|
||||
### Build image
|
||||
|
||||
To build the snapshot in DigitalOcean account you will need API Token and [packer](https://learn.hashicorp.com/tutorials/packer/get-started-install-cli).
|
||||
|
||||
API Token can be generated on [https://cloud.digitalocean.com/account/api/tokens](https://cloud.digitalocean.com/account/api/tokens) or use already generated from OnePassword.
|
||||
|
||||
Set variable `DIGITALOCEAN_API_TOKEN` for environment:
|
||||
1. To build the snapshot in a DigitalOcean account you will need an API Token and [packer](https://learn.hashicorp.com/tutorials/packer/get-started-install-cli).
|
||||
2. The API Token can be generated at [https://cloud.digitalocean.com/account/api/tokens](https://cloud.digitalocean.com/account/api/tokens), or an already generated token from OnePassword can be used.
|
||||
3. Set the `DIGITALOCEAN_API_TOKEN` environment variable:
|
||||
|
||||
```bash
|
||||
export DIGITALOCEAN_API_TOKEN="your_token_here"
|
||||
|
@ -18,8 +16,24 @@ or set it by with make:
|
|||
make release-victoria-metrics-digitalocean-oneclick-droplet DIGITALOCEAN_API_TOKEN="your_token_here"
|
||||
```
|
||||
|
||||
## Release guide for DigitalOcean Kubernetes 1-Click App
|
||||
|
||||
### Submit a pull request
|
||||
|
||||
1. Fork [https://github.com/digitalocean/marketplace-kubernetes](https://github.com/digitalocean/marketplace-kubernetes).
|
||||
2. Apply changes to vmagent.yaml and vmcluster.yaml in https://github.com/digitalocean/marketplace-kubernetes/tree/master/stacks/victoria-metrics-cluster/yaml .
|
||||
3. Send a PR to https://github.com/digitalocean/marketplace-kubernetes.
|
||||
4. Add changes to the product page at [https://cloud.digitalocean.com/vendorportal/61de9e7fbbd94c7e4b9b80be/15/edit](https://cloud.digitalocean.com/vendorportal/61de9e7fbbd94c7e4b9b80be/15/edit):
|
||||
* update App Version;
|
||||
* (only if the PR was submitted, approved and merged) select the checkbox "I made a change, submitted a pull request, and the pull request was approved and merged."
|
||||
* update the version of packages and links to changelogs in the `Software Included` section;
|
||||
* describe your updates in the `Reason for update` section.
|
||||
* submit your changes.
|
||||
|
||||
|
||||
### Update information on Vendor Portal
|
||||
|
||||
|
||||
After the packer build has finished, you need to update the product page.
|
||||
|
||||
1. Go to [https://cloud.digitalocean.com/vendorportal](https://cloud.digitalocean.com/vendorportal).
|
||||
|
|
|
@ -14,6 +14,20 @@ The following tip changes can be tested by building VictoriaMetrics components f
|
|||
|
||||
## tip
|
||||
|
||||
* FEATURE: allow overriding default limits for the following in-memory caches, which usually occupy the most memory:
|
||||
* `storage/tsid` - the cache speeds up lookups of internal metric ids by `metric_name{labels...}` during data ingestion. The size for this cache can be tuned with `-storage.cacheSizeStorageTSID` command-line flag.
|
||||
* `indexdb/dataBlocks` - the cache speeds up data lookups in `<-storageDataPath>/indexdb` files. The size for this cache can be tuned with `-storage.cacheSizeIndexDBDataBlocks` command-line flag.
|
||||
* `indexdb/indexBlocks` - the cache speeds up index lookups in `<-storageDataPath>/indexdb` files. The size for this cache can be tuned with `-storage.cacheSizeIndexDBIndexBlocks` command-line flag.
|
||||
See also [cache tuning docs](https://docs.victoriametrics.com/#cache-tuning). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1940).
|
||||
* FEATURE: add `-influxDBLabel` command-line flag for overriding `db` label name for the data [imported into VictoriaMetrics via InfluxDB line protocol](https://docs.victoriametrics.com/#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf). Thanks to @johnatannvmd for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2203).
|
||||
* FEATURE: return `X-Influxdb-Version` HTTP header in responses to [InfluxDB write requests](https://docs.victoriametrics.com/#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf). This is needed for some InfluxDB clients. See [this comment](https://github.com/ntop/ntopng/issues/5449#issuecomment-1005347597) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2209).
|
||||
|
||||
* BUGFIX: reduce memory usage during the first three hours after the upgrade from versions older than v1.73.0. The memory usage spike was related to the need of in-memory caches' re-population after the upgrade because of the fix for [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1401). Now cache size limits are reduced in order to occupy less memory during the upgrade.
|
||||
* BUGFIX: fix a bug, which could significantly slow down requests to `/api/v1/labels` and `/api/v1/label/<label_name>/values`. These APIs are used by Grafana for auto-completion of label names and label values. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2200).
|
||||
* BUGFIX: vmalert: add support for `$externalLabels` and `$externalURL` template vars in the same way as Prometheus does. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2193).
|
||||
* BUGFIX: vmalert: make sure notifiers are discovered during initialization if they are configured via `consul_sd_configs`. Previously they could be discovered in 30 seconds (the default value for `-promscrape.consulSDCheckInterval` command-line flag) after the initialization. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2202).
|
||||
* BUGFIX: update default value for `-promscrape.fileSDCheckInterval`, so it matches default duration used by Prometheus for checking for updates in `file_sd_configs`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2187). Thanks to @corporate-gadfly for the fix.
|
||||
|
||||
|
||||
## [v1.73.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.73.0)
|
||||
|
||||
|
|
|
@ -421,18 +421,25 @@ All the cluster components provide the following handlers for [profiling](https:
|
|||
* `http://vmselect:8481/debug/pprof/heap` for memory profile and `http://vmselect:8481/debug/pprof/profile` for CPU profile
|
||||
* `http://vmstorage:8482/debug/pprof/heap` for memory profile and `http://vmstorage:8482/debug/pprof/profile` for CPU profile
|
||||
|
||||
Example command for collecting cpu profile from `vmstorage`:
|
||||
Example command for collecting cpu profile from `vmstorage` (replace `0.0.0.0` with `vmstorage` hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://vmstorage:8482/debug/pprof/profile > cpu.pprof
|
||||
curl http://0.0.0.0:8482/debug/pprof/profile > cpu.pprof
|
||||
```
|
||||
|
||||
Example command for collecting memory profile from `vminsert`:
|
||||
</div>
|
||||
|
||||
Example command for collecting memory profile from `vminsert` (replace `0.0.0.0` with `vminsert` hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://vminsert:8480/debug/pprof/heap > mem.pprof
|
||||
curl http://0.0.0.0:8480/debug/pprof/heap > mem.pprof
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
## Community and contributions
|
||||
|
||||
|
@ -590,9 +597,9 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
@ -716,9 +723,9 @@ Below is the output for `/path/to/vmselect -help`:
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
@ -819,9 +826,9 @@ Below is the output for `/path/to/vmstorage -help`:
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
-vminsertAddr string
|
||||
|
|
|
@ -94,8 +94,8 @@ and writing data to VictoriaMetrics.
|
|||
VictoriaMetrics is simpler, faster, more cost-effective and it provides [MetricsQL query language](MetricsQL) based on PromQL. The simplicity is twofold:
|
||||
- It is simpler to configure and operate. There is no need for configuring [sidecars](https://github.com/thanos-io/thanos/blob/master/docs/components/sidecar.md),
|
||||
fighting the [gossip protocol](https://github.com/improbable-eng/thanos/blob/030bc345c12c446962225221795f4973848caab5/docs/proposals/completed/201809_gossip-removal.md)
|
||||
or setting up third-party systems such as [Consul](https://github.com/cortexproject/cortex/issues/157), [Cassandra](https://cortexmetrics.io/docs/production/cassandra/),
|
||||
[DynamoDB](https://cortexmetrics.io/docs/production/aws/) or [Memcached](https://cortexmetrics.io/docs/production/caching/).
|
||||
or setting up third-party systems such as [Consul](https://github.com/cortexproject/cortex/issues/157), [Cassandra](https://cortexmetrics.io/docs/chunks-storage/running-chunks-storage-with-cassandra/),
|
||||
[DynamoDB](https://cortexmetrics.io/docs/chunks-storage/aws-tips/) or [Memcached](https://cortexmetrics.io/docs/chunks-storage/caching/).
|
||||
- VictoriaMetrics has a simpler architecture. This means fewer bugs and more useful features in the long run compared to competing TSDBs.
|
||||
|
||||
See [comparing Thanos to VictoriaMetrics cluster](https://medium.com/@valyala/comparing-thanos-to-victoriametrics-cluster-b193bea1683)
|
||||
|
|
|
@ -274,6 +274,8 @@ VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/age
|
|||
|
||||
Run DataDog agent with `DD_DD_URL=http://victoriametrics-host:8428/datadog` environment variable in order to write data to VictoriaMetrics at `victoriametrics-host` host. Another option is to set `dd_url` param at [DataDog agent configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files/) to `http://victoriametrics-host:8428/datadog`.
|
||||
|
||||
VictoriaMetrics doesn't check the `DD_API_KEY` param, so it can be set to an arbitrary value.
|
||||
|
||||
Example on how to send data to VictoriaMetrics via DataDog "submit metrics" API from command line:
|
||||
|
||||
```bash
|
||||
|
@ -330,7 +332,7 @@ and stream plain InfluxDB line protocol data to the configured TCP and/or UDP ad
|
|||
VictoriaMetrics performs the following transformations to the ingested InfluxDB data:
|
||||
|
||||
* [`db` query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
|
||||
unless `db` tag exists in the InfluxDB line.
|
||||
unless `db` tag exists in the InfluxDB line. The `db` label name can be overridden via the `-influxDBLabel` command-line flag.
|
||||
* Field names are mapped to time series names prefixed with `{measurement}{separator}` value, where `{separator}` equals to `_` by default. It can be changed with `-influxMeasurementFieldSeparator` command-line flag. See also `-influxSkipSingleField` command-line flag. If `{measurement}` is empty or if `-influxSkipMeasurement` command-line flag is set, then time series names correspond to field names.
|
||||
* Field values are mapped to time series values.
|
||||
* Tags are mapped to Prometheus labels as-is.
|
||||
|
@ -867,7 +869,7 @@ The [deduplication](#deduplication) isn't applied for the data exported in nativ
|
|||
|
||||
## How to import time series data
|
||||
|
||||
Time series data can be imported via any supported ingestion protocol:
|
||||
Time series data can be imported into VictoriaMetrics via any supported ingestion protocol:
|
||||
|
||||
* [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). See [these docs](#prometheus-setup) for details.
|
||||
* DataDog `submit metrics` API. See [these docs](#how-to-send-data-from-datadog-agent) for details.
|
||||
|
@ -1426,6 +1428,31 @@ See also more advanced [cardinality limiter in vmagent](https://docs.victoriamet
|
|||
VictoriaMetrics uses various internal caches. These caches are stored to `<-storageDataPath>/cache` directory during graceful shutdown (e.g. when VictoriaMetrics is stopped by sending `SIGINT` signal). The caches are read on the next VictoriaMetrics startup. Sometimes it is needed to remove such caches on the next startup. This can be performed by placing `reset_cache_on_startup` file inside the `<-storageDataPath>/cache` directory before the restart of VictoriaMetrics. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1447) for details.
|
||||
|
||||
|
||||
## Cache tuning
|
||||
|
||||
VictoriaMetrics uses various in-memory caches for faster data ingestion and query performance.
|
||||
The following metrics for each type of cache are exported at [`/metrics` page](#monitoring):
|
||||
- `vm_cache_size_bytes` - the actual cache size
|
||||
- `vm_cache_size_max_bytes` - cache size limit
|
||||
- `vm_cache_requests_total` - the number of requests to the cache
|
||||
- `vm_cache_misses_total` - the number of cache misses
|
||||
- `vm_cache_entries` - the number of entries in the cache
|
||||
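For a quick check outside Grafana, the hit rate can also be derived from these counters via the Prometheus querying API. A hedged sketch (the address `localhost:8428` and the 5-minute window are placeholder values):

<div class="with-copy" markdown="1">

```bash
# Cache hit rate over the last 5 minutes; values close to 1 mean the cache is already efficient
curl -G 'http://localhost:8428/api/v1/query' \
  --data-urlencode 'query=1 - (rate(vm_cache_misses_total[5m]) / rate(vm_cache_requests_total[5m]))'
```

</div>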
|
||||
Both Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
|
||||
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176)
|
||||
contain a `Caches` section with these cache metrics visualized. The panels show the current
|
||||
memory usage by each type of cache, as well as the cache hit rate. If the hit rate is close to 100%,
|
||||
then cache efficiency is already very high and does not need any tuning.
|
||||
The `Cache usage %` panel in the `Troubleshooting` section shows the percentage of the allowed cache size
|
||||
currently in use, per cache type. If the percentage is below 100%, then no further tuning is needed.
|
||||
|
||||
Please note that the default cache sizes were carefully adjusted according to the most
|
||||
practical scenarios and workloads. Change the defaults only if you understand the implications.
|
||||
|
||||
To override the default values see command-line flags with `-storage.cacheSize` prefix.
|
||||
See the full description of flags [here](#list-of-command-line-flags).
|
||||
|
||||
|
||||
## Data migration
|
||||
|
||||
Use [vmctl](https://docs.victoriametrics.com/vmctl.html) for data migration. It supports the following data migration types:
|
||||
|
@ -1460,7 +1487,7 @@ cache when samples with timestamps older than `now - search.cacheTimestampOffset
|
|||
|
||||
VictoriaMetrics doesn't support updating already existing sample values to new ones. It stores all the ingested data points
|
||||
for the same time series with identical timestamps. While it is possible to substitute old time series with new ones via
|
||||
[removal of old time series](#how-to-delete-timeseries) and then [writing new time series](#backfilling), this approach
|
||||
[removal of old time series](#how-to-delete-time-series) and then [writing new time series](#backfilling), this approach
|
||||
should be used only for one-off updates. It shouldn't be used for frequent updates because of non-zero overhead related to data removal.
|
||||
|
||||
|
||||
|
@ -1498,18 +1525,26 @@ for running ingestion benchmarks based on node_exporter metrics.
|
|||
|
||||
VictoriaMetrics provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
|
||||
|
||||
* Memory profile. It can be collected with the following command:
|
||||
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<victoria-metrics-host>:8428/debug/pprof/heap > mem.pprof
|
||||
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
|
||||
```
|
||||
|
||||
* CPU profile. It can be collected with the following command:
|
||||
</div>
|
||||
|
||||
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<victoria-metrics-host>:8428/debug/pprof/profile > cpu.pprof
|
||||
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
The command for collecting CPU profile waits for 30 seconds before returning.
|
||||
|
||||
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
|
||||
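For instance, a collected profile can be inspected locally along the following lines (this assumes the Go toolchain is installed; the file names match the commands above and the listen port is arbitrary):

<div class="with-copy" markdown="1">

```bash
# Print the top memory consumers from the collected heap profile
go tool pprof -top mem.pprof

# Explore the CPU profile interactively in a browser UI
go tool pprof -http=:8081 cpu.pprof
```

</div>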
|
@ -1678,6 +1713,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-influx.maxLineSize size
|
||||
The maximum size in bytes for a single InfluxDB line during parsing
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 262144)
|
||||
-influxDBLabel string
|
||||
Default label for the DB name sent over '?db={db_name}' query parameter (default "db")
|
||||
-influxListenAddr string
|
||||
TCP and UDP address to listen for InfluxDB line protocol data. Usually :8189 must be set. Doesn't work if empty. This flag isn't needed when ingesting data over HTTP - just send it to http://<victoriametrics>:8428/write
|
||||
-influxMeasurementFieldSeparator string
|
||||
|
@ -1778,7 +1815,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-promscrape.eurekaSDCheckInterval duration
|
||||
Interval for checking for changes in eureka. This works only if eureka_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config for details (default 30s)
|
||||
-promscrape.fileSDCheckInterval duration
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 30s)
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 5m0s)
|
||||
-promscrape.gceSDCheckInterval duration
|
||||
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
|
||||
-promscrape.httpSDCheckInterval duration
|
||||
|
@ -1888,6 +1925,15 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
authKey, which must be passed in query string to /snapshot* pages
|
||||
-sortLabels
|
||||
Whether to sort labels for incoming samples before writing them to storage. This may be needed for reducing memory usage at storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}. Enabled sorting for labels can slow down ingestion performance a bit
|
||||
-storage.cacheSizeIndexDBDataBlocks size
|
||||
Overrides max size for indexdb/dataBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
|
||||
-storage.cacheSizeIndexDBIndexBlocks size
|
||||
Overrides max size for indexdb/indexBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
|
||||
-storage.cacheSizeStorageTSID size
|
||||
Overrides max size for storage/tsid cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
|
||||
-storage.maxDailySeries int
|
||||
The maximum number of unique series can be added to the storage during the last 24 hours. Excess series are logged and dropped. This can be useful for limiting series churn rate. See also -storage.maxHourlySeries
|
||||
-storage.maxHourlySeries int
|
||||
|
@ -1900,9 +1946,9 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
|
@ -278,6 +278,8 @@ VictoriaMetrics accepts data from [DataDog agent](https://docs.datadoghq.com/age
|
|||
|
||||
Run DataDog agent with `DD_DD_URL=http://victoriametrics-host:8428/datadog` environment variable in order to write data to VictoriaMetrics at `victoriametrics-host` host. Another option is to set `dd_url` param at [DataDog agent configuration file](https://docs.datadoghq.com/agent/guide/agent-configuration-files/) to `http://victoriametrics-host:8428/datadog`.
|
||||
|
||||
VictoriaMetrics doesn't check the `DD_API_KEY` param, so it can be set to an arbitrary value.
|
||||
|
||||
Example on how to send data to VictoriaMetrics via DataDog "submit metrics" API from command line:
|
||||
|
||||
```bash
|
||||
|
@ -334,7 +336,7 @@ and stream plain InfluxDB line protocol data to the configured TCP and/or UDP ad
|
|||
VictoriaMetrics performs the following transformations to the ingested InfluxDB data:
|
||||
|
||||
* [`db` query arg](https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint) is mapped into `db` label value
|
||||
unless `db` tag exists in the InfluxDB line.
|
||||
unless `db` tag exists in the InfluxDB line. The `db` label name can be overridden via the `-influxDBLabel` command-line flag.
|
||||
* Field names are mapped to time series names prefixed with `{measurement}{separator}` value, where `{separator}` equals to `_` by default. It can be changed with `-influxMeasurementFieldSeparator` command-line flag. See also `-influxSkipSingleField` command-line flag. If `{measurement}` is empty or if `-influxSkipMeasurement` command-line flag is set, then time series names correspond to field names.
|
||||
* Field values are mapped to time series values.
|
||||
* Tags are mapped to Prometheus labels as-is.
|
||||
|
@ -871,7 +873,7 @@ The [deduplication](#deduplication) isn't applied for the data exported in nativ
|
|||
|
||||
## How to import time series data
|
||||
|
||||
Time series data can be imported via any supported ingestion protocol:
|
||||
Time series data can be imported into VictoriaMetrics via any supported ingestion protocol:
|
||||
|
||||
* [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). See [these docs](#prometheus-setup) for details.
|
||||
* DataDog `submit metrics` API. See [these docs](#how-to-send-data-from-datadog-agent) for details.
|
||||
|
@ -1430,6 +1432,31 @@ See also more advanced [cardinality limiter in vmagent](https://docs.victoriamet
|
|||
VictoriaMetrics uses various internal caches. These caches are stored to `<-storageDataPath>/cache` directory during graceful shutdown (e.g. when VictoriaMetrics is stopped by sending `SIGINT` signal). The caches are read on the next VictoriaMetrics startup. Sometimes it is needed to remove such caches on the next startup. This can be performed by placing `reset_cache_on_startup` file inside the `<-storageDataPath>/cache` directory before the restart of VictoriaMetrics. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1447) for details.
|
||||
|
||||
|
||||
## Cache tuning
|
||||
|
||||
VictoriaMetrics uses various in-memory caches for faster data ingestion and query performance.
|
||||
The following metrics for each type of cache are exported at [`/metrics` page](#monitoring):
|
||||
- `vm_cache_size_bytes` - the actual cache size
|
||||
- `vm_cache_size_max_bytes` - cache size limit
|
||||
- `vm_cache_requests_total` - the number of requests to the cache
|
||||
- `vm_cache_misses_total` - the number of cache misses
|
||||
- `vm_cache_entries` - the number of entries in the cache
|
||||
|
||||
Both Grafana dashboards for [single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
|
||||
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176)
|
||||
contain a `Caches` section with these cache metrics visualized. The panels show the current
|
||||
memory usage by each type of cache, as well as the cache hit rate. If the hit rate is close to 100%,
|
||||
then cache efficiency is already very high and does not need any tuning.
|
||||
The `Cache usage %` panel in the `Troubleshooting` section shows the percentage of the allowed cache size
|
||||
currently in use, per cache type. If the percentage is below 100%, then no further tuning is needed.
|
||||
|
||||
Please note that the default cache sizes were carefully adjusted according to the most
|
||||
practical scenarios and workloads. Change the defaults only if you understand the implications.
|
||||
|
||||
To override the default values see command-line flags with `-storage.cacheSize` prefix.
|
||||
See the full description of flags [here](#list-of-command-line-flags).
|
||||
|
||||
|
||||
## Data migration
|
||||
|
||||
Use [vmctl](https://docs.victoriametrics.com/vmctl.html) for data migration. It supports the following data migration types:
|
||||
|
@ -1464,7 +1491,7 @@ cache when samples with timestamps older than `now - search.cacheTimestampOffset
|
|||
|
||||
VictoriaMetrics doesn't support updating already existing sample values to new ones. It stores all the ingested data points
|
||||
for the same time series with identical timestamps. While it is possible to substitute old time series with new ones via
|
||||
[removal of old time series](#how-to-delete-timeseries) and then [writing new time series](#backfilling), this approach
|
||||
[removal of old time series](#how-to-delete-time-series) and then [writing new time series](#backfilling), this approach
|
||||
should be used only for one-off updates. It shouldn't be used for frequent updates because of non-zero overhead related to data removal.
|
||||
|
||||
|
||||
|
@ -1502,18 +1529,26 @@ for running ingestion benchmarks based on node_exporter metrics.
|
|||
|
||||
VictoriaMetrics provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
|
||||
|
||||
* Memory profile. It can be collected with the following command:
|
||||
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<victoria-metrics-host>:8428/debug/pprof/heap > mem.pprof
|
||||
curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
|
||||
```
|
||||
|
||||
* CPU profile. It can be collected with the following command:
|
||||
</div>
|
||||
|
||||
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<victoria-metrics-host>:8428/debug/pprof/profile > cpu.pprof
|
||||
curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
The command for collecting CPU profile waits for 30 seconds before returning.
|
||||
|
||||
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
|
||||
|
@ -1682,6 +1717,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-influx.maxLineSize size
|
||||
The maximum size in bytes for a single InfluxDB line during parsing
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 262144)
|
||||
-influxDBLabel string
|
||||
Default label for the DB name sent over '?db={db_name}' query parameter (default "db")
|
||||
-influxListenAddr string
|
||||
TCP and UDP address to listen for InfluxDB line protocol data. Usually :8189 must be set. Doesn't work if empty. This flag isn't needed when ingesting data over HTTP - just send it to http://<victoriametrics>:8428/write
|
||||
-influxMeasurementFieldSeparator string
|
||||
|
@ -1782,7 +1819,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-promscrape.eurekaSDCheckInterval duration
|
||||
Interval for checking for changes in eureka. This works only if eureka_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config for details (default 30s)
|
||||
-promscrape.fileSDCheckInterval duration
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 30s)
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 5m0s)
|
||||
-promscrape.gceSDCheckInterval duration
|
||||
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
|
||||
-promscrape.httpSDCheckInterval duration
|
||||
|
@ -1892,6 +1929,15 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
authKey, which must be passed in query string to /snapshot* pages
|
||||
-sortLabels
|
||||
Whether to sort labels for incoming samples before writing them to storage. This may be needed for reducing memory usage at storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}. Enabled sorting for labels can slow down ingestion performance a bit
|
||||
-storage.cacheSizeIndexDBDataBlocks size
|
||||
Overrides max size for indexdb/dataBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
|
||||
-storage.cacheSizeIndexDBIndexBlocks size
|
||||
Overrides max size for indexdb/indexBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
|
||||
-storage.cacheSizeStorageTSID size
|
||||
Overrides max size for storage/tsid cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
|
||||
-storage.maxDailySeries int
|
||||
The maximum number of unique series can be added to the storage during the last 24 hours. Excess series are logged and dropped. This can be useful for limiting series churn rate. See also -storage.maxHourlySeries
|
||||
-storage.maxHourlySeries int
|
||||
|
@ -1904,9 +1950,9 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
|
@ -282,7 +282,7 @@ cat <<EOF | helm install my-grafana grafana/grafana -f -
|
|||
default:
|
||||
victoriametrics:
|
||||
gnetId: 11176
|
||||
revision: 17
|
||||
revision: 18
|
||||
datasource: victoriametrics
|
||||
vmagent:
|
||||
gnetId: 12683
|
||||
|
|
|
@ -484,7 +484,7 @@ cat <<EOF | helm install my-grafana grafana/grafana -f -
|
|||
default:
|
||||
victoriametrics:
|
||||
gnetId: 11176
|
||||
revision: 17
|
||||
revision: 18
|
||||
datasource: victoriametrics
|
||||
vmagent:
|
||||
gnetId: 12683
|
||||
|
|
|
@ -687,6 +687,7 @@ VMAgentSpec defines the desired state of VMAgent
|
|||
| nodeScrapeNamespaceSelector | NodeScrapeNamespaceSelector defines Namespaces to be selected for VMNodeScrape discovery. Works in combination with Selector. NamespaceSelector nil - only objects at VMAgent namespace. Selector nil - only objects at NamespaceSelector namespaces. If both nil - behaviour controlled by selectAllByDefault | *[metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#labelselector-v1-meta) | false |
|
||||
| staticScrapeSelector | StaticScrapeSelector defines PodScrapes to be selected for target discovery. Works in combination with NamespaceSelector. If both nil - match everything. NamespaceSelector nil - only objects at VMAgent namespace. Selector nil - only objects at NamespaceSelector namespaces. | *[metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#labelselector-v1-meta) | false |
|
||||
| staticScrapeNamespaceSelector | StaticScrapeNamespaceSelector defines Namespaces to be selected for VMStaticScrape discovery. Works in combination with NamespaceSelector. NamespaceSelector nil - only objects at VMAgent namespace. Selector nil - only objects at NamespaceSelector namespaces. If both nil - behaviour controlled by selectAllByDefault | *[metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#labelselector-v1-meta) | false |
|
||||
| inlineScrapeConfig | InlineScrapeConfig allows for an array of scrape jobs to be passed as a YAML formatted string (it is recommended to use the literal block style indicator "\|"). As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of VMAgent. It is advised to review VMAgent release notes to ensure that no incompatible scrape configs are going to break VMAgent after the upgrade. | string | false |
|
||||
| additionalScrapeConfigs | AdditionalScrapeConfigs As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of VMAgent. It is advised to review VMAgent release notes to ensure that no incompatible scrape configs are going to break VMAgent after the upgrade. | *[v1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#secretkeyselector-v1-core) | false |
|
||||
| arbitraryFSAccessThroughSMs | ArbitraryFSAccessThroughSMs configures whether configuration based on a service scrape can access arbitrary files on the file system of the VMAgent container e.g. bearer token files. | [ArbitraryFSAccessThroughSMsConfig](#arbitraryfsaccessthroughsmsconfig) | false |
|
||||
| insertPorts | InsertPorts - additional listen ports for data ingestion. | *[InsertPorts](#insertports) | false |
|
||||
|
|
336
docs/url-examples.md
Normal file
336
docs/url-examples.md
Normal file
|
@ -0,0 +1,336 @@
|
|||
---
|
||||
sort: 20
|
||||
---
|
||||
|
||||
# VictoriaMetrics API examples
|
||||
|
||||
|
||||
## /api/v1/admin/tsdb/delete_series
|
||||
|
||||
**Deletes time series from VictoriaMetrics**
|
||||
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl 'http://<victoriametrics-addr>:8428/api/v1/admin/tsdb/delete_series?match[]=vm_http_request_errors_total'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl 'http://<vmselect>:8481/delete/0/prometheus/api/v1/admin/tsdb/delete_series?match[]=vm_http_request_errors_total'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Additional information:
|
||||
* [How to delete time series](https://docs.victoriametrics.com/#how-to-delete-time-series)
|
||||
|
||||
|
||||
## /api/v1/export/csv
|
||||
|
||||
**Exports CSV data from VictoriaMetrics**
|
||||
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl 'http://<victoriametrics-addr>:8428/api/v1/export/csv?format=__name__,__value__,__timestamp__:unix_s&match=vm_http_request_errors_total' > filename.txt
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/export/csv?format=__name__,__value__,__timestamp__:unix_s&match=vm_http_request_errors_total' > filename.txt
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Additional information:
|
||||
* [How to export time series](https://docs.victoriametrics.com/#how-to-export-csv-data)
|
||||
* [URL Format](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format)
|
||||
|
||||
|
||||
## /api/v1/export/native
|
||||
|
||||
**Exports data from VictoriaMetrics in native format**
|
||||
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -G 'http://<victoriametrics-addr>:8428/api/v1/export/native?match[]=vm_http_request_errors_total' > filename.txt
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -G 'http://<vmselect>:8481/select/0/prometheus/api/v1/export/native?match=vm_http_request_errors_total' > filename.txt
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
More information:
|
||||
* [How to export data in native format](https://docs.victoriametrics.com/#how-to-export-data-in-native-format)
|
||||
|
||||
|
||||
## /api/v1/import
|
||||
|
||||
**Imports data obtained via /api/v1/export**
|
||||
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl --data-binary "@import.txt" -X POST 'http://destination-victoriametrics:8428/api/v1/import'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl --data-binary "@import.txt" -X POST 'http://<vminsert>:8480/insert/prometheus/api/v1/import'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Additional information:
|
||||
* [How to import time series data](https://docs.victoriametrics.com/#how-to-import-time-series-data)
|
||||
|
||||
|
||||
## /api/v1/import/csv
|
||||
|
||||
**Imports CSV data to VictoriaMetrics**
|
||||
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl --data-binary "@import.txt" -X POST 'http://localhost:8428/api/v1/import/prometheus'
|
||||
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl --data-binary "@import.txt" -X POST 'http://<vminsert>:8480/insert/0/prometheus/api/v1/import/csv'
|
||||
curl -d "GOOG,1.23,4.56,NYSE" 'http://<vminsert>:8480/insert/0/prometheus/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Additional information:
|
||||
* [URL format](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format)
|
||||
* [How to import CSV data](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-import-csv-data)
|
||||
|
||||
|
||||
## /datadog/api/v1/series
|
||||
|
||||
**Sends data from the DataDog agent to VictoriaMetrics**
|
||||
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
echo '
|
||||
{
|
||||
"series": [
|
||||
{
|
||||
"host": "test.example.com",
|
||||
"interval": 20,
|
||||
"metric": "system.load.1",
|
||||
"points": [[
|
||||
0,
|
||||
0.5
|
||||
]],
|
||||
"tags": [
|
||||
"environment:test"
|
||||
],
|
||||
"type": "rate"
|
||||
}
|
||||
]
|
||||
}
|
||||
' | curl -X POST --data-binary @- http://localhost:8428/datadog/api/v1/series
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
echo '
|
||||
{
|
||||
"series": [
|
||||
{
|
||||
"host": "test.example.com",
|
||||
"interval": 20,
|
||||
"metric": "system.load.1",
|
||||
"points": [[
|
||||
0,
|
||||
0.5
|
||||
]],
|
||||
"tags": [
|
||||
"environment:test"
|
||||
],
|
||||
"type": "rate"
|
||||
}
|
||||
]
|
||||
}
|
||||
' | curl -X POST --data-binary @- 'http://<vminsert>:8480/insert/0/datadog/api/v1/series'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Additional information:
|
||||
* [How to send data from datadog agent](https://docs.victoriametrics.com/#how-to-send-data-from-datadog-agent)
|
||||
|
||||
|
||||
## /graphite/metrics/find
|
||||
|
||||
**Searches Graphite metrics in VictoriaMetrics**
|
||||
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -G 'http://localhost:8428/graphite/metrics/find?query=vm_http_request_errors_total'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -G 'http://<vmselect>:8481/select/0/graphite/metrics/find?query=vm_http_request_errors_total'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Additional information:
|
||||
* [Metrics find](https://graphite-api.readthedocs.io/en/latest/api.html#metrics-find)
|
||||
* [How to send data from graphite compatible agents such as statsd](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-graphite-compatible-agents-such-as-statsd)
|
||||
* [URL Format](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format)
|
||||
|
||||
|
||||
## /influx/write
|
||||
|
||||
**Writes data with InfluxDB line protocol to VictoriaMetrics**
|
||||
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://localhost:8428/write'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'http://<vminsert>:8480/insert/0/influx/write'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Additional information:
|
||||
* [How to send data from influxdb compatible agents such as telegraf](https://docs.victoriametrics.com/#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
|
||||
|
||||
|
||||
## TCP and UDP
|
||||
|
||||
**How to send data from OpenTSDB-compatible agents to VictoriaMetrics**
|
||||
|
||||
Turned off by default. Enable OpenTSDB receiver in VictoriaMetrics by setting `-opentsdbListenAddr` command-line flag.
|
||||
*If run from Docker, the '-opentsdbListenAddr' port should be exposed.*
|
||||
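A hedged sketch of enabling the TCP/UDP listener; the binary path is a placeholder and port 4242 matches the examples below:

<div class="with-copy" markdown="1">

```bash
# Start single-node VictoriaMetrics with the OpenTSDB receiver listening on port 4242
/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242
```

</div>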
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost 4242
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2 VictoriaMetrics_AccountID=0" | nc -N http://<vminsert> 4242
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Enable HTTP server for OpenTSDB /api/put requests by setting `-opentsdbHTTPListenAddr` command-line flag.
|
||||
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]'
|
||||
'http://<vminsert>:8480/insert/42/opentsdb/api/put'
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Additional information:
|
||||
* [Api http put](http://opentsdb.net/docs/build/html/api_http/put.html)
|
||||
* [How to send data from opentsdb compatible agents](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-opentsdb-compatible-agents)
|
||||
|
||||
|
||||
**How to write data with Graphite plaintext protocol to VictoriaMetrics**
|
||||
|
||||
Enable Graphite receiver in VictoriaMetrics by setting `-graphiteListenAddr` command-line flag.
|
||||
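A hedged sketch of enabling the listener; the binary path is a placeholder and port 2003 matches the examples below:

<div class="with-copy" markdown="1">

```bash
# Start single-node VictoriaMetrics with the Graphite receiver listening on port 2003
/path/to/victoria-metrics-prod -graphiteListenAddr=:2003
```

</div>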
|
||||
Single:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" |
|
||||
nc -N localhost 2003
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Cluster:
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
echo "foo.bar.baz;tag1=value1;tag2=value2;VictoriaMetrics_AccountID=42 123 `date +%s`" | nc -N http://<vminsert> 2003
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
Additional information:
|
||||
|
||||
`VictoriaMetrics_AccountID=42` - the tag that indicates the tenant ID.
|
||||
* [Request handler](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/a3eafd2e7fc75776dfc19d3c68c85589454d9dce/app/vminsert/opentsdb/request_handler.go#L47)
|
||||
* [How to send data from graphite compatible agents such as statsd](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#how-to-send-data-from-graphite-compatible-agents-such-as-statsd)
|
|
@ -689,18 +689,26 @@ ARM build may run on Raspberry Pi or on [energy-efficient ARM servers](https://b
|
|||
|
||||
`vmagent` provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
|
||||
|
||||
* Memory profile can be collected with the following command:
|
||||
* Memory profile can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<vmagent-host>:8429/debug/pprof/heap > mem.pprof
|
||||
curl http://0.0.0.0:8429/debug/pprof/heap > mem.pprof
|
||||
```
|
||||
|
||||
* CPU profile can be collected with the following command:
|
||||
</div>
|
||||
|
||||
* CPU profile can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<vmagent-host>:8429/debug/pprof/profile > cpu.pprof
|
||||
curl http://0.0.0.0:8429/debug/pprof/profile > cpu.pprof
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
The command for collecting CPU profile waits for 30 seconds before returning.
|
||||
|
||||
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
|
||||
|
@ -767,6 +775,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
|
|||
-influx.maxLineSize size
|
||||
The maximum size in bytes for a single InfluxDB line during parsing
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 262144)
|
||||
-influxDBLabel string
|
||||
Default label for the DB name sent over '?db={db_name}' query parameter (default "db")
|
||||
-influxListenAddr string
|
||||
TCP and UDP address to listen for InfluxDB line protocol data. Usually :8189 must be set. Doesn't work if empty. This flag isn't needed when ingesting data over HTTP - just send it to http://<vmagent>:8429/write
|
||||
-influxMeasurementFieldSeparator string
|
||||
|
@ -885,7 +895,7 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
|
|||
-promscrape.eurekaSDCheckInterval duration
|
||||
Interval for checking for changes in eureka. This works only if eureka_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config for details (default 30s)
|
||||
-promscrape.fileSDCheckInterval duration
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 30s)
|
||||
Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 5m0s)
|
||||
-promscrape.gceSDCheckInterval duration
|
||||
Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
|
||||
-promscrape.httpSDCheckInterval duration
|
||||
|
@ -1021,9 +1031,9 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
|
@ -19,7 +19,7 @@ implementation and aims to be compatible with its syntax.
|
|||
support and expressions validation;
|
||||
* Prometheus [alerting rules definition format](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#defining-alerting-rules)
|
||||
support;
|
||||
* Integration with [Alertmanager](https://github.com/prometheus/alertmanager);
|
||||
* Integration with [Alertmanager](https://github.com/prometheus/alertmanager) starting from [Alertmanager v0.16.0-alpha](https://github.com/prometheus/alertmanager/releases/tag/v0.16.0-alpha.0);
|
||||
* Keeps the alerts [state on restarts](#alerts-state-on-restarts);
|
||||
* Graphite datasource can be used for alerting and recording rules. See [these docs](#graphite);
|
||||
* Recording and Alerting rules backfilling (aka `replay`). See [these docs](#rules-backfilling);
|
||||
|
@ -732,9 +732,9 @@ The shortlist of configuration flags is the following:
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
|
@ -192,18 +192,26 @@ ROOT_IMAGE=scratch make package-vmauth
|
|||
|
||||
`vmauth` provides handlers for collecting the following [Go profiles](https://blog.golang.org/profiling-go-programs):
|
||||
|
||||
* Memory profile. It can be collected with the following command:
|
||||
* Memory profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<vmauth-host>:8427/debug/pprof/heap > mem.pprof
|
||||
curl http://0.0.0.0:8427/debug/pprof/heap > mem.pprof
|
||||
```
|
||||
|
||||
* CPU profile. It can be collected with the following command:
|
||||
</div>
|
||||
|
||||
* CPU profile. It can be collected with the following command (replace `0.0.0.0` with hostname if needed):
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```bash
|
||||
curl -s http://<vmauth-host>:8427/debug/pprof/profile > cpu.pprof
|
||||
curl http://0.0.0.0:8427/debug/pprof/profile > cpu.pprof
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
The command for collecting CPU profile waits for 30 seconds before returning.
|
||||
|
||||
The collected profiles may be analyzed with [go tool pprof](https://github.com/google/pprof).
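For instance, a collected memory profile can be explored in the interactive pprof web UI (a sketch; the listen address is arbitrary):

```bash
go tool pprof -http=:8081 mem.pprof
```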
|
||||
|
@ -281,9 +289,9 @@ See the docs at https://docs.victoriametrics.com/vmauth.html .
|
|||
-tls
|
||||
Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
|
||||
-tlsCertFile string
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower
|
||||
Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key. Used only if -tls is set
|
||||
Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
16
go.mod
16
go.mod
|
@ -3,7 +3,7 @@ module github.com/VictoriaMetrics/VictoriaMetrics
|
|||
go 1.17
|
||||
|
||||
require (
|
||||
cloud.google.com/go/storage v1.20.0
|
||||
cloud.google.com/go/storage v1.21.0
|
||||
github.com/VictoriaMetrics/fastcache v1.9.0
|
||||
|
||||
// Do not use the original github.com/valyala/fasthttp because of issues
|
||||
|
@ -11,7 +11,7 @@ require (
|
|||
github.com/VictoriaMetrics/fasthttp v1.1.0
|
||||
github.com/VictoriaMetrics/metrics v1.18.1
|
||||
github.com/VictoriaMetrics/metricsql v0.40.0
|
||||
github.com/aws/aws-sdk-go v1.42.52
|
||||
github.com/aws/aws-sdk-go v1.43.3
|
||||
github.com/cespare/xxhash/v2 v2.1.2
|
||||
github.com/cheggaaa/pb/v3 v3.0.8
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.1 // indirect
|
||||
|
@ -19,7 +19,7 @@ require (
|
|||
github.com/go-kit/kit v0.12.0
|
||||
github.com/golang/snappy v0.0.4
|
||||
github.com/influxdata/influxdb v1.9.6
|
||||
github.com/klauspost/compress v1.14.2
|
||||
github.com/klauspost/compress v1.14.4
|
||||
github.com/mattn/go-colorable v0.1.12 // indirect
|
||||
github.com/mattn/go-runewidth v0.0.13 // indirect
|
||||
github.com/oklog/ulid v1.3.1
|
||||
|
@ -33,15 +33,15 @@ require (
|
|||
github.com/valyala/quicktemplate v1.7.0
|
||||
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd
|
||||
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
|
||||
golang.org/x/sys v0.0.0-20220209214540-3681064d5158
|
||||
google.golang.org/api v0.68.0
|
||||
golang.org/x/sys v0.0.0-20220222172238-00053529121e
|
||||
google.golang.org/api v0.70.0
|
||||
gopkg.in/yaml.v2 v2.4.0
|
||||
)
|
||||
|
||||
require (
|
||||
cloud.google.com/go v0.100.2 // indirect
|
||||
cloud.google.com/go/compute v1.2.0 // indirect
|
||||
cloud.google.com/go/iam v0.1.1 // indirect
|
||||
cloud.google.com/go/compute v1.4.0 // indirect
|
||||
cloud.google.com/go/iam v0.2.0 // indirect
|
||||
github.com/VividCortex/ewma v1.2.0 // indirect
|
||||
github.com/beorn7/perks v1.0.1 // indirect
|
||||
github.com/go-kit/log v0.2.0 // indirect
|
||||
|
@ -68,7 +68,7 @@ require (
|
|||
golang.org/x/text v0.3.7 // indirect
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
|
||||
google.golang.org/appengine v1.6.7 // indirect
|
||||
google.golang.org/genproto v0.0.0-20220211171837-173942840c17 // indirect
|
||||
google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b // indirect
|
||||
google.golang.org/grpc v1.44.0 // indirect
|
||||
google.golang.org/protobuf v1.27.1 // indirect
|
||||
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
|
||||
|
|
37
go.sum
37
go.sum
|
@ -41,12 +41,15 @@ cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM7
|
|||
cloud.google.com/go/bigtable v1.2.0/go.mod h1:JcVAOl45lrTmQfLj7T6TxyMzIN/3FGGcFm+2xVAli2o=
|
||||
cloud.google.com/go/bigtable v1.10.1/go.mod h1:cyHeKlx6dcZCO0oSQucYdauseD8kIENGuDOJPKMCVg8=
|
||||
cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow=
|
||||
cloud.google.com/go/compute v1.2.0 h1:EKki8sSdvDU0OO9mAXGwPXOTOgPz2l08R0/IutDH11I=
|
||||
cloud.google.com/go/compute v1.2.0/go.mod h1:xlogom/6gr8RJGBe7nT2eGsQYAFUbbv8dbC29qE3Xmw=
|
||||
cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM=
|
||||
cloud.google.com/go/compute v1.4.0 h1:tzSyCe254NKkL8zshJUSoVvI9mcgbFdSpCC44uUNjT0=
|
||||
cloud.google.com/go/compute v1.4.0/go.mod h1:TcrKl8VipL9ZM0wEjdooJ1eet/6YsEV/E/larxxkAdg=
|
||||
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
|
||||
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
|
||||
cloud.google.com/go/iam v0.1.1 h1:4CapQyNFjiksks1/x7jsvsygFPhihslYk5GptIrlX68=
|
||||
cloud.google.com/go/iam v0.1.1/go.mod h1:CKqrcnI/suGpybEHxZ7BMehL0oA4LpdyJdUlTl9jVMw=
|
||||
cloud.google.com/go/iam v0.2.0 h1:Ouq6qif4mZdXkb3SiFMpxvu0JQJB1Yid9TsZ23N6hg8=
|
||||
cloud.google.com/go/iam v0.2.0/go.mod h1:BCK88+tmjAwnZYfOSizmKCTSFjJHCa18t3DpdGEY13Y=
|
||||
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
|
||||
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
|
||||
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
|
||||
|
@ -56,8 +59,8 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo
|
|||
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
|
||||
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
|
||||
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
|
||||
cloud.google.com/go/storage v1.20.0 h1:kv3rQ3clEQdxqokkCCgQo+bxPqcuXiROjxvnKb8Oqdk=
|
||||
cloud.google.com/go/storage v1.20.0/go.mod h1:TiC1o6FxNCG8y5gB7rqCsFZCIYPMPZCO81ppOoEPLGI=
|
||||
cloud.google.com/go/storage v1.21.0 h1:HwnT2u2D309SFDHQII6m18HlrCi3jAXhUMTLOWXYH14=
|
||||
cloud.google.com/go/storage v1.21.0/go.mod h1:XmRlxkgPjlBONznT2dDUU/5XlpU2OjMnKuqnZI01LAA=
|
||||
collectd.org v0.3.0/go.mod h1:A/8DzQBkF6abtvrT2j/AU/4tiBgJWYyh0y/oB/4MlWE=
|
||||
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
|
||||
github.com/Azure/azure-sdk-for-go v41.3.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
||||
|
@ -162,8 +165,8 @@ github.com/aws/aws-sdk-go v1.30.12/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZve
|
|||
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
|
||||
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
|
||||
github.com/aws/aws-sdk-go v1.40.45/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
|
||||
github.com/aws/aws-sdk-go v1.42.52 h1:/+TZ46+0qu9Ph/UwjVrU3SG8OBi87uJLrLiYRNZKbHQ=
|
||||
github.com/aws/aws-sdk-go v1.42.52/go.mod h1:OGr6lGMAKGlG9CVrYnWYDKIyb829c6EVBRjxqjmPepc=
|
||||
github.com/aws/aws-sdk-go v1.43.3 h1:qvCkC4FviA9rR4UvRk4ldr6f3mIJE0VaI3KrsDx1gTk=
|
||||
github.com/aws/aws-sdk-go v1.43.3/go.mod h1:OGr6lGMAKGlG9CVrYnWYDKIyb829c6EVBRjxqjmPepc=
|
||||
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
|
||||
github.com/aws/aws-sdk-go-v2 v1.9.1/go.mod h1:cK/D0BBs0b/oWPIcX/Z/obahJK1TT7IPVjy53i/mX/4=
|
||||
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.8.1/go.mod h1:CM+19rL1+4dFWnOQKwDc7H1KwXTz+h61oUSHyhV0b3o=
|
||||
|
@ -658,8 +661,8 @@ github.com/klauspost/compress v1.13.1/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8
|
|||
github.com/klauspost/compress v1.13.4/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
|
||||
github.com/klauspost/compress v1.13.5/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
|
||||
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
|
||||
github.com/klauspost/compress v1.14.2 h1:S0OHlFk/Gbon/yauFJ4FfJJF5V0fc5HbBTJazi28pRw=
|
||||
github.com/klauspost/compress v1.14.2/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
|
||||
github.com/klauspost/compress v1.14.4 h1:eijASRJcobkVtSt81Olfh7JX43osYLwy5krOJo6YEu4=
|
||||
github.com/klauspost/compress v1.14.4/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
|
||||
github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
|
||||
github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg=
|
||||
github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
|
||||
|
@ -1305,9 +1308,11 @@ golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBc
|
|||
golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220204135822-1c1b9b1eba6a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220209214540-3681064d5158 h1:rm+CHSpPEEW2IsXUib1ThaHIjuBVZjxNgSKmBLFfD4c=
|
||||
golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220222172238-00053529121e h1:AGLQ2aegkB2Y9RY8YdQk+7MDCW9da7YmizIwNIt8NtQ=
|
||||
golang.org/x/sys v0.0.0-20220222172238-00053529121e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
||||
|
@ -1459,8 +1464,10 @@ google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3h
|
|||
google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo=
|
||||
google.golang.org/api v0.64.0/go.mod h1:931CdxA8Rm4t6zqTFGSsgwbAEZ2+GMYurbndwSimebM=
|
||||
google.golang.org/api v0.66.0/go.mod h1:I1dmXYpX7HGwz/ejRxwQp2qj5bFAz93HiCU1C1oYd9M=
|
||||
google.golang.org/api v0.68.0 h1:9eJiHhwJKIYX6sX2fUZxQLi7pDRA/MYu8c12q6WbJik=
|
||||
google.golang.org/api v0.68.0/go.mod h1:sOM8pTpwgflXRhz+oC8H2Dr+UcbMqkPPWNJo88Q7TH8=
|
||||
google.golang.org/api v0.67.0/go.mod h1:ShHKP8E60yPsKNw/w8w+VYaj9H6buA5UqDp8dhbQZ6g=
|
||||
google.golang.org/api v0.69.0/go.mod h1:boanBiw+h5c3s+tBPgEzLDRHfFLWV0qXxRHz3ws7C80=
|
||||
google.golang.org/api v0.70.0 h1:67zQnAE0T2rB0A3CwLSas0K+SbVzSxP+zTLkQLexeiw=
|
||||
google.golang.org/api v0.70.0/go.mod h1:Bs4ZM2HGifEvXwd50TtW70ovgJffJYw2oRCOFU/SkfA=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
|
@ -1543,10 +1550,14 @@ google.golang.org/genproto v0.0.0-20211221195035-429b39de9b1c/go.mod h1:5CzLGKJ6
|
|||
google.golang.org/genproto v0.0.0-20211223182754-3ac035c7e7cb/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/genproto v0.0.0-20220111164026-67b88f271998/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/genproto v0.0.0-20220114231437-d2e6a121cae0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/genproto v0.0.0-20220201184016-50beb8ab5c44/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/genproto v0.0.0-20220204002441-d6cc3cc0770e/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/genproto v0.0.0-20220211171837-173942840c17 h1:2X+CNIheCutWRyKRte8szGxrE5ggtV4U+NKAbh/oLhg=
|
||||
google.golang.org/genproto v0.0.0-20220207164111-0872dc986b00/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
|
||||
google.golang.org/genproto v0.0.0-20220211171837-173942840c17/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
|
||||
google.golang.org/genproto v0.0.0-20220216160803-4663080d8bc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
|
||||
google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
|
||||
google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b h1:wHqTlwZVR0x5EG2S6vKlCq63+Tl/vBoQELitHxqxDOo=
|
||||
google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
|
||||
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
package blockcache
|
||||
|
||||
import (
|
||||
"container/heap"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
|
@ -135,7 +136,7 @@ func (c *Cache) Misses() uint64 {
|
|||
func (c *Cache) cleaner() {
|
||||
ticker := time.NewTicker(57 * time.Second)
|
||||
defer ticker.Stop()
|
||||
perKeyMissesTicker := time.NewTicker(7 * time.Minute)
|
||||
perKeyMissesTicker := time.NewTicker(3 * time.Minute)
|
||||
defer perKeyMissesTicker.Stop()
|
||||
for {
|
||||
select {
|
||||
|
@ -176,7 +177,7 @@ type cache struct {
|
|||
getMaxSizeBytes func() int
|
||||
|
||||
// mu protects all the fields below.
|
||||
mu sync.RWMutex
|
||||
mu sync.Mutex
|
||||
|
||||
// m contains cached blocks keyed by Key.Part and then by Key.Offset
|
||||
m map[interface{}]map[uint64]*cacheEntry
|
||||
|
@ -185,6 +186,9 @@ type cache struct {
|
|||
//
|
||||
// Blocks with less than 2 cache misses aren't stored in the cache in order to prevent the eviction of frequently accessed items.
|
||||
perKeyMisses map[Key]int
|
||||
|
||||
// The heap for removing the least recently used entries from m.
|
||||
lah lastAccessHeap
|
||||
}
|
||||
|
||||
// Key represents a key, which uniquely identifies the Block.
|
||||
|
@ -208,13 +212,17 @@ type Block interface {
|
|||
}
|
||||
|
||||
type cacheEntry struct {
|
||||
// Atomically updated fields must go first in the struct, so they are properly
|
||||
// aligned to 8 bytes on 32-bit architectures.
|
||||
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/212
|
||||
// The timestamp in seconds for the last access to the given entry.
|
||||
lastAccessTime uint64
|
||||
|
||||
// block contains the cached block.
|
||||
block Block
|
||||
// heapIdx is the index for the entry in lastAccessHeap.
|
||||
heapIdx int
|
||||
|
||||
// k contains the associated key for the given block.
|
||||
k Key
|
||||
|
||||
// b contains the cached block.
|
||||
b Block
|
||||
}
|
||||
|
||||
func newCache(getMaxSizeBytes func() int) *cache {
|
||||
|
@ -227,14 +235,16 @@ func newCache(getMaxSizeBytes func() int) *cache {
|
|||
|
||||
func (c *cache) RemoveBlocksForPart(p interface{}) {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
sizeBytes := 0
|
||||
for _, e := range c.m[p] {
|
||||
sizeBytes += e.block.SizeBytes()
|
||||
sizeBytes += e.b.SizeBytes()
|
||||
heap.Remove(&c.lah, e.heapIdx)
|
||||
// do not delete the entry from c.perKeyMisses, since it is removed by cache.cleaner later.
|
||||
}
|
||||
c.updateSizeBytes(-sizeBytes)
|
||||
delete(c.m, p)
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
func (c *cache) updateSizeBytes(n int) {
|
||||
|
@ -248,55 +258,56 @@ func (c *cache) cleanPerKeyMisses() {
|
|||
}
|
||||
|
||||
func (c *cache) cleanByTimeout() {
|
||||
// Delete items accessed more than five minutes ago.
|
||||
// Delete items accessed more than three minutes ago.
|
||||
// This time should be enough for repeated queries.
|
||||
lastAccessTime := fasttime.UnixTimestamp() - 5*60
|
||||
lastAccessTime := fasttime.UnixTimestamp() - 3*60
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
for _, pes := range c.m {
|
||||
for offset, e := range pes {
|
||||
if lastAccessTime > atomic.LoadUint64(&e.lastAccessTime) {
|
||||
c.updateSizeBytes(-e.block.SizeBytes())
|
||||
delete(pes, offset)
|
||||
// do not delete the entry from c.perKeyMisses, since it is removed by cache.cleaner later.
|
||||
}
|
||||
for len(c.lah) > 0 {
|
||||
e := c.lah[0]
|
||||
if lastAccessTime < e.lastAccessTime {
|
||||
break
|
||||
}
|
||||
c.updateSizeBytes(-e.b.SizeBytes())
|
||||
pes := c.m[e.k.Part]
|
||||
delete(pes, e.k.Offset)
|
||||
heap.Pop(&c.lah)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *cache) GetBlock(k Key) Block {
|
||||
atomic.AddUint64(&c.requests, 1)
|
||||
var e *cacheEntry
|
||||
c.mu.RLock()
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
pes := c.m[k.Part]
|
||||
if pes != nil {
|
||||
e = pes[k.Offset]
|
||||
}
|
||||
c.mu.RUnlock()
|
||||
if e != nil {
|
||||
// Fast path - the block already exists in the cache, so return it to the caller.
|
||||
currentTime := fasttime.UnixTimestamp()
|
||||
if atomic.LoadUint64(&e.lastAccessTime) != currentTime {
|
||||
atomic.StoreUint64(&e.lastAccessTime, currentTime)
|
||||
if e != nil {
|
||||
// Fast path - the block already exists in the cache, so return it to the caller.
|
||||
currentTime := fasttime.UnixTimestamp()
|
||||
if e.lastAccessTime != currentTime {
|
||||
e.lastAccessTime = currentTime
|
||||
heap.Fix(&c.lah, e.heapIdx)
|
||||
}
|
||||
return e.b
|
||||
}
|
||||
return e.block
|
||||
}
|
||||
// Slow path - the entry is missing in the cache.
|
||||
c.mu.Lock()
|
||||
c.perKeyMisses[k]++
|
||||
c.mu.Unlock()
|
||||
atomic.AddUint64(&c.misses, 1)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *cache) PutBlock(k Key, b Block) {
|
||||
c.mu.RLock()
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
// If the entry wasn't accessed yet (e.g. c.perKeyMisses[k] == 0), then cache it, since it is likely it will be accessed soon.
|
||||
// Do not cache the entry only if there was only a single unsuccessful attempt to access it.
|
||||
// This may be a one-time-wonder entry, which won't be accessed again, so there is no need to cache it.
|
||||
doNotCache := c.perKeyMisses[k] == 1
|
||||
c.mu.RUnlock()
|
||||
if doNotCache {
|
||||
// Do not cache b if it has been requested only once (aka one-time-wonders items).
|
||||
// This should reduce memory usage for the cache.
|
||||
|
@ -304,9 +315,6 @@ func (c *cache) PutBlock(k Key, b Block) {
|
|||
}
|
||||
|
||||
// Store b in the cache.
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
pes := c.m[k.Part]
|
||||
if pes == nil {
|
||||
pes = make(map[uint64]*cacheEntry)
|
||||
|
@ -317,33 +325,30 @@ func (c *cache) PutBlock(k Key, b Block) {
|
|||
}
|
||||
e := &cacheEntry{
|
||||
lastAccessTime: fasttime.UnixTimestamp(),
|
||||
block: b,
|
||||
k: k,
|
||||
b: b,
|
||||
}
|
||||
heap.Push(&c.lah, e)
|
||||
pes[k.Offset] = e
|
||||
c.updateSizeBytes(e.block.SizeBytes())
|
||||
c.updateSizeBytes(e.b.SizeBytes())
|
||||
maxSizeBytes := c.getMaxSizeBytes()
|
||||
if c.SizeBytes() > maxSizeBytes {
|
||||
// Entries in the cache occupy too much space. Free up space by deleting some entries.
|
||||
for _, pes := range c.m {
|
||||
for offset, e := range pes {
|
||||
c.updateSizeBytes(-e.block.SizeBytes())
|
||||
delete(pes, offset)
|
||||
// do not delete the entry from c.perKeyMisses, since it is removed by cache.cleaner later.
|
||||
if c.SizeBytes() < maxSizeBytes {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
for c.SizeBytes() > maxSizeBytes && len(c.lah) > 0 {
|
||||
e := c.lah[0]
|
||||
c.updateSizeBytes(-e.b.SizeBytes())
|
||||
pes := c.m[e.k.Part]
|
||||
delete(pes, e.k.Offset)
|
||||
heap.Pop(&c.lah)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *cache) Len() int {
|
||||
c.mu.RLock()
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
n := 0
|
||||
for _, m := range c.m {
|
||||
n += len(m)
|
||||
}
|
||||
c.mu.RUnlock()
|
||||
return n
|
||||
}
|
||||
|
||||
|
@ -362,3 +367,35 @@ func (c *cache) Requests() uint64 {
|
|||
func (c *cache) Misses() uint64 {
|
||||
return atomic.LoadUint64(&c.misses)
|
||||
}
|
||||
|
||||
// lastAccessHeap implements heap.Interface
|
||||
type lastAccessHeap []*cacheEntry
|
||||
|
||||
func (lah *lastAccessHeap) Len() int {
|
||||
return len(*lah)
|
||||
}
|
||||
func (lah *lastAccessHeap) Swap(i, j int) {
|
||||
h := *lah
|
||||
a := h[i]
|
||||
b := h[j]
|
||||
a.heapIdx = j
|
||||
b.heapIdx = i
|
||||
h[i] = b
|
||||
h[j] = a
|
||||
}
|
||||
func (lah *lastAccessHeap) Less(i, j int) bool {
|
||||
h := *lah
|
||||
return h[i].lastAccessTime < h[j].lastAccessTime
|
||||
}
|
||||
func (lah *lastAccessHeap) Push(x interface{}) {
|
||||
e := x.(*cacheEntry)
|
||||
h := *lah
|
||||
e.heapIdx = len(h)
|
||||
*lah = append(h, e)
|
||||
}
|
||||
func (lah *lastAccessHeap) Pop() interface{} {
|
||||
h := *lah
|
||||
e := h[len(h)-1]
|
||||
*lah = h[:len(h)-1]
|
||||
return e
|
||||
}
|
||||
|
|
|
@ -4,10 +4,17 @@ import (
|
|||
"fmt"
|
||||
"sync"
|
||||
"testing"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
|
||||
)
|
||||
|
||||
func TestCache(t *testing.T) {
|
||||
const sizeMaxBytes = 1024 * 1024
|
||||
sizeMaxBytes := 64 * 1024
|
||||
// Multiply sizeMaxBytes by the square of available CPU cores
|
||||
// in order to get proper distribution of sizes between cache shards.
|
||||
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2204
|
||||
cpus := cgroup.AvailableCPUs()
|
||||
sizeMaxBytes *= cpus * cpus
|
||||
getMaxSize := func() int {
|
||||
return sizeMaxBytes
|
||||
}
|
||||
|
|
|
@ -15,9 +15,16 @@ import (
|
|||
var idxbCache = blockcache.NewCache(getMaxIndexBlocksCacheSize)
|
||||
var ibCache = blockcache.NewCache(getMaxInmemoryBlocksCacheSize)
|
||||
|
||||
// SetIndexBlocksCacheSize overrides the default size of indexdb/indexBlock cache
|
||||
func SetIndexBlocksCacheSize(size int) {
|
||||
maxIndexBlockCacheSize = size
|
||||
}
|
||||
|
||||
func getMaxIndexBlocksCacheSize() int {
|
||||
maxIndexBlockCacheSizeOnce.Do(func() {
|
||||
maxIndexBlockCacheSize = int(0.15 * float64(memory.Allowed()))
|
||||
if maxIndexBlockCacheSize <= 0 {
|
||||
maxIndexBlockCacheSize = int(0.10 * float64(memory.Allowed()))
|
||||
}
|
||||
})
|
||||
return maxIndexBlockCacheSize
|
||||
}
|
||||
|
@ -27,9 +34,16 @@ var (
|
|||
maxIndexBlockCacheSizeOnce sync.Once
|
||||
)
|
||||
|
||||
// SetDataBlocksCacheSize overrides the default size of indexdb/dataBlocks cache
|
||||
func SetDataBlocksCacheSize(size int) {
|
||||
maxInmemoryBlockCacheSize = size
|
||||
}
|
||||
|
||||
func getMaxInmemoryBlocksCacheSize() int {
|
||||
maxInmemoryBlockCacheSizeOnce.Do(func() {
|
||||
maxInmemoryBlockCacheSize = int(0.4 * float64(memory.Allowed()))
|
||||
if maxInmemoryBlockCacheSize <= 0 {
|
||||
maxInmemoryBlockCacheSize = int(0.25 * float64(memory.Allowed()))
|
||||
}
|
||||
})
|
||||
return maxInmemoryBlockCacheSize
|
||||
}
|
||||
|
|
|
@ -215,7 +215,7 @@ func (sc *ScrapeConfig) mustStop() {
|
|||
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config
|
||||
type FileSDConfig struct {
|
||||
Files []string `yaml:"files"`
|
||||
// `refresh_interval` is ignored. See `-prometheus.fileSDCheckInterval`
|
||||
// `refresh_interval` is ignored. See `-promscrape.fileSDCheckInterval`
|
||||
}
|
||||
|
||||
// StaticConfig represents essential parts for `static_config` section of Prometheus config.
|
||||
|
|
|
@ -42,7 +42,7 @@ type serviceWatcher struct {
|
|||
stopCh chan struct{}
|
||||
}
|
||||
|
||||
// newConsulWatcher creates new watcher and start background service discovery for Consul.
|
||||
// newConsulWatcher creates a new watcher and starts background service discovery for Consul.
|
||||
func newConsulWatcher(client *discoveryutils.Client, sdc *SDConfig, datacenter, namespace string) *consulWatcher {
|
||||
baseQueryArgs := "?dc=" + url.QueryEscape(datacenter)
|
||||
if sdc.AllowStale {
|
||||
|
@ -67,7 +67,10 @@ func newConsulWatcher(client *discoveryutils.Client, sdc *SDConfig, datacenter,
|
|||
services: make(map[string]*serviceWatcher),
|
||||
stopCh: make(chan struct{}),
|
||||
}
|
||||
go cw.watchForServicesUpdates()
|
||||
initCh := make(chan struct{})
|
||||
go cw.watchForServicesUpdates(initCh)
|
||||
// wait for initialization to complete
|
||||
<-initCh
|
||||
return cw
|
||||
}
|
||||
|
||||
|
@ -78,11 +81,57 @@ func (cw *consulWatcher) mustStop() {
|
|||
// TODO: add ability to cancel blocking requests.
|
||||
}
|
||||
|
||||
func (cw *consulWatcher) updateServices(serviceNames []string) {
|
||||
var initWG sync.WaitGroup
|
||||
// Start watchers for new services.
|
||||
cw.servicesLock.Lock()
|
||||
for _, serviceName := range serviceNames {
|
||||
if _, ok := cw.services[serviceName]; ok {
|
||||
// The watcher for serviceName already exists.
|
||||
continue
|
||||
}
|
||||
sw := &serviceWatcher{
|
||||
serviceName: serviceName,
|
||||
stopCh: make(chan struct{}),
|
||||
}
|
||||
cw.services[serviceName] = sw
|
||||
cw.wg.Add(1)
|
||||
serviceWatchersCreated.Inc()
|
||||
initWG.Add(1)
|
||||
go func() {
|
||||
serviceWatchersCount.Inc()
|
||||
sw.watchForServiceNodesUpdates(cw, &initWG)
|
||||
serviceWatchersCount.Dec()
|
||||
cw.wg.Done()
|
||||
}()
|
||||
}
|
||||
|
||||
// Stop watchers for removed services.
|
||||
newServiceNamesMap := make(map[string]struct{}, len(serviceNames))
|
||||
for _, serviceName := range serviceNames {
|
||||
newServiceNamesMap[serviceName] = struct{}{}
|
||||
}
|
||||
for serviceName, sw := range cw.services {
|
||||
if _, ok := newServiceNamesMap[serviceName]; ok {
|
||||
continue
|
||||
}
|
||||
close(sw.stopCh)
|
||||
delete(cw.services, serviceName)
|
||||
serviceWatchersStopped.Inc()
|
||||
|
||||
// Do not wait for the watcher goroutine to exit, since this may take up to maxWaitTime
|
||||
// if it is blocked in Consul API request.
|
||||
}
|
||||
cw.servicesLock.Unlock()
|
||||
|
||||
// Wait for initialization to complete.
|
||||
initWG.Wait()
|
||||
}
|
||||
|
||||
// watchForServicesUpdates watches for new services and updates it in cw.
|
||||
func (cw *consulWatcher) watchForServicesUpdates() {
|
||||
checkInterval := getCheckInterval()
|
||||
ticker := time.NewTicker(checkInterval / 2)
|
||||
defer ticker.Stop()
|
||||
//
|
||||
// watchForServicesUpdates closes the initCh once the initialization is complete and the first discovery iteration is done.
|
||||
func (cw *consulWatcher) watchForServicesUpdates(initCh chan struct{}) {
|
||||
index := int64(0)
|
||||
clientAddr := cw.client.Addr()
|
||||
f := func() {
|
||||
|
@ -95,51 +144,19 @@ func (cw *consulWatcher) watchForServicesUpdates() {
|
|||
// Nothing changed.
|
||||
return
|
||||
}
|
||||
|
||||
cw.servicesLock.Lock()
|
||||
// Start watchers for new services.
|
||||
for _, serviceName := range serviceNames {
|
||||
if _, ok := cw.services[serviceName]; ok {
|
||||
// The watcher for serviceName already exists.
|
||||
continue
|
||||
}
|
||||
sw := &serviceWatcher{
|
||||
serviceName: serviceName,
|
||||
stopCh: make(chan struct{}),
|
||||
}
|
||||
cw.services[serviceName] = sw
|
||||
cw.wg.Add(1)
|
||||
serviceWatchersCreated.Inc()
|
||||
go func() {
|
||||
serviceWatchersCount.Inc()
|
||||
sw.watchForServiceNodesUpdates(cw)
|
||||
serviceWatchersCount.Dec()
|
||||
cw.wg.Done()
|
||||
}()
|
||||
}
|
||||
// Stop watchers for removed services.
|
||||
newServiceNamesMap := make(map[string]struct{}, len(serviceNames))
|
||||
for _, serviceName := range serviceNames {
|
||||
newServiceNamesMap[serviceName] = struct{}{}
|
||||
}
|
||||
for serviceName, sw := range cw.services {
|
||||
if _, ok := newServiceNamesMap[serviceName]; ok {
|
||||
continue
|
||||
}
|
||||
close(sw.stopCh)
|
||||
delete(cw.services, serviceName)
|
||||
serviceWatchersStopped.Inc()
|
||||
|
||||
// Do not wait for the watcher goroutine to exit, since this may take for up to maxWaitTime
|
||||
// if it is blocked in Consul API request.
|
||||
}
|
||||
cw.servicesLock.Unlock()
|
||||
|
||||
cw.updateServices(serviceNames)
|
||||
index = newIndex
|
||||
}
|
||||
|
||||
logger.Infof("started Consul service watcher for %q", clientAddr)
|
||||
f()
|
||||
|
||||
// send signal that initialization is complete
|
||||
close(initCh)
|
||||
|
||||
checkInterval := getCheckInterval()
|
||||
ticker := time.NewTicker(checkInterval / 2)
|
||||
defer ticker.Stop()
|
||||
for {
|
||||
select {
|
||||
case <-ticker.C:
|
||||
|
@ -196,10 +213,9 @@ func (cw *consulWatcher) getBlockingServiceNames(index int64) ([]string, int64,
|
|||
}
|
||||
|
||||
// watchForServiceNodesUpdates watches for Consul serviceNode changes for the given serviceName.
|
||||
func (sw *serviceWatcher) watchForServiceNodesUpdates(cw *consulWatcher) {
|
||||
checkInterval := getCheckInterval()
|
||||
ticker := time.NewTicker(checkInterval / 2)
|
||||
defer ticker.Stop()
|
||||
//
|
||||
// watchForServiceNodesUpdates calls initWG.Done() once the initialization is complete and the first discovery iteration is done.
|
||||
func (sw *serviceWatcher) watchForServiceNodesUpdates(cw *consulWatcher, initWG *sync.WaitGroup) {
|
||||
clientAddr := cw.client.Addr()
|
||||
index := int64(0)
|
||||
path := "/v1/health/service/" + sw.serviceName + cw.serviceNodesQueryArgs
|
||||
|
@ -227,6 +243,12 @@ func (sw *serviceWatcher) watchForServiceNodesUpdates(cw *consulWatcher) {
|
|||
}
|
||||
|
||||
f()
|
||||
// Notify caller that initialization is complete
|
||||
initWG.Done()
|
||||
|
||||
checkInterval := getCheckInterval()
|
||||
ticker := time.NewTicker(checkInterval / 2)
|
||||
defer ticker.Stop()
|
||||
for {
|
||||
select {
|
||||
case <-ticker.C:
|
||||
|
|
|
@ -35,7 +35,7 @@ var (
|
|||
"The path can point to local file and to http url. "+
|
||||
"See https://docs.victoriametrics.com/#how-to-scrape-prometheus-exporters-such-as-node-exporter for details")
|
||||
|
||||
fileSDCheckInterval = flag.Duration("promscrape.fileSDCheckInterval", 30*time.Second, "Interval for checking for changes in 'file_sd_config'. "+
|
||||
fileSDCheckInterval = flag.Duration("promscrape.fileSDCheckInterval", 5*time.Minute, "Interval for checking for changes in 'file_sd_config'. "+
|
||||
"See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details")
|
||||
)
|
||||
|
||||
|
|
|
@ -45,14 +45,20 @@ job={%q= jobName %} (0/0 up)
|
|||
<script>
|
||||
function collapse_all() {
|
||||
for (var i = 0; i < {%d len(jts) %}; i++) {
|
||||
var id = "table-" + i;
|
||||
document.getElementById(id).style.display = 'none';
|
||||
let el = document.getElementById("table-" + i);
|
||||
if (!el) {
|
||||
continue;
|
||||
}
|
||||
el.style.display = 'none';
|
||||
}
|
||||
}
|
||||
function expand_all() {
|
||||
for (var i = 0; i < {%d len(jts) %}; i++) {
|
||||
var id = "table-" + i;
|
||||
document.getElementById(id).style.display = 'block';
|
||||
let el = document.getElementById("table-" + i);
|
||||
if (!el) {
|
||||
continue;
|
||||
}
|
||||
el.style.display = 'block';
|
||||
}
|
||||
}
|
||||
</script>
|
||||
|
|
|
@ -199,276 +199,276 @@ func StreamTargetsResponseHTML(qw422016 *qt422016.Writer, jts []jobTargetsStatus
|
|||
//line lib/promscrape/targetstatus.qtpl:47
|
||||
qw422016.N().D(len(jts))
|
||||
//line lib/promscrape/targetstatus.qtpl:47
|
||||
qw422016.N().S(`; i++) {var id = "table-" + i;document.getElementById(id).style.display = 'none';}}function expand_all() {for (var i = 0; i <`)
|
||||
//line lib/promscrape/targetstatus.qtpl:53
|
||||
qw422016.N().S(`; i++) {let el = document.getElementById("table-" + i);if (!el) {continue;}el.style.display = 'none';}}function expand_all() {for (var i = 0; i <`)
|
||||
//line lib/promscrape/targetstatus.qtpl:56
|
||||
qw422016.N().D(len(jts))
|
||||
//line lib/promscrape/targetstatus.qtpl:53
|
||||
qw422016.N().S(`; i++) {var id = "table-" + i;document.getElementById(id).style.display = 'block';}}</script></head><body class="m-3"><h1>Scrape targets</h1><div style="padding: 3px"><button type="button" class="btn`)
|
||||
//line lib/promscrape/targetstatus.qtpl:63
|
||||
//line lib/promscrape/targetstatus.qtpl:56
|
||||
qw422016.N().S(`; i++) {let el = document.getElementById("table-" + i);if (!el) {continue;}el.style.display = 'block';}}</script></head><body class="m-3"><h1>Scrape targets</h1><div style="padding: 3px"><button type="button" class="btn`)
|
||||
//line lib/promscrape/targetstatus.qtpl:69
|
||||
qw422016.N().S(` `)
|
||||
//line lib/promscrape/targetstatus.qtpl:63
|
||||
//line lib/promscrape/targetstatus.qtpl:69
|
||||
if !onlyUnhealthy {
|
||||
//line lib/promscrape/targetstatus.qtpl:63
|
||||
//line lib/promscrape/targetstatus.qtpl:69
|
||||
qw422016.N().S(`btn-primary`)
|
||||
//line lib/promscrape/targetstatus.qtpl:63
|
||||
//line lib/promscrape/targetstatus.qtpl:69
|
||||
} else {
|
||||
//line lib/promscrape/targetstatus.qtpl:63
|
||||
//line lib/promscrape/targetstatus.qtpl:69
|
||||
qw422016.N().S(`btn-secondary`)
|
||||
//line lib/promscrape/targetstatus.qtpl:63
|
||||
//line lib/promscrape/targetstatus.qtpl:69
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:63
|
||||
//line lib/promscrape/targetstatus.qtpl:69
|
||||
qw422016.N().S(`" onclick="location.href='targets'">All</button><button type="button" class="btn`)
|
||||
//line lib/promscrape/targetstatus.qtpl:66
|
||||
//line lib/promscrape/targetstatus.qtpl:72
|
||||
qw422016.N().S(` `)
|
||||
//line lib/promscrape/targetstatus.qtpl:66
|
||||
//line lib/promscrape/targetstatus.qtpl:72
|
||||
if onlyUnhealthy {
|
||||
//line lib/promscrape/targetstatus.qtpl:66
|
||||
//line lib/promscrape/targetstatus.qtpl:72
|
||||
qw422016.N().S(`btn-primary`)
|
||||
//line lib/promscrape/targetstatus.qtpl:66
|
||||
//line lib/promscrape/targetstatus.qtpl:72
|
||||
} else {
|
||||
//line lib/promscrape/targetstatus.qtpl:66
|
||||
//line lib/promscrape/targetstatus.qtpl:72
|
||||
qw422016.N().S(`btn-secondary`)
|
||||
//line lib/promscrape/targetstatus.qtpl:66
|
||||
//line lib/promscrape/targetstatus.qtpl:72
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:66
|
||||
//line lib/promscrape/targetstatus.qtpl:72
|
||||
qw422016.N().S(`" onclick="location.href='targets?show_only_unhealthy=true'">Unhealthy</button><button type="button" class="btn btn-primary" onclick="collapse_all()">Collapse all</button><button type="button" class="btn btn-secondary" onclick="expand_all()">Expand all</button></div>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:76
|
||||
//line lib/promscrape/targetstatus.qtpl:82
|
||||
for i, js := range jts {
|
||||
//line lib/promscrape/targetstatus.qtpl:77
|
||||
//line lib/promscrape/targetstatus.qtpl:83
|
||||
if onlyUnhealthy && js.upCount == js.targetsTotal {
|
||||
//line lib/promscrape/targetstatus.qtpl:77
|
||||
//line lib/promscrape/targetstatus.qtpl:83
|
||||
continue
|
||||
//line lib/promscrape/targetstatus.qtpl:77
|
||||
//line lib/promscrape/targetstatus.qtpl:83
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:77
|
||||
//line lib/promscrape/targetstatus.qtpl:83
|
||||
qw422016.N().S(`<div><h4>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:80
|
||||
//line lib/promscrape/targetstatus.qtpl:86
|
||||
qw422016.E().S(js.job)
|
||||
//line lib/promscrape/targetstatus.qtpl:80
|
||||
//line lib/promscrape/targetstatus.qtpl:86
|
||||
qw422016.N().S(` `)
|
||||
//line lib/promscrape/targetstatus.qtpl:80
|
||||
//line lib/promscrape/targetstatus.qtpl:86
|
||||
qw422016.N().S(`(`)
|
||||
//line lib/promscrape/targetstatus.qtpl:80
|
||||
//line lib/promscrape/targetstatus.qtpl:86
|
||||
qw422016.N().D(js.upCount)
|
||||
//line lib/promscrape/targetstatus.qtpl:80
|
||||
//line lib/promscrape/targetstatus.qtpl:86
|
||||
qw422016.N().S(`/`)
|
||||
//line lib/promscrape/targetstatus.qtpl:80
|
||||
//line lib/promscrape/targetstatus.qtpl:86
|
||||
qw422016.N().D(js.targetsTotal)
|
||||
//line lib/promscrape/targetstatus.qtpl:80
|
||||
//line lib/promscrape/targetstatus.qtpl:86
|
||||
qw422016.N().S(` `)
|
||||
//line lib/promscrape/targetstatus.qtpl:80
|
||||
//line lib/promscrape/targetstatus.qtpl:86
|
||||
qw422016.N().S(`up)<button type="button" class="btn btn-primary" onclick="document.getElementById('table-`)
|
||||
//line lib/promscrape/targetstatus.qtpl:81
|
||||
//line lib/promscrape/targetstatus.qtpl:87
|
||||
qw422016.N().D(i)
|
||||
//line lib/promscrape/targetstatus.qtpl:81
|
||||
//line lib/promscrape/targetstatus.qtpl:87
|
||||
qw422016.N().S(`').style.display='none'">collapse</button><button type="button" class="btn btn-secondary" onclick="document.getElementById('table-`)
|
||||
//line lib/promscrape/targetstatus.qtpl:82
|
||||
//line lib/promscrape/targetstatus.qtpl:88
|
||||
qw422016.N().D(i)
|
||||
//line lib/promscrape/targetstatus.qtpl:82
|
||||
//line lib/promscrape/targetstatus.qtpl:88
|
||||
qw422016.N().S(`').style.display='block'">expand</button></h4><div id="table-`)
|
||||
//line lib/promscrape/targetstatus.qtpl:84
|
||||
//line lib/promscrape/targetstatus.qtpl:90
|
||||
qw422016.N().D(i)
|
||||
//line lib/promscrape/targetstatus.qtpl:84
|
||||
//line lib/promscrape/targetstatus.qtpl:90
|
||||
qw422016.N().S(`"><table class="table table-striped table-hover table-bordered table-sm"><thead><tr><th scope="col">Endpoint</th><th scope="col">State</th><th scope="col" title="scrape target labels">Labels</th><th scope="col" title="total scrapes">Scrapes</th><th scope="col" title="total scrape errors">Errors</th><th scope="col" title="the time of the last scrape">Last Scrape</th><th scope="col" title="the duration of the last scrape">Duration</th><th scope="col" title="the number of metrics scraped during the last scrape">Samples</th><th scope="col" title="error from the last scrape (if any)">Last error</th></tr></thead><tbody>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:100
|
||||
//line lib/promscrape/targetstatus.qtpl:106
|
||||
for _, ts := range js.targetsStatus {
|
||||
//line lib/promscrape/targetstatus.qtpl:102
|
||||
//line lib/promscrape/targetstatus.qtpl:108
|
||||
endpoint := ts.sw.Config.ScrapeURL
|
||||
targetID := getTargetID(ts.sw)
|
||||
lastScrapeTime := ts.getDurationFromLastScrape()
|
||||
|
||||
//line lib/promscrape/targetstatus.qtpl:106
|
||||
//line lib/promscrape/targetstatus.qtpl:112
|
||||
if onlyUnhealthy && ts.up {
|
||||
//line lib/promscrape/targetstatus.qtpl:106
|
||||
//line lib/promscrape/targetstatus.qtpl:112
|
||||
continue
|
||||
//line lib/promscrape/targetstatus.qtpl:106
|
||||
//line lib/promscrape/targetstatus.qtpl:112
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:106
|
||||
//line lib/promscrape/targetstatus.qtpl:112
|
||||
qw422016.N().S(`<tr`)
|
||||
//line lib/promscrape/targetstatus.qtpl:107
|
||||
//line lib/promscrape/targetstatus.qtpl:113
|
||||
if !ts.up {
|
||||
//line lib/promscrape/targetstatus.qtpl:107
|
||||
//line lib/promscrape/targetstatus.qtpl:113
|
||||
qw422016.N().S(` `)
|
||||
//line lib/promscrape/targetstatus.qtpl:107
|
||||
//line lib/promscrape/targetstatus.qtpl:113
|
||||
qw422016.N().S(`class="alert alert-danger" role="alert"`)
|
||||
//line lib/promscrape/targetstatus.qtpl:107
|
||||
//line lib/promscrape/targetstatus.qtpl:113
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:107
|
||||
//line lib/promscrape/targetstatus.qtpl:113
|
||||
qw422016.N().S(`><td><a href="`)
|
||||
//line lib/promscrape/targetstatus.qtpl:108
|
||||
//line lib/promscrape/targetstatus.qtpl:114
|
||||
qw422016.E().S(endpoint)
|
||||
//line lib/promscrape/targetstatus.qtpl:108
|
||||
//line lib/promscrape/targetstatus.qtpl:114
|
||||
qw422016.N().S(`" target="_blank">`)
|
||||
//line lib/promscrape/targetstatus.qtpl:108
|
||||
//line lib/promscrape/targetstatus.qtpl:114
|
||||
qw422016.E().S(endpoint)
|
||||
//line lib/promscrape/targetstatus.qtpl:108
|
||||
//line lib/promscrape/targetstatus.qtpl:114
|
||||
qw422016.N().S(`</a> (<a href="target_response?id=`)
|
||||
//line lib/promscrape/targetstatus.qtpl:109
|
||||
//line lib/promscrape/targetstatus.qtpl:115
|
||||
qw422016.E().S(targetID)
|
||||
//line lib/promscrape/targetstatus.qtpl:109
|
||||
//line lib/promscrape/targetstatus.qtpl:115
|
||||
qw422016.N().S(`" target="_blank" title="click to fetch target response on behalf of the scraper">response</a>)</td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:111
|
||||
//line lib/promscrape/targetstatus.qtpl:117
|
||||
if ts.up {
|
||||
//line lib/promscrape/targetstatus.qtpl:111
|
||||
//line lib/promscrape/targetstatus.qtpl:117
|
||||
qw422016.N().S(`UP`)
|
||||
//line lib/promscrape/targetstatus.qtpl:111
|
||||
//line lib/promscrape/targetstatus.qtpl:117
|
||||
} else {
|
||||
//line lib/promscrape/targetstatus.qtpl:111
|
||||
//line lib/promscrape/targetstatus.qtpl:117
|
||||
qw422016.N().S(`DOWN`)
|
||||
//line lib/promscrape/targetstatus.qtpl:111
|
||||
//line lib/promscrape/targetstatus.qtpl:117
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:111
|
||||
//line lib/promscrape/targetstatus.qtpl:117
|
||||
qw422016.N().S(`</td><td><div title="click to show original labels" onclick="document.getElementById('original_labels_`)
|
||||
//line lib/promscrape/targetstatus.qtpl:113
|
||||
//line lib/promscrape/targetstatus.qtpl:119
|
||||
qw422016.E().S(targetID)
|
||||
//line lib/promscrape/targetstatus.qtpl:113
|
||||
//line lib/promscrape/targetstatus.qtpl:119
|
||||
qw422016.N().S(`').style.display='block'">`)
|
||||
//line lib/promscrape/targetstatus.qtpl:114
|
||||
//line lib/promscrape/targetstatus.qtpl:120
|
||||
streamformatLabel(qw422016, promrelabel.FinalizeLabels(nil, ts.sw.Config.Labels))
|
||||
//line lib/promscrape/targetstatus.qtpl:114
|
||||
//line lib/promscrape/targetstatus.qtpl:120
|
||||
qw422016.N().S(`</div><div style="display:none" id="original_labels_`)
|
||||
//line lib/promscrape/targetstatus.qtpl:116
|
||||
//line lib/promscrape/targetstatus.qtpl:122
|
||||
qw422016.E().S(targetID)
|
||||
//line lib/promscrape/targetstatus.qtpl:116
|
||||
//line lib/promscrape/targetstatus.qtpl:122
|
||||
qw422016.N().S(`">`)
|
||||
//line lib/promscrape/targetstatus.qtpl:117
|
||||
streamformatLabel(qw422016, ts.sw.Config.OriginalLabels)
|
||||
//line lib/promscrape/targetstatus.qtpl:117
|
||||
qw422016.N().S(`</div></td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:120
|
||||
qw422016.N().D(ts.scrapesTotal)
|
||||
//line lib/promscrape/targetstatus.qtpl:120
|
||||
qw422016.N().S(`</td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:121
|
||||
qw422016.N().D(ts.scrapesFailed)
|
||||
//line lib/promscrape/targetstatus.qtpl:121
|
||||
qw422016.N().S(`</td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:123
|
||||
if lastScrapeTime < 365*24*time.Hour {
|
||||
//line lib/promscrape/targetstatus.qtpl:124
|
||||
qw422016.N().FPrec(lastScrapeTime.Seconds(), 3)
|
||||
//line lib/promscrape/targetstatus.qtpl:124
|
||||
qw422016.N().S(`s ago`)
|
||||
//line lib/promscrape/targetstatus.qtpl:125
|
||||
} else {
|
||||
//line lib/promscrape/targetstatus.qtpl:125
|
||||
qw422016.N().S(`none`)
|
||||
//line lib/promscrape/targetstatus.qtpl:127
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:127
|
||||
qw422016.N().S(`<td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:128
|
||||
qw422016.N().D(int(ts.scrapeDuration))
|
||||
//line lib/promscrape/targetstatus.qtpl:128
|
||||
qw422016.N().S(`ms</td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:129
|
||||
qw422016.N().D(ts.samplesScraped)
|
||||
//line lib/promscrape/targetstatus.qtpl:129
|
||||
streamformatLabel(qw422016, ts.sw.Config.OriginalLabels)
|
||||
//line lib/promscrape/targetstatus.qtpl:123
|
||||
qw422016.N().S(`</div></td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:126
|
||||
qw422016.N().D(ts.scrapesTotal)
|
||||
//line lib/promscrape/targetstatus.qtpl:126
|
||||
qw422016.N().S(`</td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:127
|
||||
qw422016.N().D(ts.scrapesFailed)
|
||||
//line lib/promscrape/targetstatus.qtpl:127
|
||||
qw422016.N().S(`</td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:129
|
||||
if lastScrapeTime < 365*24*time.Hour {
|
||||
//line lib/promscrape/targetstatus.qtpl:130
|
||||
if ts.err != nil {
|
||||
//line lib/promscrape/targetstatus.qtpl:130
|
||||
qw422016.E().S(ts.err.Error())
|
||||
qw422016.N().FPrec(lastScrapeTime.Seconds(), 3)
|
||||
//line lib/promscrape/targetstatus.qtpl:130
|
||||
qw422016.N().S(`s ago`)
|
||||
//line lib/promscrape/targetstatus.qtpl:131
|
||||
} else {
|
||||
//line lib/promscrape/targetstatus.qtpl:131
|
||||
qw422016.N().S(`none`)
|
||||
//line lib/promscrape/targetstatus.qtpl:133
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:130
|
||||
//line lib/promscrape/targetstatus.qtpl:133
|
||||
qw422016.N().S(`<td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:134
|
||||
qw422016.N().D(int(ts.scrapeDuration))
|
||||
//line lib/promscrape/targetstatus.qtpl:134
|
||||
qw422016.N().S(`ms</td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:135
|
||||
qw422016.N().D(ts.samplesScraped)
|
||||
//line lib/promscrape/targetstatus.qtpl:135
|
||||
qw422016.N().S(`</td><td>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:136
|
||||
if ts.err != nil {
|
||||
//line lib/promscrape/targetstatus.qtpl:136
|
||||
qw422016.E().S(ts.err.Error())
|
||||
//line lib/promscrape/targetstatus.qtpl:136
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:136
|
||||
qw422016.N().S(`</td></tr>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:132
|
||||
//line lib/promscrape/targetstatus.qtpl:138
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:132
|
||||
//line lib/promscrape/targetstatus.qtpl:138
|
||||
qw422016.N().S(`</tbody></table></div></div>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:137
|
||||
//line lib/promscrape/targetstatus.qtpl:143
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:139
|
||||
//line lib/promscrape/targetstatus.qtpl:145
|
||||
for _, jobName := range emptyJobs {
|
||||
//line lib/promscrape/targetstatus.qtpl:139
|
||||
//line lib/promscrape/targetstatus.qtpl:145
|
||||
qw422016.N().S(`<div><h4><a>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:142
|
||||
//line lib/promscrape/targetstatus.qtpl:148
|
||||
qw422016.E().S(jobName)
|
||||
//line lib/promscrape/targetstatus.qtpl:142
|
||||
//line lib/promscrape/targetstatus.qtpl:148
|
||||
qw422016.N().S(`(0/0 up)</a></h4><table class="table table-striped table-hover table-bordered table-sm"><thead><tr><th scope="col">Endpoint</th><th scope="col">State</th><th scope="col">Labels</th><th scope="col">Last Scrape</th><th scope="col">Scrape Duration</th><th scope="col">Samples Scraped</th><th scope="col">Error</th></tr></thead></table></div>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:158
|
||||
//line lib/promscrape/targetstatus.qtpl:164
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:158
|
||||
//line lib/promscrape/targetstatus.qtpl:164
|
||||
qw422016.N().S(`</body></html>`)
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
}
|
||||
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
func WriteTargetsResponseHTML(qq422016 qtio422016.Writer, jts []jobTargetsStatuses, emptyJobs []string, onlyUnhealthy bool) {
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
qw422016 := qt422016.AcquireWriter(qq422016)
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
StreamTargetsResponseHTML(qw422016, jts, emptyJobs, onlyUnhealthy)
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
qt422016.ReleaseWriter(qw422016)
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
}
|
||||
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
func TargetsResponseHTML(jts []jobTargetsStatuses, emptyJobs []string, onlyUnhealthy bool) string {
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
qb422016 := qt422016.AcquireByteBuffer()
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
WriteTargetsResponseHTML(qb422016, jts, emptyJobs, onlyUnhealthy)
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
qs422016 := string(qb422016.B)
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
qt422016.ReleaseByteBuffer(qb422016)
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
return qs422016
|
||||
//line lib/promscrape/targetstatus.qtpl:161
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
}
|
||||
|
||||
//line lib/promscrape/targetstatus.qtpl:163
|
||||
//line lib/promscrape/targetstatus.qtpl:169
|
||||
func streamformatLabel(qw422016 *qt422016.Writer, labels []prompbmarshal.Label) {
|
||||
//line lib/promscrape/targetstatus.qtpl:163
|
||||
//line lib/promscrape/targetstatus.qtpl:169
|
||||
qw422016.N().S(`{`)
|
||||
//line lib/promscrape/targetstatus.qtpl:165
|
||||
//line lib/promscrape/targetstatus.qtpl:171
|
||||
for i, label := range labels {
|
||||
//line lib/promscrape/targetstatus.qtpl:166
|
||||
//line lib/promscrape/targetstatus.qtpl:172
|
||||
qw422016.E().S(label.Name)
|
||||
//line lib/promscrape/targetstatus.qtpl:166
|
||||
//line lib/promscrape/targetstatus.qtpl:172
|
||||
qw422016.N().S(`=`)
|
||||
//line lib/promscrape/targetstatus.qtpl:166
|
||||
//line lib/promscrape/targetstatus.qtpl:172
|
||||
qw422016.E().Q(label.Value)
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
//line lib/promscrape/targetstatus.qtpl:173
|
||||
if i+1 < len(labels) {
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
//line lib/promscrape/targetstatus.qtpl:173
|
||||
qw422016.N().S(`,`)
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
//line lib/promscrape/targetstatus.qtpl:173
|
||||
qw422016.N().S(` `)
|
||||
//line lib/promscrape/targetstatus.qtpl:167
|
||||
//line lib/promscrape/targetstatus.qtpl:173
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:168
|
||||
//line lib/promscrape/targetstatus.qtpl:174
|
||||
}
|
||||
//line lib/promscrape/targetstatus.qtpl:168
|
||||
//line lib/promscrape/targetstatus.qtpl:174
|
||||
qw422016.N().S(`}`)
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
}
|
||||
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
func writeformatLabel(qq422016 qtio422016.Writer, labels []prompbmarshal.Label) {
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
qw422016 := qt422016.AcquireWriter(qq422016)
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
streamformatLabel(qw422016, labels)
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
qt422016.ReleaseWriter(qw422016)
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
}
|
||||
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
func formatLabel(labels []prompbmarshal.Label) string {
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
qb422016 := qt422016.AcquireByteBuffer()
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
writeformatLabel(qb422016, labels)
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
qs422016 := string(qb422016.B)
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
qt422016.ReleaseByteBuffer(qb422016)
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
return qs422016
|
||||
//line lib/promscrape/targetstatus.qtpl:170
|
||||
//line lib/promscrape/targetstatus.qtpl:176
|
||||
}
|
||||
|
|
|
@ -779,19 +779,21 @@ func (is *indexSearch) searchTagKeysOnDate(tks map[string]struct{}, date uint64,
|
|||
continue
|
||||
}
|
||||
key := mp.Tag.Key
|
||||
if isArtificialTagKey(key) {
|
||||
// Skip artificially created tag key.
|
||||
continue
|
||||
if !isArtificialTagKey(key) {
|
||||
tks[string(key)] = struct{}{}
|
||||
}
|
||||
// Store tag key.
|
||||
tks[string(key)] = struct{}{}
|
||||
|
||||
// Search for the next tag key.
|
||||
// The last char in kb.B must be tagSeparatorChar.
|
||||
// Just increment it in order to jump to the next tag key.
|
||||
kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
|
||||
kb.B = encoding.MarshalUint64(kb.B, date)
|
||||
kb.B = marshalTagValue(kb.B, key)
|
||||
if len(key) > 0 && key[0] == compositeTagKeyPrefix {
|
||||
// skip composite tag entries
|
||||
kb.B = append(kb.B, compositeTagKeyPrefix)
|
||||
} else {
|
||||
kb.B = marshalTagValue(kb.B, key)
|
||||
}
|
||||
kb.B[len(kb.B)-1]++
|
||||
ts.Seek(kb.B)
|
||||
}
|
||||
|
@ -858,18 +860,20 @@ func (is *indexSearch) searchTagKeys(tks map[string]struct{}, maxTagKeys int) er
|
|||
continue
|
||||
}
|
||||
key := mp.Tag.Key
|
||||
if isArtificialTagKey(key) {
|
||||
// Skip artificially created tag keys.
|
||||
continue
|
||||
if !isArtificialTagKey(key) {
|
||||
tks[string(key)] = struct{}{}
|
||||
}
|
||||
// Store tag key.
|
||||
tks[string(key)] = struct{}{}
|
||||
|
||||
// Search for the next tag key.
|
||||
// The last char in kb.B must be tagSeparatorChar.
|
||||
// Just increment it in order to jump to the next tag key.
|
||||
kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixTagToMetricIDs)
|
||||
kb.B = marshalTagValue(kb.B, key)
|
||||
if len(key) > 0 && key[0] == compositeTagKeyPrefix {
|
||||
// skip composite tag entries
|
||||
kb.B = append(kb.B, compositeTagKeyPrefix)
|
||||
} else {
|
||||
kb.B = marshalTagValue(kb.B, key)
|
||||
}
|
||||
kb.B[len(kb.B)-1]++
|
||||
ts.Seek(kb.B)
|
||||
}
|
||||
|
@ -977,10 +981,10 @@ func (is *indexSearch) searchTagValuesOnDate(tvs map[string]struct{}, tagKey []b
|
|||
if mp.IsDeletedTag(dmis) {
|
||||
continue
|
||||
}
|
||||
|
||||
// Store tag value
|
||||
if string(mp.Tag.Key) != string(tagKey) {
|
||||
break
|
||||
}
|
||||
tvs[string(mp.Tag.Value)] = struct{}{}
|
||||
|
||||
if mp.MetricIDsLen() < maxMetricIDsPerRow/2 {
|
||||
// There is no need in searching for the next tag value,
|
||||
// since it is likely it is located in the next row,
|
||||
|
@ -1062,10 +1066,10 @@ func (is *indexSearch) searchTagValues(tvs map[string]struct{}, tagKey []byte, m
|
|||
if mp.IsDeletedTag(dmis) {
|
||||
continue
|
||||
}
|
||||
|
||||
// Store tag value
|
||||
if string(mp.Tag.Key) != string(tagKey) {
|
||||
break
|
||||
}
|
||||
tvs[string(mp.Tag.Value)] = struct{}{}
|
||||
|
||||
if mp.MetricIDsLen() < maxMetricIDsPerRow/2 {
|
||||
// There is no need in searching for the next tag value,
|
||||
// since it is likely it is located in the next row,
|
||||
|
@ -1396,6 +1400,14 @@ func (is *indexSearch) getTSDBStatusWithFiltersForDate(tfss []*TagFilters, date
|
|||
}
|
||||
if isArtificialTagKey(tmp) {
|
||||
// Skip artificially created tag keys.
|
||||
kb.B = append(kb.B[:0], prefix...)
|
||||
if len(tmp) > 0 && tmp[0] == compositeTagKeyPrefix {
|
||||
kb.B = append(kb.B, compositeTagKeyPrefix)
|
||||
} else {
|
||||
kb.B = marshalTagValue(kb.B, tmp)
|
||||
}
|
||||
kb.B[len(kb.B)-1]++
|
||||
ts.Seek(kb.B)
|
||||
continue
|
||||
}
|
||||
if len(tmp) == 0 {
|
||||
|
|
|
@ -202,7 +202,7 @@ func OpenStorage(path string, retentionMsecs int64, maxHourlySeries, maxDailySer

// Load caches.
mem := memory.Allowed()
s.tsidCache = s.mustLoadCache("MetricName->TSID", "metricName_tsid", int(float64(mem)*0.35))
s.tsidCache = s.mustLoadCache("MetricName->TSID", "metricName_tsid", getTSIDCacheSize())
s.metricIDCache = s.mustLoadCache("MetricID->TSID", "metricID_tsid", mem/16)
s.metricNameCache = s.mustLoadCache("MetricID->MetricName", "metricID_metricName", mem/10)
s.dateMetricIDCache = newDateMetricIDCache()
@ -271,6 +271,20 @@ func OpenStorage(path string, retentionMsecs int64, maxHourlySeries, maxDailySer
return s, nil
}

var maxTSIDCacheSize int

// SetTSIDCacheSize overrides the default size of storage/tsid cache
func SetTSIDCacheSize(size int) {
	maxTSIDCacheSize = size
}

func getTSIDCacheSize() int {
	if maxTSIDCacheSize <= 0 {
		return int(float64(memory.Allowed()) * 0.37)
	}
	return maxTSIDCacheSize
}

func (s *Storage) getDeletedMetricIDs() *uint64set.Set {
	return s.deletedMetricIDs.Load().(*uint64set.Set)
}
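The hunk above exposes SetTSIDCacheSize so callers can override the MetricName->TSID cache size before the storage is opened. The sketch below shows one way a caller might wire it to a command-line flag; the flag name, the zero arguments passed to OpenStorage, and the overall wiring are illustrative assumptions, not the exact vmstorage code.

```go
// Hedged sketch: overriding the TSID cache size before opening the storage.
package main

import (
	"flag"
	"log"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)

var (
	storageDataPath = flag.String("storageDataPath", "victoria-metrics-data", "Path to storage data")
	tsidCacheSize   = flag.Int("storage.cacheSizeStorageTSID", 0, "Overrides the MetricName->TSID cache size; 0 keeps the default of ~37% of allowed memory")
)

func main() {
	flag.Parse()
	if *tsidCacheSize > 0 {
		storage.SetTSIDCacheSize(*tsidCacheSize) // must run before OpenStorage loads the cache
	}
	strg, err := storage.OpenStorage(*storageDataPath, 0, 0, 0)
	if err != nil {
		log.Fatalf("cannot open storage at %q: %s", *storageDataPath, err)
	}
	defer strg.MustClose()
}
```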
@ -725,6 +739,9 @@ func (s *Storage) mustRotateIndexDB() {
}

func (s *Storage) resetAndSaveTSIDCache() {
	// Reset cache and then store the reset cache on disk in order to prevent
	// from inconsistent behaviour after possible unclean shutdown.
	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1347
	s.tsidCache.Reset()
	s.mustSaveCache(s.tsidCache, "MetricName->TSID", "metricName_tsid")
}
@ -3,7 +3,7 @@ ARG GO_VERSION
RUN apt-get update && apt-get install -y git make wget binutils build-essential bzip2 cpp cpp-5 dpkg-dev fakeroot g++ g++-5 gcc gcc-5

RUN cd /usr/local &&\
wget https://dl.google.com/go/go1.17.3.linux-amd64.tar.gz &&\
wget https://dl.google.com/go/go$GO_VERSION.linux-amd64.tar.gz &&\
tar -zxvf go$GO_VERSION.linux-amd64.tar.gz && rm go$GO_VERSION.linux-amd64.tar.gz
ENV PATH="/usr/local/go/bin:${PATH}"
@ -1,4 +1,4 @@
GO_VERSION ?=1.17.3
GO_VERSION ?=1.17.7
SNAP_BUILDER_IMAGE := local/snap-builder:2.0.0-$(shell echo $(GO_VERSION) | tr :/ __)
7
vendor/cloud.google.com/go/iam/CHANGES.md
generated
vendored
|
@ -1,5 +1,12 @@
|
|||
# Changes
|
||||
|
||||
## [0.2.0](https://github.com/googleapis/google-cloud-go/compare/iam/v0.1.1...iam/v0.2.0) (2022-02-14)
|
||||
|
||||
|
||||
### Features
|
||||
|
||||
* **iam:** add file for tracking version ([17b36ea](https://github.com/googleapis/google-cloud-go/commit/17b36ead42a96b1a01105122074e65164357519e))
|
||||
|
||||
### [0.1.1](https://www.github.com/googleapis/google-cloud-go/compare/iam/v0.1.0...iam/v0.1.1) (2022-01-14)
|
||||
|
||||
|
||||
|
|
8
vendor/cloud.google.com/go/storage/CHANGES.md
generated
vendored
|
@ -1,6 +1,14 @@
|
|||
# Changes
|
||||
|
||||
|
||||
## [1.21.0](https://github.com/googleapis/google-cloud-go/compare/storage/v1.20.0...storage/v1.21.0) (2022-02-17)
|
||||
|
||||
|
||||
### Features
|
||||
|
||||
* **storage:** add better version metadata to calls ([#5507](https://github.com/googleapis/google-cloud-go/issues/5507)) ([13fe0bc](https://github.com/googleapis/google-cloud-go/commit/13fe0bc0d8acbffd46b59ab69b25449f1cbd6a88)), refs [#2749](https://github.com/googleapis/google-cloud-go/issues/2749)
|
||||
* **storage:** add Writer.ChunkRetryDeadline ([#5482](https://github.com/googleapis/google-cloud-go/issues/5482)) ([498a746](https://github.com/googleapis/google-cloud-go/commit/498a746769fa43958b92af8875b927879947128e))
|
||||
|
||||
## [1.20.0](https://www.github.com/googleapis/google-cloud-go/compare/storage/v1.19.0...storage/v1.20.0) (2022-02-04)
|
||||
|
||||
|
||||
|
|
2
vendor/cloud.google.com/go/storage/internal/apiv2/doc.go
generated
vendored
|
@ -84,7 +84,7 @@ import (
|
|||
type clientHookParams struct{}
|
||||
type clientHook func(context.Context, clientHookParams) ([]option.ClientOption, error)
|
||||
|
||||
const versionClient = "20220202"
|
||||
const versionClient = "20220216"
|
||||
|
||||
func insertMetadata(ctx context.Context, mds ...metadata.MD) context.Context {
|
||||
out, _ := metadata.FromOutgoingContext(ctx)
|
||||
|
|
18
vendor/cloud.google.com/go/storage/internal/version.go
generated
vendored
Normal file
|
@ -0,0 +1,18 @@
|
|||
// Copyright 2022 Google LLC
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package internal
|
||||
|
||||
// Version is the current tagged release of the library.
|
||||
const Version = "1.21.0"
|
5
vendor/cloud.google.com/go/storage/storage.go
generated
vendored
|
@ -40,6 +40,7 @@ import (
|
|||
"cloud.google.com/go/internal/optional"
|
||||
"cloud.google.com/go/internal/trace"
|
||||
"cloud.google.com/go/internal/version"
|
||||
"cloud.google.com/go/storage/internal"
|
||||
gapic "cloud.google.com/go/storage/internal/apiv2"
|
||||
"github.com/googleapis/gax-go/v2"
|
||||
"golang.org/x/oauth2/google"
|
||||
|
@ -69,7 +70,7 @@ var (
|
|||
errMethodNotValid = fmt.Errorf("storage: HTTP method should be one of %v", reflect.ValueOf(signedURLMethods).MapKeys())
|
||||
)
|
||||
|
||||
var userAgent = fmt.Sprintf("gcloud-golang-storage/%s", version.Repo)
|
||||
var userAgent = fmt.Sprintf("gcloud-golang-storage/%s", internal.Version)
|
||||
|
||||
const (
|
||||
// ScopeFullControl grants permissions to manage your
|
||||
|
@ -91,7 +92,7 @@ const (
|
|||
defaultConnPoolSize = 4
|
||||
)
|
||||
|
||||
var xGoogHeader = fmt.Sprintf("gl-go/%s gccl/%s", version.Go(), version.Repo)
|
||||
var xGoogHeader = fmt.Sprintf("gl-go/%s gccl/%s", version.Go(), internal.Version)
|
||||
|
||||
func setClientHeader(headers http.Header) {
|
||||
headers.Set("x-goog-api-client", xGoogHeader)
|
||||
|
|
21
vendor/cloud.google.com/go/storage/writer.go
generated
vendored
|
@ -21,6 +21,7 @@ import (
|
|||
"fmt"
|
||||
"io"
|
||||
"sync"
|
||||
"time"
|
||||
"unicode/utf8"
|
||||
|
||||
"github.com/golang/protobuf/proto"
|
||||
|
@ -76,6 +77,23 @@ type Writer struct {
|
|||
// ChunkSize must be set before the first Write call.
|
||||
ChunkSize int
|
||||
|
||||
// ChunkRetryDeadline sets a per-chunk retry deadline for multi-chunk
|
||||
// resumable uploads.
|
||||
//
|
||||
// For uploads of larger files, the Writer will attempt to retry if the
|
||||
// request to upload a particular chunk fails with a transient error.
|
||||
// If a single chunk has been attempting to upload for longer than this
|
||||
// deadline and the request fails, it will no longer be retried, and the error
|
||||
// will be returned to the caller. This is only applicable for files which are
|
||||
// large enough to require a multi-chunk resumable upload. The default value
|
||||
// is 32s. Users may want to pick a longer deadline if they are using larger
|
||||
// values for ChunkSize or if they expect to have a slow or unreliable
|
||||
// internet connection.
|
||||
//
|
||||
// To set a deadline on the entire upload, use context timeout or
|
||||
// cancellation.
|
||||
ChunkRetryDeadline time.Duration
|
||||
|
||||
// ProgressFunc can be used to monitor the progress of a large write.
|
||||
// operation. If ProgressFunc is not nil and writing requires multiple
|
||||
// calls to the underlying service (see
|
||||
|
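The new Writer.ChunkRetryDeadline field documented above bounds how long a single chunk of a multi-chunk resumable upload may keep retrying. A hedged usage sketch follows; the bucket and object names and the chosen values are placeholders, not taken from this repository.

```go
// Hedged sketch of setting the new ChunkRetryDeadline on a GCS upload.
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("storage.NewClient: %v", err)
	}
	defer client.Close()

	w := client.Bucket("my-bucket").Object("backups/part-0001").NewWriter(ctx)
	w.ChunkSize = 32 << 20                  // large chunks => multi-chunk resumable upload
	w.ChunkRetryDeadline = 90 * time.Second // give slow links more than the 32s default per chunk
	if _, err := w.Write([]byte("payload")); err != nil {
		log.Fatalf("Write: %v", err)
	}
	if err := w.Close(); err != nil {
		log.Fatalf("Close: %v", err)
	}
}
```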
@ -127,6 +145,9 @@ func (w *Writer) open() error {
|
|||
if c := attrs.ContentType; c != "" {
|
||||
mediaOpts = append(mediaOpts, googleapi.ContentType(c))
|
||||
}
|
||||
if w.ChunkRetryDeadline != 0 {
|
||||
mediaOpts = append(mediaOpts, googleapi.ChunkRetryDeadline(w.ChunkRetryDeadline))
|
||||
}
|
||||
|
||||
go func() {
|
||||
defer close(w.donec)
|
||||
|
|
6
vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
generated
vendored
|
@ -9817,6 +9817,9 @@ var awsPartition = partition{
|
|||
endpointKey{
|
||||
Region: "ap-southeast-2",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ap-southeast-3",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ca-central-1",
|
||||
}: endpoint{},
|
||||
|
@ -13547,6 +13550,9 @@ var awsPartition = partition{
|
|||
},
|
||||
"mq": service{
|
||||
Endpoints: serviceEndpoints{
|
||||
endpointKey{
|
||||
Region: "af-south-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ap-east-1",
|
||||
}: endpoint{},
|
||||
|
|
2
vendor/github.com/aws/aws-sdk-go/aws/version.go
generated
vendored
|
@ -5,4 +5,4 @@ package aws
|
|||
const SDKName = "aws-sdk-go"
|
||||
|
||||
// SDKVersion is the version of this SDK
|
||||
const SDKVersion = "1.42.52"
|
||||
const SDKVersion = "1.43.3"
|
||||
|
|
4
vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go
generated
vendored
|
@ -272,6 +272,9 @@ func convertType(v reflect.Value, tag reflect.StructTag) (str string, err error)
|
|||
|
||||
switch value := v.Interface().(type) {
|
||||
case string:
|
||||
if tag.Get("suppressedJSONValue") == "true" && tag.Get("location") == "header" {
|
||||
value = base64.StdEncoding.EncodeToString([]byte(value))
|
||||
}
|
||||
str = value
|
||||
case []byte:
|
||||
str = base64.StdEncoding.EncodeToString(value)
|
||||
|
@ -306,5 +309,6 @@ func convertType(v reflect.Value, tag reflect.StructTag) (str string, err error)
|
|||
err := fmt.Errorf("unsupported value for param %v (%s)", v.Interface(), v.Type())
|
||||
return "", err
|
||||
}
|
||||
|
||||
return str, nil
|
||||
}
|
||||
|
|
7
vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go
generated
vendored
|
@ -204,6 +204,13 @@ func unmarshalHeader(v reflect.Value, header string, tag reflect.StructTag) erro
|
|||
|
||||
switch v.Interface().(type) {
|
||||
case *string:
|
||||
if tag.Get("suppressedJSONValue") == "true" && tag.Get("location") == "header" {
|
||||
b, err := base64.StdEncoding.DecodeString(header)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decode JSONValue, %v", err)
|
||||
}
|
||||
header = string(b)
|
||||
}
|
||||
v.Set(reflect.ValueOf(&header))
|
||||
case []byte:
|
||||
b, err := base64.StdEncoding.DecodeString(header)
|
||||
|
|
2
vendor/github.com/aws/aws-sdk-go/service/s3/api.go
generated
vendored
|
@ -10535,7 +10535,7 @@ type SelectObjectContentEventStream struct {
|
|||
// (e.g. http.Response.Body), that will be closed when the stream Close method
|
||||
// is called.
|
||||
//
|
||||
// es := NewSelectObjectContentEventStream(func(o *SelectObjectContentEventStream{
|
||||
// es := NewSelectObjectContentEventStream(func(o *SelectObjectContentEventStream){
|
||||
// es.Reader = myMockStreamReader
|
||||
// es.StreamCloser = myMockStreamCloser
|
||||
// })
|
||||
|
|
28
vendor/github.com/klauspost/compress/README.md
generated
vendored
|
@ -17,6 +17,19 @@ This package provides various compression algorithms.
|
|||
|
||||
# changelog
|
||||
|
||||
* Feb 17, 2022 (v1.14.3)
|
||||
* flate: Improve fastest levels compression speed ~10% more throughput. [#482](https://github.com/klauspost/compress/pull/482) [#489](https://github.com/klauspost/compress/pull/489) [#490](https://github.com/klauspost/compress/pull/490) [#491](https://github.com/klauspost/compress/pull/491) [#494](https://github.com/klauspost/compress/pull/494) [#478](https://github.com/klauspost/compress/pull/478)
|
||||
* flate: Faster decompression speed, ~5-10%. [#483](https://github.com/klauspost/compress/pull/483)
|
||||
* s2: Faster compression with Go v1.18 and amd64 microarch level 3+. [#484](https://github.com/klauspost/compress/pull/484) [#486](https://github.com/klauspost/compress/pull/486)
|
||||
|
||||
* Jan 25, 2022 (v1.14.2)
|
||||
* zstd: improve header decoder by @dsnet [#476](https://github.com/klauspost/compress/pull/476)
|
||||
* zstd: Add bigger default blocks [#469](https://github.com/klauspost/compress/pull/469)
|
||||
* zstd: Remove unused decompression buffer [#470](https://github.com/klauspost/compress/pull/470)
|
||||
* zstd: Fix logically dead code by @ningmingxiao [#472](https://github.com/klauspost/compress/pull/472)
|
||||
* flate: Improve level 7-9 [#471](https://github.com/klauspost/compress/pull/471) [#473](https://github.com/klauspost/compress/pull/473)
|
||||
* zstd: Add noasm tag for xxhash [#475](https://github.com/klauspost/compress/pull/475)
|
||||
|
||||
* Jan 11, 2022 (v1.14.1)
|
||||
* s2: Add stream index in [#462](https://github.com/klauspost/compress/pull/462)
|
||||
* flate: Speed and efficiency improvements in [#439](https://github.com/klauspost/compress/pull/439) [#461](https://github.com/klauspost/compress/pull/461) [#455](https://github.com/klauspost/compress/pull/455) [#452](https://github.com/klauspost/compress/pull/452) [#458](https://github.com/klauspost/compress/pull/458)
|
||||
|
@ -53,6 +66,9 @@ This package provides various compression algorithms.
|
|||
* zstd: Detect short invalid signatures [#382](https://github.com/klauspost/compress/pull/382)
|
||||
* zstd: Spawn decoder goroutine only if needed. [#380](https://github.com/klauspost/compress/pull/380)
|
||||
|
||||
<details>
|
||||
<summary>See changes to v1.12.x</summary>
|
||||
|
||||
* May 25, 2021 (v1.12.3)
|
||||
* deflate: Better/faster Huffman encoding [#374](https://github.com/klauspost/compress/pull/374)
|
||||
* deflate: Allocate less for history. [#375](https://github.com/klauspost/compress/pull/375)
|
||||
|
@ -74,9 +90,10 @@ This package provides various compression algorithms.
|
|||
* s2c/s2d/s2sx: Always truncate when writing files [#352](https://github.com/klauspost/compress/pull/352)
|
||||
* zstd: Reduce memory usage further when using [WithLowerEncoderMem](https://pkg.go.dev/github.com/klauspost/compress/zstd#WithLowerEncoderMem) [#346](https://github.com/klauspost/compress/pull/346)
|
||||
* s2: Fix potential problem with amd64 assembly and profilers [#349](https://github.com/klauspost/compress/pull/349)
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>See changes prior to v1.12.1</summary>
|
||||
<summary>See changes to v1.11.x</summary>
|
||||
|
||||
* Mar 26, 2021 (v1.11.13)
|
||||
* zstd: Big speedup on small dictionary encodes [#344](https://github.com/klauspost/compress/pull/344) [#345](https://github.com/klauspost/compress/pull/345)
|
||||
|
@ -135,7 +152,7 @@ This package provides various compression algorithms.
|
|||
</details>
|
||||
|
||||
<details>
|
||||
<summary>See changes prior to v1.11.0</summary>
|
||||
<summary>See changes to v1.10.x</summary>
|
||||
|
||||
* July 8, 2020 (v1.10.11)
|
||||
* zstd: Fix extra block when compressing with ReadFrom. [#278](https://github.com/klauspost/compress/pull/278)
|
||||
|
@ -297,11 +314,6 @@ This package provides various compression algorithms.
|
|||
|
||||
# deflate usage
|
||||
|
||||
* [High Throughput Benchmark](http://blog.klauspost.com/go-gzipdeflate-benchmarks/).
|
||||
* [Small Payload/Webserver Benchmarks](http://blog.klauspost.com/gzip-performance-for-go-webservers/).
|
||||
* [Linear Time Compression](http://blog.klauspost.com/constant-time-gzipzip-compression/).
|
||||
* [Re-balancing Deflate Compression Levels](https://blog.klauspost.com/rebalancing-deflate-compression-levels/)
|
||||
|
||||
The packages are drop-in replacements for standard libraries. Simply replace the import path to use them:
|
||||
|
||||
| old import | new import | Documentation
|
||||
|
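The README hunk above keeps the "drop-in replacement" promise for the klauspost/compress packages: only the import path changes relative to the standard library. A minimal sketch:

```go
// Drop-in replacement example: swap the import path, keep the stdlib API.
package main

import (
	"bytes"
	"log"

	gzip "github.com/klauspost/compress/gzip" // instead of "compress/gzip"
)

func main() {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write([]byte("hello, compression")); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
	log.Printf("compressed to %d bytes", buf.Len())
}
```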
@ -323,6 +335,8 @@ Memory usage is typically 1MB for a Writer. stdlib is in the same range.
|
|||
If you expect to have a lot of concurrently allocated Writers consider using
|
||||
the stateless compress described below.
|
||||
|
||||
For compression performance, see: [this spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing).
|
||||
|
||||
# Stateless compression
|
||||
|
||||
This package offers stateless compression as a special option for gzip/deflate.
|
||||
|
|
2
vendor/github.com/klauspost/compress/flate/fast_encoder.go
generated
vendored
|
@ -179,7 +179,7 @@ func (e *fastGen) matchlen(s, t int32, src []byte) int32 {
|
|||
// matchlenLong will return the match length between offsets and t in src.
|
||||
// It is assumed that s > t, that t >=0 and s < len(src).
|
||||
func (e *fastGen) matchlenLong(s, t int32, src []byte) int32 {
|
||||
if debugDecode {
|
||||
if debugDeflate {
|
||||
if t >= s {
|
||||
panic(fmt.Sprint("t >=s:", t, s))
|
||||
}
|
||||
|
|
166
vendor/github.com/klauspost/compress/flate/huffman_bit_writer.go
generated
vendored
|
@ -8,6 +8,7 @@ import (
|
|||
"encoding/binary"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
)
|
||||
|
||||
const (
|
||||
|
@ -24,6 +25,10 @@ const (
|
|||
codegenCodeCount = 19
|
||||
badCode = 255
|
||||
|
||||
// maxPredefinedTokens is the maximum number of tokens
|
||||
// where we check if fixed size is smaller.
|
||||
maxPredefinedTokens = 250
|
||||
|
||||
// bufferFlushSize indicates the buffer size
|
||||
// after which bytes are flushed to the writer.
|
||||
// Should preferably be a multiple of 6, since
|
||||
|
@ -36,8 +41,11 @@ const (
|
|||
bufferSize = bufferFlushSize + 8
|
||||
)
|
||||
|
||||
// Minimum length code that emits bits.
|
||||
const lengthExtraBitsMinCode = 8
|
||||
|
||||
// The number of extra bits needed by length code X - LENGTH_CODES_START.
|
||||
var lengthExtraBits = [32]int8{
|
||||
var lengthExtraBits = [32]uint8{
|
||||
/* 257 */ 0, 0, 0,
|
||||
/* 260 */ 0, 0, 0, 0, 0, 1, 1, 1, 1, 2,
|
||||
/* 270 */ 2, 2, 2, 3, 3, 3, 3, 4, 4, 4,
|
||||
|
@ -51,6 +59,9 @@ var lengthBase = [32]uint8{
|
|||
64, 80, 96, 112, 128, 160, 192, 224, 255,
|
||||
}
|
||||
|
||||
// Minimum offset code that emits bits.
|
||||
const offsetExtraBitsMinCode = 4
|
||||
|
||||
// offset code word extra bits.
|
||||
var offsetExtraBits = [32]int8{
|
||||
0, 0, 0, 0, 1, 1, 2, 2, 3, 3,
|
||||
|
@ -78,10 +89,10 @@ func init() {
|
|||
|
||||
for i := range offsetCombined[:] {
|
||||
// Don't use extended window values...
|
||||
if offsetBase[i] > 0x006000 {
|
||||
if offsetExtraBits[i] == 0 || offsetBase[i] > 0x006000 {
|
||||
continue
|
||||
}
|
||||
offsetCombined[i] = uint32(offsetExtraBits[i])<<16 | (offsetBase[i])
|
||||
offsetCombined[i] = uint32(offsetExtraBits[i]) | (offsetBase[i] << 8)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -97,7 +108,7 @@ type huffmanBitWriter struct {
|
|||
// Data waiting to be written is bytes[0:nbytes]
|
||||
// and then the low nbits of bits.
|
||||
bits uint64
|
||||
nbits uint16
|
||||
nbits uint8
|
||||
nbytes uint8
|
||||
lastHuffMan bool
|
||||
literalEncoding *huffmanEncoder
|
||||
|
@ -215,7 +226,7 @@ func (w *huffmanBitWriter) write(b []byte) {
|
|||
_, w.err = w.writer.Write(b)
|
||||
}
|
||||
|
||||
func (w *huffmanBitWriter) writeBits(b int32, nb uint16) {
|
||||
func (w *huffmanBitWriter) writeBits(b int32, nb uint8) {
|
||||
w.bits |= uint64(b) << (w.nbits & 63)
|
||||
w.nbits += nb
|
||||
if w.nbits >= 48 {
|
||||
|
@ -571,7 +582,10 @@ func (w *huffmanBitWriter) writeBlock(tokens *tokens, eof bool, input []byte) {
|
|||
// Fixed Huffman baseline.
|
||||
var literalEncoding = fixedLiteralEncoding
|
||||
var offsetEncoding = fixedOffsetEncoding
|
||||
var size = w.fixedSize(extraBits)
|
||||
var size = math.MaxInt32
|
||||
if tokens.n < maxPredefinedTokens {
|
||||
size = w.fixedSize(extraBits)
|
||||
}
|
||||
|
||||
// Dynamic Huffman?
|
||||
var numCodegens int
|
||||
|
@ -672,19 +686,21 @@ func (w *huffmanBitWriter) writeBlockDynamic(tokens *tokens, eof bool, input []b
|
|||
size = reuseSize
|
||||
}
|
||||
|
||||
if preSize := w.fixedSize(extraBits) + 7; usePrefs && preSize < size {
|
||||
// Check if we get a reasonable size decrease.
|
||||
if storable && ssize <= size {
|
||||
w.writeStoredHeader(len(input), eof)
|
||||
w.writeBytes(input)
|
||||
if tokens.n < maxPredefinedTokens {
|
||||
if preSize := w.fixedSize(extraBits) + 7; usePrefs && preSize < size {
|
||||
// Check if we get a reasonable size decrease.
|
||||
if storable && ssize <= size {
|
||||
w.writeStoredHeader(len(input), eof)
|
||||
w.writeBytes(input)
|
||||
return
|
||||
}
|
||||
w.writeFixedHeader(eof)
|
||||
if !sync {
|
||||
tokens.AddEOB()
|
||||
}
|
||||
w.writeTokens(tokens.Slice(), fixedLiteralEncoding.codes, fixedOffsetEncoding.codes)
|
||||
return
|
||||
}
|
||||
w.writeFixedHeader(eof)
|
||||
if !sync {
|
||||
tokens.AddEOB()
|
||||
}
|
||||
w.writeTokens(tokens.Slice(), fixedLiteralEncoding.codes, fixedOffsetEncoding.codes)
|
||||
return
|
||||
}
|
||||
// Check if we get a reasonable size decrease.
|
||||
if storable && ssize <= size {
|
||||
|
@ -717,19 +733,21 @@ func (w *huffmanBitWriter) writeBlockDynamic(tokens *tokens, eof bool, input []b
|
|||
size, numCodegens = w.dynamicSize(w.literalEncoding, w.offsetEncoding, extraBits)
|
||||
|
||||
// Store predefined, if we don't get a reasonable improvement.
|
||||
if preSize := w.fixedSize(extraBits); usePrefs && preSize <= size {
|
||||
// Store bytes, if we don't get an improvement.
|
||||
if storable && ssize <= preSize {
|
||||
w.writeStoredHeader(len(input), eof)
|
||||
w.writeBytes(input)
|
||||
if tokens.n < maxPredefinedTokens {
|
||||
if preSize := w.fixedSize(extraBits); usePrefs && preSize <= size {
|
||||
// Store bytes, if we don't get an improvement.
|
||||
if storable && ssize <= preSize {
|
||||
w.writeStoredHeader(len(input), eof)
|
||||
w.writeBytes(input)
|
||||
return
|
||||
}
|
||||
w.writeFixedHeader(eof)
|
||||
if !sync {
|
||||
tokens.AddEOB()
|
||||
}
|
||||
w.writeTokens(tokens.Slice(), fixedLiteralEncoding.codes, fixedOffsetEncoding.codes)
|
||||
return
|
||||
}
|
||||
w.writeFixedHeader(eof)
|
||||
if !sync {
|
||||
tokens.AddEOB()
|
||||
}
|
||||
w.writeTokens(tokens.Slice(), fixedLiteralEncoding.codes, fixedOffsetEncoding.codes)
|
||||
return
|
||||
}
|
||||
|
||||
if storable && ssize <= size {
|
||||
|
@ -833,9 +851,9 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
|
|||
bits, nbits, nbytes := w.bits, w.nbits, w.nbytes
|
||||
|
||||
for _, t := range tokens {
|
||||
if t < matchType {
|
||||
if t < 256 {
|
||||
//w.writeCode(lits[t.literal()])
|
||||
c := lits[t.literal()]
|
||||
c := lits[t]
|
||||
bits |= uint64(c.code) << (nbits & 63)
|
||||
nbits += c.len
|
||||
if nbits >= 48 {
|
||||
|
@ -858,12 +876,12 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
|
|||
|
||||
// Write the length
|
||||
length := t.length()
|
||||
lengthCode := lengthCode(length)
|
||||
lengthCode := lengthCode(length) & 31
|
||||
if false {
|
||||
w.writeCode(lengths[lengthCode&31])
|
||||
w.writeCode(lengths[lengthCode])
|
||||
} else {
|
||||
// inlined
|
||||
c := lengths[lengthCode&31]
|
||||
c := lengths[lengthCode]
|
||||
bits |= uint64(c.code) << (nbits & 63)
|
||||
nbits += c.len
|
||||
if nbits >= 48 {
|
||||
|
@ -883,10 +901,10 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
|
|||
}
|
||||
}
|
||||
|
||||
extraLengthBits := uint16(lengthExtraBits[lengthCode&31])
|
||||
if extraLengthBits > 0 {
|
||||
if lengthCode >= lengthExtraBitsMinCode {
|
||||
extraLengthBits := lengthExtraBits[lengthCode]
|
||||
//w.writeBits(extraLength, extraLengthBits)
|
||||
extraLength := int32(length - lengthBase[lengthCode&31])
|
||||
extraLength := int32(length - lengthBase[lengthCode])
|
||||
bits |= uint64(extraLength) << (nbits & 63)
|
||||
nbits += extraLengthBits
|
||||
if nbits >= 48 {
|
||||
|
@ -907,10 +925,9 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
|
|||
}
|
||||
// Write the offset
|
||||
offset := t.offset()
|
||||
offsetCode := offset >> 16
|
||||
offset &= matchOffsetOnlyMask
|
||||
offsetCode := (offset >> 16) & 31
|
||||
if false {
|
||||
w.writeCode(offs[offsetCode&31])
|
||||
w.writeCode(offs[offsetCode])
|
||||
} else {
|
||||
// inlined
|
||||
c := offs[offsetCode]
|
||||
|
@ -932,11 +949,12 @@ func (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode)
|
|||
}
|
||||
}
|
||||
}
|
||||
offsetComb := offsetCombined[offsetCode]
|
||||
if offsetComb > 1<<16 {
|
||||
|
||||
if offsetCode >= offsetExtraBitsMinCode {
|
||||
offsetComb := offsetCombined[offsetCode]
|
||||
//w.writeBits(extraOffset, extraOffsetBits)
|
||||
bits |= uint64(offset-(offsetComb&0xffff)) << (nbits & 63)
|
||||
nbits += uint16(offsetComb >> 16)
|
||||
bits |= uint64((offset-(offsetComb>>8))&matchOffsetOnlyMask) << (nbits & 63)
|
||||
nbits += uint8(offsetComb)
|
||||
if nbits >= 48 {
|
||||
binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits)
|
||||
//*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits
|
||||
|
@ -1002,6 +1020,29 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
|
|||
// https://stackoverflow.com/a/25454430
|
||||
const guessHeaderSizeBits = 70 * 8
|
||||
histogram(input, w.literalFreq[:numLiterals], fill)
|
||||
ssize, storable := w.storedSize(input)
|
||||
if storable && len(input) > 1024 {
|
||||
// Quick check for incompressible content.
|
||||
abs := float64(0)
|
||||
avg := float64(len(input)) / 256
|
||||
max := float64(len(input) * 2)
|
||||
for _, v := range w.literalFreq[:256] {
|
||||
diff := float64(v) - avg
|
||||
abs += diff * diff
|
||||
if abs > max {
|
||||
break
|
||||
}
|
||||
}
|
||||
if abs < max {
|
||||
if debugDeflate {
|
||||
fmt.Println("stored", abs, "<", max)
|
||||
}
|
||||
// No chance we can compress this...
|
||||
w.writeStoredHeader(len(input), eof)
|
||||
w.writeBytes(input)
|
||||
return
|
||||
}
|
||||
}
|
||||
w.literalFreq[endBlockMarker] = 1
|
||||
w.tmpLitEncoding.generate(w.literalFreq[:numLiterals], 15)
|
||||
if fill {
|
||||
|
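The writeBlockHuff hunk above adds a quick incompressibility check: if the byte histogram is close to flat (the sum of squared deviations from the per-symbol average stays under 2x the input length), the block is stored raw instead of Huffman-coded. A hedged standalone sketch of that heuristic, with an invented function name:

```go
// Hedged sketch of the histogram-flatness check added above; the thresholds
// mirror the diff, but looksIncompressible itself is illustrative only.
func looksIncompressible(literalFreq *[256]uint16, inputLen int) bool {
	avg := float64(inputLen) / 256
	max := float64(inputLen * 2)
	abs := float64(0)
	for _, v := range literalFreq {
		diff := float64(v) - avg
		abs += diff * diff
		if abs > max {
			return false // uneven histogram: Huffman coding should pay off
		}
	}
	return abs < max // near-flat histogram: storing raw bytes is likely cheaper
}
```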
@ -1019,8 +1060,10 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
|
|||
estBits += estBits >> w.logNewTablePenalty
|
||||
|
||||
// Store bytes, if we don't get a reasonable improvement.
|
||||
ssize, storable := w.storedSize(input)
|
||||
if storable && ssize <= estBits {
|
||||
if debugDeflate {
|
||||
fmt.Println("stored,", ssize, "<=", estBits)
|
||||
}
|
||||
w.writeStoredHeader(len(input), eof)
|
||||
w.writeBytes(input)
|
||||
return
|
||||
|
@ -1031,7 +1074,7 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
|
|||
|
||||
if estBits < reuseSize {
|
||||
if debugDeflate {
|
||||
//fmt.Println("not reusing, reuse:", reuseSize/8, "> new:", estBits/8, "- header est:", w.lastHeader/8)
|
||||
fmt.Println("NOT reusing, reuse:", reuseSize/8, "> new:", estBits/8, "header est:", w.lastHeader/8, "bytes")
|
||||
}
|
||||
// We owe an EOB
|
||||
w.writeCode(w.literalEncoding.codes[endBlockMarker])
|
||||
|
@ -1065,6 +1108,9 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
|
|||
// Go 1.16 LOVES having these on stack. At least 1.5x the speed.
|
||||
bits, nbits, nbytes := w.bits, w.nbits, w.nbytes
|
||||
|
||||
if debugDeflate {
|
||||
count -= int(nbytes)*8 + int(nbits)
|
||||
}
|
||||
// Unroll, write 3 codes/loop.
|
||||
// Fastest number of unrolls.
|
||||
for len(input) > 3 {
|
||||
|
@ -1074,13 +1120,16 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
|
|||
binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits)
|
||||
bits >>= (n * 8) & 63
|
||||
nbits -= n * 8
|
||||
nbytes += uint8(n)
|
||||
nbytes += n
|
||||
}
|
||||
if nbytes >= bufferFlushSize {
|
||||
if w.err != nil {
|
||||
nbytes = 0
|
||||
return
|
||||
}
|
||||
if debugDeflate {
|
||||
count += int(nbytes) * 8
|
||||
}
|
||||
_, w.err = w.writer.Write(w.bytes[:nbytes])
|
||||
nbytes = 0
|
||||
}
|
||||
|
@ -1096,13 +1145,6 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
|
|||
|
||||
// Remaining...
|
||||
for _, t := range input {
|
||||
// Bitwriting inlined, ~30% speedup
|
||||
c := encoding[t]
|
||||
bits |= uint64(c.code) << (nbits & 63)
|
||||
nbits += c.len
|
||||
if debugDeflate {
|
||||
count += int(c.len)
|
||||
}
|
||||
if nbits >= 48 {
|
||||
binary.LittleEndian.PutUint64(w.bytes[nbytes:], bits)
|
||||
//*(*uint64)(unsafe.Pointer(&w.bytes[nbytes])) = bits
|
||||
|
@ -1114,17 +1156,33 @@ func (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte, sync bool) {
|
|||
nbytes = 0
|
||||
return
|
||||
}
|
||||
if debugDeflate {
|
||||
count += int(nbytes) * 8
|
||||
}
|
||||
_, w.err = w.writer.Write(w.bytes[:nbytes])
|
||||
nbytes = 0
|
||||
}
|
||||
}
|
||||
// Bitwriting inlined, ~30% speedup
|
||||
c := encoding[t]
|
||||
bits |= uint64(c.code) << (nbits & 63)
|
||||
nbits += c.len
|
||||
if debugDeflate {
|
||||
count += int(c.len)
|
||||
}
|
||||
}
|
||||
// Restore...
|
||||
w.bits, w.nbits, w.nbytes = bits, nbits, nbytes
|
||||
|
||||
if debugDeflate {
|
||||
fmt.Println("wrote", count/8, "bytes")
|
||||
nb := count + int(nbytes)*8 + int(nbits)
|
||||
fmt.Println("wrote", nb, "bits,", nb/8, "bytes.")
|
||||
}
|
||||
// Flush if needed to have space.
|
||||
if w.nbits >= 48 {
|
||||
w.writeOutBits()
|
||||
}
|
||||
|
||||
if eof || sync {
|
||||
w.writeCode(w.literalEncoding.codes[endBlockMarker])
|
||||
w.lastHeader = 0
|
||||
|
|
40
vendor/github.com/klauspost/compress/flate/huffman_code.go
generated
vendored
|
@ -17,7 +17,8 @@ const (
|
|||
|
||||
// hcode is a huffman code with a bit code and bit length.
|
||||
type hcode struct {
|
||||
code, len uint16
|
||||
code uint16
|
||||
len uint8
|
||||
}
|
||||
|
||||
type huffmanEncoder struct {
|
||||
|
@ -56,7 +57,7 @@ type levelInfo struct {
|
|||
}
|
||||
|
||||
// set sets the code and length of an hcode.
|
||||
func (h *hcode) set(code uint16, length uint16) {
|
||||
func (h *hcode) set(code uint16, length uint8) {
|
||||
h.len = length
|
||||
h.code = code
|
||||
}
|
||||
|
@ -80,7 +81,7 @@ func generateFixedLiteralEncoding() *huffmanEncoder {
|
|||
var ch uint16
|
||||
for ch = 0; ch < literalCount; ch++ {
|
||||
var bits uint16
|
||||
var size uint16
|
||||
var size uint8
|
||||
switch {
|
||||
case ch < 144:
|
||||
// size 8, 000110000 .. 10111111
|
||||
|
@ -99,7 +100,7 @@ func generateFixedLiteralEncoding() *huffmanEncoder {
|
|||
bits = ch + 192 - 280
|
||||
size = 8
|
||||
}
|
||||
codes[ch] = hcode{code: reverseBits(bits, byte(size)), len: size}
|
||||
codes[ch] = hcode{code: reverseBits(bits, size), len: size}
|
||||
}
|
||||
return h
|
||||
}
|
||||
|
@ -187,14 +188,19 @@ func (h *huffmanEncoder) bitCounts(list []literalNode, maxBits int32) []int32 {
|
|||
// of the level j ancestor.
|
||||
var leafCounts [maxBitsLimit][maxBitsLimit]int32
|
||||
|
||||
// Descending to only have 1 bounds check.
|
||||
l2f := int32(list[2].freq)
|
||||
l1f := int32(list[1].freq)
|
||||
l0f := int32(list[0].freq) + int32(list[1].freq)
|
||||
|
||||
for level := int32(1); level <= maxBits; level++ {
|
||||
// For every level, the first two items are the first two characters.
|
||||
// We initialize the levels as if we had already figured this out.
|
||||
levels[level] = levelInfo{
|
||||
level: level,
|
||||
lastFreq: int32(list[1].freq),
|
||||
nextCharFreq: int32(list[2].freq),
|
||||
nextPairFreq: int32(list[0].freq) + int32(list[1].freq),
|
||||
lastFreq: l1f,
|
||||
nextCharFreq: l2f,
|
||||
nextPairFreq: l0f,
|
||||
}
|
||||
leafCounts[level][level] = 2
|
||||
if level == 1 {
|
||||
|
@ -205,8 +211,8 @@ func (h *huffmanEncoder) bitCounts(list []literalNode, maxBits int32) []int32 {
|
|||
// We need a total of 2*n - 2 items at top level and have already generated 2.
|
||||
levels[maxBits].needed = 2*n - 4
|
||||
|
||||
level := maxBits
|
||||
for {
|
||||
level := uint32(maxBits)
|
||||
for level < 16 {
|
||||
l := &levels[level]
|
||||
if l.nextPairFreq == math.MaxInt32 && l.nextCharFreq == math.MaxInt32 {
|
||||
// We've run out of both leafs and pairs.
|
||||
|
@ -238,7 +244,13 @@ func (h *huffmanEncoder) bitCounts(list []literalNode, maxBits int32) []int32 {
|
|||
// more values in the level below
|
||||
l.lastFreq = l.nextPairFreq
|
||||
// Take leaf counts from the lower level, except counts[level] remains the same.
|
||||
copy(leafCounts[level][:level], leafCounts[level-1][:level])
|
||||
if true {
|
||||
save := leafCounts[level][level]
|
||||
leafCounts[level] = leafCounts[level-1]
|
||||
leafCounts[level][level] = save
|
||||
} else {
|
||||
copy(leafCounts[level][:level], leafCounts[level-1][:level])
|
||||
}
|
||||
levels[l.level-1].needed = 2
|
||||
}
|
||||
|
||||
|
@ -296,7 +308,7 @@ func (h *huffmanEncoder) assignEncodingAndSize(bitCount []int32, list []literalN
|
|||
|
||||
sortByLiteral(chunk)
|
||||
for _, node := range chunk {
|
||||
h.codes[node.literal] = hcode{code: reverseBits(code, uint8(n)), len: uint16(n)}
|
||||
h.codes[node.literal] = hcode{code: reverseBits(code, uint8(n)), len: uint8(n)}
|
||||
code++
|
||||
}
|
||||
list = list[0 : len(list)-int(bits)]
|
||||
|
@ -309,6 +321,7 @@ func (h *huffmanEncoder) assignEncodingAndSize(bitCount []int32, list []literalN
|
|||
// maxBits The maximum number of bits to use for any literal.
|
||||
func (h *huffmanEncoder) generate(freq []uint16, maxBits int32) {
|
||||
list := h.freqcache[:len(freq)+1]
|
||||
codes := h.codes[:len(freq)]
|
||||
// Number of non-zero literals
|
||||
count := 0
|
||||
// Set list to be the set of all non-zero literals and their frequencies
|
||||
|
@ -317,11 +330,10 @@ func (h *huffmanEncoder) generate(freq []uint16, maxBits int32) {
|
|||
list[count] = literalNode{uint16(i), f}
|
||||
count++
|
||||
} else {
|
||||
list[count] = literalNode{}
|
||||
h.codes[i].len = 0
|
||||
codes[i].len = 0
|
||||
}
|
||||
}
|
||||
list[len(freq)] = literalNode{}
|
||||
list[count] = literalNode{}
|
||||
|
||||
list = list[:count]
|
||||
if count <= 2 {
|
||||
|
|
222
vendor/github.com/klauspost/compress/flate/inflate.go
generated
vendored
|
@ -36,6 +36,13 @@ type lengthExtra struct {
|
|||
|
||||
var decCodeToLen = [32]lengthExtra{{length: 0x0, extra: 0x0}, {length: 0x1, extra: 0x0}, {length: 0x2, extra: 0x0}, {length: 0x3, extra: 0x0}, {length: 0x4, extra: 0x0}, {length: 0x5, extra: 0x0}, {length: 0x6, extra: 0x0}, {length: 0x7, extra: 0x0}, {length: 0x8, extra: 0x1}, {length: 0xa, extra: 0x1}, {length: 0xc, extra: 0x1}, {length: 0xe, extra: 0x1}, {length: 0x10, extra: 0x2}, {length: 0x14, extra: 0x2}, {length: 0x18, extra: 0x2}, {length: 0x1c, extra: 0x2}, {length: 0x20, extra: 0x3}, {length: 0x28, extra: 0x3}, {length: 0x30, extra: 0x3}, {length: 0x38, extra: 0x3}, {length: 0x40, extra: 0x4}, {length: 0x50, extra: 0x4}, {length: 0x60, extra: 0x4}, {length: 0x70, extra: 0x4}, {length: 0x80, extra: 0x5}, {length: 0xa0, extra: 0x5}, {length: 0xc0, extra: 0x5}, {length: 0xe0, extra: 0x5}, {length: 0xff, extra: 0x0}, {length: 0x0, extra: 0x0}, {length: 0x0, extra: 0x0}, {length: 0x0, extra: 0x0}}
|
||||
|
||||
var bitMask32 = [32]uint32{
|
||||
0, 1, 3, 7, 0xF, 0x1F, 0x3F, 0x7F, 0xFF,
|
||||
0x1FF, 0x3FF, 0x7FF, 0xFFF, 0x1FFF, 0x3FFF, 0x7FFF, 0xFFFF,
|
||||
0x1ffff, 0x3ffff, 0x7FFFF, 0xfFFFF, 0x1fFFFF, 0x3fFFFF, 0x7fFFFF, 0xffFFFF,
|
||||
0x1ffFFFF, 0x3ffFFFF, 0x7ffFFFF, 0xfffFFFF, 0x1fffFFFF, 0x3fffFFFF, 0x7fffFFFF,
|
||||
} // up to 32 bits
|
||||
|
||||
// Initialize the fixedHuffmanDecoder only once upon first use.
|
||||
var fixedOnce sync.Once
|
||||
var fixedHuffmanDecoder huffmanDecoder
|
||||
|
@ -559,221 +566,6 @@ func (f *decompressor) readHuffman() error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// Decode a single Huffman block from f.
|
||||
// hl and hd are the Huffman states for the lit/length values
|
||||
// and the distance values, respectively. If hd == nil, using the
|
||||
// fixed distance encoding associated with fixed Huffman blocks.
|
||||
func (f *decompressor) huffmanBlockGeneric() {
|
||||
const (
|
||||
stateInit = iota // Zero value must be stateInit
|
||||
stateDict
|
||||
)
|
||||
|
||||
switch f.stepState {
|
||||
case stateInit:
|
||||
goto readLiteral
|
||||
case stateDict:
|
||||
goto copyHistory
|
||||
}
|
||||
|
||||
readLiteral:
|
||||
// Read literal and/or (length, distance) according to RFC section 3.2.3.
|
||||
{
|
||||
var v int
|
||||
{
|
||||
// Inlined v, err := f.huffSym(f.hl)
|
||||
// Since a huffmanDecoder can be empty or be composed of a degenerate tree
|
||||
// with single element, huffSym must error on these two edge cases. In both
|
||||
// cases, the chunks slice will be 0 for the invalid sequence, leading it
|
||||
// satisfy the n == 0 check below.
|
||||
n := uint(f.hl.maxRead)
|
||||
// Optimization. Compiler isn't smart enough to keep f.b,f.nb in registers,
|
||||
// but is smart enough to keep local variables in registers, so use nb and b,
|
||||
// inline call to moreBits and reassign b,nb back to f on return.
|
||||
nb, b := f.nb, f.b
|
||||
for {
|
||||
for nb < n {
|
||||
c, err := f.r.ReadByte()
|
||||
if err != nil {
|
||||
f.b = b
|
||||
f.nb = nb
|
||||
f.err = noEOF(err)
|
||||
return
|
||||
}
|
||||
f.roffset++
|
||||
b |= uint32(c) << (nb & regSizeMaskUint32)
|
||||
nb += 8
|
||||
}
|
||||
chunk := f.hl.chunks[b&(huffmanNumChunks-1)]
|
||||
n = uint(chunk & huffmanCountMask)
|
||||
if n > huffmanChunkBits {
|
||||
chunk = f.hl.links[chunk>>huffmanValueShift][(b>>huffmanChunkBits)&f.hl.linkMask]
|
||||
n = uint(chunk & huffmanCountMask)
|
||||
}
|
||||
if n <= nb {
|
||||
if n == 0 {
|
||||
f.b = b
|
||||
f.nb = nb
|
||||
if debugDecode {
|
||||
fmt.Println("huffsym: n==0")
|
||||
}
|
||||
f.err = CorruptInputError(f.roffset)
|
||||
return
|
||||
}
|
||||
f.b = b >> (n & regSizeMaskUint32)
|
||||
f.nb = nb - n
|
||||
v = int(chunk >> huffmanValueShift)
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var n uint // number of bits extra
|
||||
var length int
|
||||
var err error
|
||||
switch {
|
||||
case v < 256:
|
||||
f.dict.writeByte(byte(v))
|
||||
if f.dict.availWrite() == 0 {
|
||||
f.toRead = f.dict.readFlush()
|
||||
f.step = (*decompressor).huffmanBlockGeneric
|
||||
f.stepState = stateInit
|
||||
return
|
||||
}
|
||||
goto readLiteral
|
||||
case v == 256:
|
||||
f.finishBlock()
|
||||
return
|
||||
// otherwise, reference to older data
|
||||
case v < 265:
|
||||
length = v - (257 - 3)
|
||||
n = 0
|
||||
case v < 269:
|
||||
length = v*2 - (265*2 - 11)
|
||||
n = 1
|
||||
case v < 273:
|
||||
length = v*4 - (269*4 - 19)
|
||||
n = 2
|
||||
case v < 277:
|
||||
length = v*8 - (273*8 - 35)
|
||||
n = 3
|
||||
case v < 281:
|
||||
length = v*16 - (277*16 - 67)
|
||||
n = 4
|
||||
case v < 285:
|
||||
length = v*32 - (281*32 - 131)
|
||||
n = 5
|
||||
case v < maxNumLit:
|
||||
length = 258
|
||||
n = 0
|
||||
default:
|
||||
if debugDecode {
|
||||
fmt.Println(v, ">= maxNumLit")
|
||||
}
|
||||
f.err = CorruptInputError(f.roffset)
|
||||
return
|
||||
}
|
||||
if n > 0 {
|
||||
for f.nb < n {
|
||||
if err = f.moreBits(); err != nil {
|
||||
if debugDecode {
|
||||
fmt.Println("morebits n>0:", err)
|
||||
}
|
||||
f.err = err
|
||||
return
|
||||
}
|
||||
}
|
||||
length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
|
||||
f.b >>= n & regSizeMaskUint32
|
||||
f.nb -= n
|
||||
}
|
||||
|
||||
var dist uint32
|
||||
if f.hd == nil {
|
||||
for f.nb < 5 {
|
||||
if err = f.moreBits(); err != nil {
|
||||
if debugDecode {
|
||||
fmt.Println("morebits f.nb<5:", err)
|
||||
}
|
||||
f.err = err
|
||||
return
|
||||
}
|
||||
}
|
||||
dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3)))
|
||||
f.b >>= 5
|
||||
f.nb -= 5
|
||||
} else {
|
||||
sym, err := f.huffSym(f.hd)
|
||||
if err != nil {
|
||||
if debugDecode {
|
||||
fmt.Println("huffsym:", err)
|
||||
}
|
||||
f.err = err
|
||||
return
|
||||
}
|
||||
dist = uint32(sym)
|
||||
}
|
||||
|
||||
switch {
|
||||
case dist < 4:
|
||||
dist++
|
||||
case dist < maxNumDist:
|
||||
nb := uint(dist-2) >> 1
|
||||
// have 1 bit in bottom of dist, need nb more.
|
||||
extra := (dist & 1) << (nb & regSizeMaskUint32)
|
||||
for f.nb < nb {
|
||||
if err = f.moreBits(); err != nil {
|
||||
if debugDecode {
|
||||
fmt.Println("morebits f.nb<nb:", err)
|
||||
}
|
||||
f.err = err
|
||||
return
|
||||
}
|
||||
}
|
||||
extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1)
|
||||
f.b >>= nb & regSizeMaskUint32
|
||||
f.nb -= nb
|
||||
dist = 1<<((nb+1)&regSizeMaskUint32) + 1 + extra
|
||||
default:
|
||||
if debugDecode {
|
||||
fmt.Println("dist too big:", dist, maxNumDist)
|
||||
}
|
||||
f.err = CorruptInputError(f.roffset)
|
||||
return
|
||||
}
|
||||
|
||||
// No check on length; encoding can be prescient.
|
||||
if dist > uint32(f.dict.histSize()) {
|
||||
if debugDecode {
|
||||
fmt.Println("dist > f.dict.histSize():", dist, f.dict.histSize())
|
||||
}
|
||||
f.err = CorruptInputError(f.roffset)
|
||||
return
|
||||
}
|
||||
|
||||
f.copyLen, f.copyDist = length, int(dist)
|
||||
goto copyHistory
|
||||
}
|
||||
|
||||
copyHistory:
|
||||
// Perform a backwards copy according to RFC section 3.2.3.
|
||||
{
|
||||
cnt := f.dict.tryWriteCopy(f.copyDist, f.copyLen)
|
||||
if cnt == 0 {
|
||||
cnt = f.dict.writeCopy(f.copyDist, f.copyLen)
|
||||
}
|
||||
f.copyLen -= cnt
|
||||
|
||||
if f.dict.availWrite() == 0 || f.copyLen > 0 {
|
||||
f.toRead = f.dict.readFlush()
|
||||
f.step = (*decompressor).huffmanBlockGeneric // We need to continue this work
|
||||
f.stepState = stateDict
|
||||
return
|
||||
}
|
||||
goto readLiteral
|
||||
}
|
||||
}
|
||||
|
||||
// Copy a single uncompressed data block from input to output.
|
||||
func (f *decompressor) dataBlock() {
|
||||
// Uncompressed.
|
||||
|
|
659
vendor/github.com/klauspost/compress/flate/inflate_gen.go
generated
vendored
File diff suppressed because it is too large
56
vendor/github.com/klauspost/compress/flate/level1.go
generated
vendored
|
@ -1,6 +1,10 @@
|
|||
package flate
|
||||
|
||||
import "fmt"
|
||||
import (
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"math/bits"
|
||||
)
|
||||
|
||||
// fastGen maintains the table for matches,
|
||||
// and the previous byte block for level 2.
|
||||
|
@ -116,7 +120,32 @@ func (e *fastEncL1) Encode(dst *tokens, src []byte) {
|
|||
|
||||
// Extend the 4-byte match as long as possible.
|
||||
t := candidate.offset - e.cur
|
||||
l := e.matchlenLong(s+4, t+4, src) + 4
|
||||
var l = int32(4)
|
||||
if false {
|
||||
l = e.matchlenLong(s+4, t+4, src) + 4
|
||||
} else {
|
||||
// inlined:
|
||||
a := src[s+4:]
|
||||
b := src[t+4:]
|
||||
for len(a) >= 8 {
|
||||
if diff := binary.LittleEndian.Uint64(a) ^ binary.LittleEndian.Uint64(b); diff != 0 {
|
||||
l += int32(bits.TrailingZeros64(diff) >> 3)
|
||||
break
|
||||
}
|
||||
l += 8
|
||||
a = a[8:]
|
||||
b = b[8:]
|
||||
}
|
||||
if len(a) < 8 {
|
||||
b = b[:len(a)]
|
||||
for i := range a {
|
||||
if a[i] != b[i] {
|
||||
break
|
||||
}
|
||||
l++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Extend backwards
|
||||
for t > 0 && s > nextEmit && src[t-1] == src[s-1] {
|
||||
|
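The level1.go hunk above inlines the match-length loop: compare eight bytes at a time with an XOR, and use the count of trailing zero bits to find the first differing byte. A standalone version of the same trick is sketched below; matchLen is a generic helper, not the flate-internal matchlenLong method.

```go
// Standalone sketch of the 8-bytes-at-a-time match-length trick inlined above.
package main

import (
	"encoding/binary"
	"fmt"
	"math/bits"
)

// matchLen returns the number of leading bytes at which a and b agree.
func matchLen(a, b []byte) int {
	n := 0
	for len(a) >= 8 && len(b) >= 8 {
		if diff := binary.LittleEndian.Uint64(a) ^ binary.LittleEndian.Uint64(b); diff != 0 {
			return n + bits.TrailingZeros64(diff)>>3 // first differing byte within this word
		}
		n += 8
		a, b = a[8:], b[8:]
	}
	for i := range a {
		if i >= len(b) || a[i] != b[i] {
			break
		}
		n++
	}
	return n
}

func main() {
	fmt.Println(matchLen([]byte("abcdefgh12345"), []byte("abcdefgh12999"))) // 10
}
```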
@ -129,7 +158,28 @@ func (e *fastEncL1) Encode(dst *tokens, src []byte) {
|
|||
}
|
||||
|
||||
// Save the match found
|
||||
dst.AddMatchLong(l, uint32(s-t-baseMatchOffset))
|
||||
if false {
|
||||
dst.AddMatchLong(l, uint32(s-t-baseMatchOffset))
|
||||
} else {
|
||||
// Inlined...
|
||||
xoffset := uint32(s - t - baseMatchOffset)
|
||||
xlength := l
|
||||
oc := offsetCode(xoffset)
|
||||
xoffset |= oc << 16
|
||||
for xlength > 0 {
|
||||
xl := xlength
|
||||
if xl > 258 {
|
||||
// We need to have at least baseMatchLength left over for next loop.
|
||||
xl = 258 - baseMatchLength
|
||||
}
|
||||
xlength -= xl
|
||||
xl -= baseMatchLength
|
||||
dst.extraHist[lengthCodes1[uint8(xl)]]++
|
||||
dst.offHist[oc]++
|
||||
dst.tokens[dst.n] = token(matchType | uint32(xl)<<lengthShift | xoffset)
|
||||
dst.n++
|
||||
}
|
||||
}
|
||||
s += l
|
||||
nextEmit = s
|
||||
if nextS >= s {
|
||||
|
|
45
vendor/github.com/klauspost/compress/flate/level3.go
generated
vendored
|
@ -5,7 +5,7 @@ import "fmt"
|
|||
// fastEncL3
|
||||
type fastEncL3 struct {
|
||||
fastGen
|
||||
table [tableSize]tableEntryPrev
|
||||
table [1 << 16]tableEntryPrev
|
||||
}
|
||||
|
||||
// Encode uses a similar algorithm to level 2, will check up to two candidates.
|
||||
|
@ -13,6 +13,8 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
const (
|
||||
inputMargin = 8 - 1
|
||||
minNonLiteralBlockSize = 1 + 1 + inputMargin
|
||||
tableBits = 16
|
||||
tableSize = 1 << tableBits
|
||||
)
|
||||
|
||||
if debugDeflate && e.cur < 0 {
|
||||
|
@ -73,7 +75,7 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
nextS := s
|
||||
var candidate tableEntry
|
||||
for {
|
||||
nextHash := hash(cv)
|
||||
nextHash := hash4u(cv, tableBits)
|
||||
s = nextS
|
||||
nextS = s + 1 + (s-nextEmit)>>skipLog
|
||||
if nextS > sLimit {
|
||||
|
@ -156,7 +158,7 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
// Index first pair after match end.
|
||||
if int(t+4) < len(src) && t > 0 {
|
||||
cv := load3232(src, t)
|
||||
nextHash := hash(cv)
|
||||
nextHash := hash4u(cv, tableBits)
|
||||
e.table[nextHash] = tableEntryPrev{
|
||||
Prev: e.table[nextHash].Cur,
|
||||
Cur: tableEntry{offset: e.cur + t},
|
||||
|
@ -165,30 +167,31 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
goto emitRemainder
|
||||
}
|
||||
|
||||
// We could immediately start working at s now, but to improve
|
||||
// compression we first update the hash table at s-3 to s.
|
||||
x := load6432(src, s-3)
|
||||
prevHash := hash(uint32(x))
|
||||
e.table[prevHash] = tableEntryPrev{
|
||||
Prev: e.table[prevHash].Cur,
|
||||
Cur: tableEntry{offset: e.cur + s - 3},
|
||||
// Store every 5th hash in-between.
|
||||
for i := s - l + 2; i < s-5; i += 5 {
|
||||
nextHash := hash4u(load3232(src, i), tableBits)
|
||||
e.table[nextHash] = tableEntryPrev{
|
||||
Prev: e.table[nextHash].Cur,
|
||||
Cur: tableEntry{offset: e.cur + i}}
|
||||
}
|
||||
x >>= 8
|
||||
prevHash = hash(uint32(x))
|
||||
// We could immediately start working at s now, but to improve
|
||||
// compression we first update the hash table at s-2 to s.
|
||||
x := load6432(src, s-2)
|
||||
prevHash := hash4u(uint32(x), tableBits)
|
||||
|
||||
e.table[prevHash] = tableEntryPrev{
|
||||
Prev: e.table[prevHash].Cur,
|
||||
Cur: tableEntry{offset: e.cur + s - 2},
|
||||
}
|
||||
x >>= 8
|
||||
prevHash = hash(uint32(x))
|
||||
prevHash = hash4u(uint32(x), tableBits)
|
||||
|
||||
e.table[prevHash] = tableEntryPrev{
|
||||
Prev: e.table[prevHash].Cur,
|
||||
Cur: tableEntry{offset: e.cur + s - 1},
|
||||
}
|
||||
x >>= 8
|
||||
currHash := hash(uint32(x))
|
||||
currHash := hash4u(uint32(x), tableBits)
|
||||
candidates := e.table[currHash]
|
||||
cv = uint32(x)
|
||||
e.table[currHash] = tableEntryPrev{
|
||||
|
@ -200,15 +203,15 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
candidate = candidates.Cur
|
||||
minOffset := e.cur + s - (maxMatchOffset - 4)
|
||||
|
||||
if candidate.offset > minOffset && cv != load3232(src, candidate.offset-e.cur) {
|
||||
// We only check if value mismatches.
|
||||
// Offset will always be invalid in other cases.
|
||||
if candidate.offset > minOffset {
|
||||
if cv == load3232(src, candidate.offset-e.cur) {
|
||||
// Found a match...
|
||||
continue
|
||||
}
|
||||
candidate = candidates.Prev
|
||||
if candidate.offset > minOffset && cv == load3232(src, candidate.offset-e.cur) {
|
||||
offset := s - (candidate.offset - e.cur)
|
||||
if offset <= maxMatchOffset {
|
||||
continue
|
||||
}
|
||||
// Match at prev...
|
||||
continue
|
||||
}
|
||||
}
|
||||
cv = uint32(x >> 8)
|
||||
|
|
17
vendor/github.com/klauspost/compress/flate/token.go
generated
vendored
|
@ -13,11 +13,10 @@ import (
|
|||
)
|
||||
|
||||
const (
|
||||
// From top
|
||||
// 2 bits: type 0 = literal 1=EOF 2=Match 3=Unused
|
||||
// 8 bits: xlength = length - MIN_MATCH_LENGTH
|
||||
// 5 bits offsetcode
|
||||
// 16 bits xoffset = offset - MIN_OFFSET_SIZE, or literal
|
||||
// bits 0-16 xoffset = offset - MIN_OFFSET_SIZE, or literal - 16 bits
|
||||
// bits 16-22 offsetcode - 5 bits
|
||||
// bits 22-30 xlength = length - MIN_MATCH_LENGTH - 8 bits
|
||||
// bits 30-32 type 0 = literal 1=EOF 2=Match 3=Unused - 2 bits
|
||||
lengthShift = 22
|
||||
offsetMask = 1<<lengthShift - 1
|
||||
typeMask = 3 << 30
|
||||
|
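The rewritten comment above spells out the 32-bit token layout: the offset field (offset minus MIN_OFFSET_SIZE, with the offset code packed just above bit 16) in the low 22 bits, the biased length in bits 22-29, and the 2-bit type on top. A hedged sketch of packing and unpacking under that layout follows; packToken and the example values are illustrative, not the flate package's internal API.

```go
// Hedged sketch of the token bit layout described above.
package main

import "fmt"

const (
	lengthShift = 22
	offsetMask  = 1<<lengthShift - 1
	typeMask    = 3 << 30
)

type token uint32

// packToken packs a 2-bit type, an 8-bit biased length and a 22-bit offset
// field (low 16 bits: offset - MIN_OFFSET_SIZE, offset code just above them).
func packToken(typ, xlength, xoffset uint32) token {
	return token(typ<<30 | xlength<<lengthShift | xoffset)
}

func (t token) offset() uint32 { return uint32(t) & offsetMask }
func (t token) length() uint8  { return uint8(t >> lengthShift) }
func (t token) typ() uint32    { return uint32(t) & typeMask }

func main() {
	// Type 2 is listed as "Match" in the comment above; length 17, offset code 5, offset 255.
	t := packToken(2, 17, 5<<16|255)
	fmt.Printf("type=%#x length=%d offset=%#x\n", t.typ(), t.length(), t.offset())
}
```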
@ -276,7 +275,7 @@ func (t *tokens) AddMatch(xlength uint32, xoffset uint32) {
|
|||
xoffset |= oCode << 16
|
||||
|
||||
t.extraHist[lengthCodes1[uint8(xlength)]]++
|
||||
t.offHist[oCode]++
|
||||
t.offHist[oCode&31]++
|
||||
t.tokens[t.n] = token(matchType | xlength<<lengthShift | xoffset)
|
||||
t.n++
|
||||
}
|
||||
|
@ -300,7 +299,7 @@ func (t *tokens) AddMatchLong(xlength int32, xoffset uint32) {
|
|||
xlength -= xl
|
||||
xl -= baseMatchLength
|
||||
t.extraHist[lengthCodes1[uint8(xl)]]++
|
||||
t.offHist[oc]++
|
||||
t.offHist[oc&31]++
|
||||
t.tokens[t.n] = token(matchType | uint32(xl)<<lengthShift | xoffset)
|
||||
t.n++
|
||||
}
|
||||
|
@ -356,8 +355,8 @@ func (t token) offset() uint32 { return uint32(t) & offsetMask }
|
|||
|
||||
func (t token) length() uint8 { return uint8(t >> lengthShift) }
|
||||
|
||||
// The code is never more than 8 bits, but is returned as uint32 for convenience.
|
||||
func lengthCode(len uint8) uint32 { return uint32(lengthCodes[len]) }
|
||||
// Convert length to code.
|
||||
func lengthCode(len uint8) uint8 { return lengthCodes[len] }
|
||||
|
||||
// Returns the offset code corresponding to a specific offset
|
||||
func offsetCode(off uint32) uint32 {
|
||||
|
|
321
vendor/github.com/klauspost/compress/huff0/decompress.go
generated
vendored
|
@ -4,6 +4,7 @@ import (
|
|||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"sync"
|
||||
|
||||
"github.com/klauspost/compress/fse"
|
||||
)
|
||||
|
@ -216,6 +217,7 @@ func (s *Scratch) Decoder() *Decoder {
|
|||
return &Decoder{
|
||||
dt: s.dt,
|
||||
actualTableLog: s.actualTableLog,
|
||||
bufs: &s.decPool,
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -223,6 +225,15 @@ func (s *Scratch) Decoder() *Decoder {
|
|||
type Decoder struct {
|
||||
dt dTable
|
||||
actualTableLog uint8
|
||||
bufs *sync.Pool
|
||||
}
|
||||
|
||||
func (d *Decoder) buffer() *[4][256]byte {
|
||||
buf, ok := d.bufs.Get().(*[4][256]byte)
|
||||
if ok {
|
||||
return buf
|
||||
}
|
||||
return &[4][256]byte{}
|
||||
}
|
||||
|
||||
// Decompress1X will decompress a 1X encoded stream.
|
||||
|
@ -249,7 +260,8 @@ func (d *Decoder) Decompress1X(dst, src []byte) ([]byte, error) {
|
|||
dt := d.dt.single[:tlSize]
|
||||
|
||||
// Use temp table to avoid bound checks/append penalty.
|
||||
var buf [256]byte
|
||||
bufs := d.buffer()
|
||||
buf := &bufs[0]
|
||||
var off uint8
|
||||
|
||||
for br.off >= 8 {
|
||||
|
@ -277,6 +289,7 @@ func (d *Decoder) Decompress1X(dst, src []byte) ([]byte, error) {
|
|||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
br.close()
|
||||
d.bufs.Put(bufs)
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
dst = append(dst, buf[:]...)
|
||||
|
@ -284,6 +297,7 @@ func (d *Decoder) Decompress1X(dst, src []byte) ([]byte, error) {
|
|||
}
|
||||
|
||||
if len(dst)+int(off) > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -310,6 +324,7 @@ func (d *Decoder) Decompress1X(dst, src []byte) ([]byte, error) {
|
|||
}
|
||||
}
|
||||
if len(dst) >= maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -319,6 +334,7 @@ func (d *Decoder) Decompress1X(dst, src []byte) ([]byte, error) {
|
|||
bitsLeft -= nBits
|
||||
dst = append(dst, uint8(v.entry>>8))
|
||||
}
|
||||
d.bufs.Put(bufs)
|
||||
return dst, br.close()
|
||||
}
|
||||
|
||||
|
@ -341,7 +357,8 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
dt := d.dt.single[:256]
|
||||
|
||||
// Use temp table to avoid bound checks/append penalty.
|
||||
var buf [256]byte
|
||||
bufs := d.buffer()
|
||||
buf := &bufs[0]
|
||||
var off uint8
|
||||
|
||||
switch d.actualTableLog {
|
||||
|
@ -369,6 +386,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
br.close()
|
||||
d.bufs.Put(bufs)
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
dst = append(dst, buf[:]...)
|
||||
|
@ -398,6 +416,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
br.close()
|
||||
d.bufs.Put(bufs)
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
dst = append(dst, buf[:]...)
|
||||
|
@ -426,6 +445,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
off += 4
|
||||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -455,6 +475,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
off += 4
|
||||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -484,6 +505,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
off += 4
|
||||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -513,6 +535,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
off += 4
|
||||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -542,6 +565,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
off += 4
|
||||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -571,6 +595,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
off += 4
|
||||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -578,10 +603,12 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
}
|
||||
}
|
||||
default:
|
||||
d.bufs.Put(bufs)
|
||||
return nil, fmt.Errorf("invalid tablelog: %d", d.actualTableLog)
|
||||
}
|
||||
|
||||
if len(dst)+int(off) > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -601,6 +628,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
}
|
||||
if len(dst) >= maxDecodedSize {
|
||||
br.close()
|
||||
d.bufs.Put(bufs)
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
v := dt[br.peekByteFast()>>shift]
|
||||
|
@ -609,6 +637,7 @@ func (d *Decoder) decompress1X8Bit(dst, src []byte) ([]byte, error) {
|
|||
bitsLeft -= int8(nBits)
|
||||
dst = append(dst, uint8(v.entry>>8))
|
||||
}
|
||||
d.bufs.Put(bufs)
|
||||
return dst, br.close()
|
||||
}
|
||||
|
||||
|
@ -628,7 +657,8 @@ func (d *Decoder) decompress1X8BitExactly(dst, src []byte) ([]byte, error) {
|
|||
dt := d.dt.single[:256]
|
||||
|
||||
// Use temp table to avoid bound checks/append penalty.
|
||||
var buf [256]byte
|
||||
bufs := d.buffer()
|
||||
buf := &bufs[0]
|
||||
var off uint8
|
||||
|
||||
const shift = 56
|
||||
|
@ -655,6 +685,7 @@ func (d *Decoder) decompress1X8BitExactly(dst, src []byte) ([]byte, error) {
|
|||
off += 4
|
||||
if off == 0 {
|
||||
if len(dst)+256 > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -663,6 +694,7 @@ func (d *Decoder) decompress1X8BitExactly(dst, src []byte) ([]byte, error) {
|
|||
}
|
||||
|
||||
if len(dst)+int(off) > maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -679,6 +711,7 @@ func (d *Decoder) decompress1X8BitExactly(dst, src []byte) ([]byte, error) {
|
|||
}
|
||||
}
|
||||
if len(dst) >= maxDecodedSize {
|
||||
d.bufs.Put(bufs)
|
||||
br.close()
|
||||
return nil, ErrMaxDecodedSizeExceeded
|
||||
}
|
||||
|
@ -688,6 +721,7 @@ func (d *Decoder) decompress1X8BitExactly(dst, src []byte) ([]byte, error) {
|
|||
bitsLeft -= int8(nBits)
|
||||
dst = append(dst, uint8(v.entry>>8))
|
||||
}
|
||||
d.bufs.Put(bufs)
|
||||
return dst, br.close()
|
||||
}
|
||||
|
||||
|
@ -735,12 +769,12 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) {
|
|||
single := d.dt.single[:tlSize]
|
||||
|
||||
// Use temp table to avoid bound checks/append penalty.
|
||||
var buf [256]byte
|
||||
buf := d.buffer()
|
||||
var off uint8
|
||||
var decoded int
|
||||
|
||||
// Decode 2 values from each decoder/loop.
|
||||
const bufoff = 256 / 4
|
||||
const bufoff = 256
|
||||
for {
|
||||
if br[0].off < 4 || br[1].off < 4 || br[2].off < 4 || br[3].off < 4 {
|
||||
break
|
||||
|
@ -758,8 +792,8 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) {
|
|||
v2 := single[val2&tlMask]
|
||||
br[stream].advance(uint8(v.entry))
|
||||
br[stream2].advance(uint8(v2.entry))
|
||||
buf[off+bufoff*stream] = uint8(v.entry >> 8)
|
||||
buf[off+bufoff*stream2] = uint8(v2.entry >> 8)
|
||||
buf[stream][off] = uint8(v.entry >> 8)
|
||||
buf[stream2][off] = uint8(v2.entry >> 8)
|
||||
|
||||
val = br[stream].peekBitsFast(d.actualTableLog)
|
||||
val2 = br[stream2].peekBitsFast(d.actualTableLog)
|
||||
|
@ -767,8 +801,8 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) {
|
|||
v2 = single[val2&tlMask]
|
||||
br[stream].advance(uint8(v.entry))
|
||||
br[stream2].advance(uint8(v2.entry))
|
||||
buf[off+bufoff*stream+1] = uint8(v.entry >> 8)
|
||||
buf[off+bufoff*stream2+1] = uint8(v2.entry >> 8)
|
||||
buf[stream][off+1] = uint8(v.entry >> 8)
|
||||
buf[stream2][off+1] = uint8(v2.entry >> 8)
|
||||
}
|
||||
|
||||
{
|
||||
|
@ -783,8 +817,8 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) {
|
|||
v2 := single[val2&tlMask]
|
||||
br[stream].advance(uint8(v.entry))
|
||||
br[stream2].advance(uint8(v2.entry))
|
||||
buf[off+bufoff*stream] = uint8(v.entry >> 8)
|
||||
buf[off+bufoff*stream2] = uint8(v2.entry >> 8)
|
||||
buf[stream][off] = uint8(v.entry >> 8)
|
||||
buf[stream2][off] = uint8(v2.entry >> 8)
|
||||
|
||||
val = br[stream].peekBitsFast(d.actualTableLog)
|
||||
val2 = br[stream2].peekBitsFast(d.actualTableLog)
|
||||
|
@ -792,25 +826,26 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) {
|
|||
v2 = single[val2&tlMask]
|
||||
br[stream].advance(uint8(v.entry))
|
||||
br[stream2].advance(uint8(v2.entry))
|
||||
buf[off+bufoff*stream+1] = uint8(v.entry >> 8)
|
||||
buf[off+bufoff*stream2+1] = uint8(v2.entry >> 8)
|
||||
buf[stream][off+1] = uint8(v.entry >> 8)
|
||||
buf[stream2][off+1] = uint8(v2.entry >> 8)
|
||||
}
|
||||
|
||||
off += 2
|
||||
|
||||
if off == bufoff {
|
||||
if off == 0 {
|
||||
if bufoff > dstEvery {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 1")
|
||||
}
|
||||
copy(out, buf[:bufoff])
|
||||
copy(out[dstEvery:], buf[bufoff:bufoff*2])
|
||||
copy(out[dstEvery*2:], buf[bufoff*2:bufoff*3])
|
||||
copy(out[dstEvery*3:], buf[bufoff*3:bufoff*4])
|
||||
off = 0
|
||||
copy(out, buf[0][:])
|
||||
copy(out[dstEvery:], buf[1][:])
|
||||
copy(out[dstEvery*2:], buf[2][:])
|
||||
copy(out[dstEvery*3:], buf[3][:])
|
||||
out = out[bufoff:]
|
||||
decoded += 256
|
||||
decoded += bufoff * 4
|
||||
// There must at least be 3 buffers left.
|
||||
if len(out) < dstEvery*3 {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 2")
|
||||
}
|
||||
}
|
||||
|
@ -818,12 +853,13 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) {
|
|||
if off > 0 {
|
||||
ioff := int(off)
|
||||
if len(out) < dstEvery*3+ioff {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 3")
|
||||
}
|
||||
copy(out, buf[:off])
|
||||
copy(out[dstEvery:dstEvery+ioff], buf[bufoff:bufoff*2])
|
||||
copy(out[dstEvery*2:dstEvery*2+ioff], buf[bufoff*2:bufoff*3])
|
||||
copy(out[dstEvery*3:dstEvery*3+ioff], buf[bufoff*3:bufoff*4])
|
||||
copy(out, buf[0][:off])
|
||||
copy(out[dstEvery:], buf[1][:off])
|
||||
copy(out[dstEvery*2:], buf[2][:off])
|
||||
copy(out[dstEvery*3:], buf[3][:off])
|
||||
decoded += int(off) * 4
|
||||
out = out[off:]
|
||||
}
|
||||
|
@ -853,6 +889,7 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) {
|
|||
}
|
||||
// end inline...
|
||||
if offset >= len(out) {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 4")
|
||||
}
|
||||
|
||||
|
@ -871,6 +908,7 @@ func (d *Decoder) Decompress4X(dst, src []byte) ([]byte, error) {
|
|||
return nil, err
|
||||
}
|
||||
}
|
||||
d.bufs.Put(buf)
|
||||
if dstSize != decoded {
|
||||
return nil, errors.New("corruption detected: short output block")
|
||||
}
|
||||
|
@ -916,12 +954,12 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
single := d.dt.single[:tlSize]
|
||||
|
||||
// Use temp table to avoid bound checks/append penalty.
|
||||
var buf [256]byte
|
||||
buf := d.buffer()
|
||||
var off uint8
|
||||
var decoded int
|
||||
|
||||
// Decode 4 values from each decoder/loop.
|
||||
const bufoff = 256 / 4
|
||||
const bufoff = 256
|
||||
for {
|
||||
if br[0].off < 4 || br[1].off < 4 || br[2].off < 4 || br[3].off < 4 {
|
||||
break
|
||||
|
@ -942,8 +980,8 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[off+bufoff*stream] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2] = uint8(v2 >> 8)
|
||||
buf[stream][off] = uint8(v >> 8)
|
||||
buf[stream2][off] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
|
@ -951,8 +989,8 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[off+bufoff*stream+1] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+1] = uint8(v2 >> 8)
|
||||
buf[stream][off+1] = uint8(v >> 8)
|
||||
buf[stream2][off+1] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
|
@ -960,8 +998,8 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[off+bufoff*stream+2] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+2] = uint8(v2 >> 8)
|
||||
buf[stream][off+2] = uint8(v >> 8)
|
||||
buf[stream2][off+2] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
|
@ -969,8 +1007,8 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[off+bufoff*stream2+3] = uint8(v2 >> 8)
|
||||
buf[off+bufoff*stream+3] = uint8(v >> 8)
|
||||
buf[stream][off+3] = uint8(v >> 8)
|
||||
buf[stream2][off+3] = uint8(v2 >> 8)
|
||||
}
|
||||
|
||||
{
|
||||
|
@ -987,8 +1025,8 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[off+bufoff*stream] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2] = uint8(v2 >> 8)
|
||||
buf[stream][off] = uint8(v >> 8)
|
||||
buf[stream2][off] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
|
@ -996,8 +1034,8 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[off+bufoff*stream+1] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+1] = uint8(v2 >> 8)
|
||||
buf[stream][off+1] = uint8(v >> 8)
|
||||
buf[stream2][off+1] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
|
@ -1005,8 +1043,8 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[off+bufoff*stream+2] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+2] = uint8(v2 >> 8)
|
||||
buf[stream][off+2] = uint8(v >> 8)
|
||||
buf[stream2][off+2] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
|
@ -1014,25 +1052,26 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[off+bufoff*stream2+3] = uint8(v2 >> 8)
|
||||
buf[off+bufoff*stream+3] = uint8(v >> 8)
|
||||
buf[stream][off+3] = uint8(v >> 8)
|
||||
buf[stream2][off+3] = uint8(v2 >> 8)
|
||||
}
|
||||
|
||||
off += 4
|
||||
|
||||
if off == bufoff {
|
||||
if off == 0 {
|
||||
if bufoff > dstEvery {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 1")
|
||||
}
|
||||
copy(out, buf[:bufoff])
|
||||
copy(out[dstEvery:], buf[bufoff:bufoff*2])
|
||||
copy(out[dstEvery*2:], buf[bufoff*2:bufoff*3])
|
||||
copy(out[dstEvery*3:], buf[bufoff*3:bufoff*4])
|
||||
off = 0
|
||||
copy(out, buf[0][:])
|
||||
copy(out[dstEvery:], buf[1][:])
|
||||
copy(out[dstEvery*2:], buf[2][:])
|
||||
copy(out[dstEvery*3:], buf[3][:])
|
||||
out = out[bufoff:]
|
||||
decoded += 256
|
||||
decoded += bufoff * 4
|
||||
// There must at least be 3 buffers left.
|
||||
if len(out) < dstEvery*3 {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 2")
|
||||
}
|
||||
}
|
||||
|
@ -1040,12 +1079,13 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
if off > 0 {
|
||||
ioff := int(off)
|
||||
if len(out) < dstEvery*3+ioff {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 3")
|
||||
}
|
||||
copy(out, buf[:off])
|
||||
copy(out[dstEvery:dstEvery+ioff], buf[bufoff:bufoff*2])
|
||||
copy(out[dstEvery*2:dstEvery*2+ioff], buf[bufoff*2:bufoff*3])
|
||||
copy(out[dstEvery*3:dstEvery*3+ioff], buf[bufoff*3:bufoff*4])
|
||||
copy(out, buf[0][:off])
|
||||
copy(out[dstEvery:], buf[1][:off])
|
||||
copy(out[dstEvery*2:], buf[2][:off])
|
||||
copy(out[dstEvery*3:], buf[3][:off])
|
||||
decoded += int(off) * 4
|
||||
out = out[off:]
|
||||
}
|
||||
|
@ -1057,6 +1097,7 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
bitsLeft := int(br.off*8) + int(64-br.bitsRead)
|
||||
for bitsLeft > 0 {
|
||||
if br.finished() {
|
||||
d.bufs.Put(buf)
|
||||
return nil, io.ErrUnexpectedEOF
|
||||
}
|
||||
if br.bitsRead >= 56 {
|
||||
|
@ -1077,6 +1118,7 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
}
|
||||
// end inline...
|
||||
if offset >= len(out) {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 4")
|
||||
}
|
||||
|
||||
|
@ -1091,9 +1133,11 @@ func (d *Decoder) decompress4X8bit(dst, src []byte) ([]byte, error) {
|
|||
decoded += offset - dstEvery*i
|
||||
err = br.close()
|
||||
if err != nil {
|
||||
d.bufs.Put(buf)
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
d.bufs.Put(buf)
|
||||
if dstSize != decoded {
|
||||
return nil, errors.New("corruption detected: short output block")
|
||||
}
|
||||
|
@ -1135,12 +1179,12 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) {
|
|||
single := d.dt.single[:tlSize]
|
||||
|
||||
// Use temp table to avoid bound checks/append penalty.
|
||||
var buf [256]byte
|
||||
buf := d.buffer()
|
||||
var off uint8
|
||||
var decoded int
|
||||
|
||||
// Decode 4 values from each decoder/loop.
|
||||
const bufoff = 256 / 4
|
||||
const bufoff = 256
|
||||
for {
|
||||
if br[0].off < 4 || br[1].off < 4 || br[2].off < 4 || br[3].off < 4 {
|
||||
break
|
||||
|
@ -1150,104 +1194,109 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) {
|
|||
// Interleave 2 decodes.
|
||||
const stream = 0
|
||||
const stream2 = 1
|
||||
br[stream].fillFast()
|
||||
br[stream2].fillFast()
|
||||
br1 := &br[stream]
|
||||
br2 := &br[stream2]
|
||||
br1.fillFast()
|
||||
br2.fillFast()
|
||||
|
||||
v := single[uint8(br[stream].value>>shift)].entry
|
||||
v2 := single[uint8(br[stream2].value>>shift)].entry
|
||||
br[stream].bitsRead += uint8(v)
|
||||
br[stream].value <<= v & 63
|
||||
br[stream2].bitsRead += uint8(v2)
|
||||
br[stream2].value <<= v2 & 63
|
||||
buf[off+bufoff*stream] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2] = uint8(v2 >> 8)
|
||||
v := single[uint8(br1.value>>shift)].entry
|
||||
v2 := single[uint8(br2.value>>shift)].entry
|
||||
br1.bitsRead += uint8(v)
|
||||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[stream][off] = uint8(v >> 8)
|
||||
buf[stream2][off] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br[stream].value>>shift)].entry
|
||||
v2 = single[uint8(br[stream2].value>>shift)].entry
|
||||
br[stream].bitsRead += uint8(v)
|
||||
br[stream].value <<= v & 63
|
||||
br[stream2].bitsRead += uint8(v2)
|
||||
br[stream2].value <<= v2 & 63
|
||||
buf[off+bufoff*stream+1] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+1] = uint8(v2 >> 8)
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
br1.bitsRead += uint8(v)
|
||||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[stream][off+1] = uint8(v >> 8)
|
||||
buf[stream2][off+1] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br[stream].value>>shift)].entry
|
||||
v2 = single[uint8(br[stream2].value>>shift)].entry
|
||||
br[stream].bitsRead += uint8(v)
|
||||
br[stream].value <<= v & 63
|
||||
br[stream2].bitsRead += uint8(v2)
|
||||
br[stream2].value <<= v2 & 63
|
||||
buf[off+bufoff*stream+2] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+2] = uint8(v2 >> 8)
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
br1.bitsRead += uint8(v)
|
||||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[stream][off+2] = uint8(v >> 8)
|
||||
buf[stream2][off+2] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br[stream].value>>shift)].entry
|
||||
v2 = single[uint8(br[stream2].value>>shift)].entry
|
||||
br[stream].bitsRead += uint8(v)
|
||||
br[stream].value <<= v & 63
|
||||
br[stream2].bitsRead += uint8(v2)
|
||||
br[stream2].value <<= v2 & 63
|
||||
buf[off+bufoff*stream+3] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+3] = uint8(v2 >> 8)
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
br1.bitsRead += uint8(v)
|
||||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[stream][off+3] = uint8(v >> 8)
|
||||
buf[stream2][off+3] = uint8(v2 >> 8)
|
||||
}
|
||||
|
||||
{
|
||||
const stream = 2
|
||||
const stream2 = 3
|
||||
br[stream].fillFast()
|
||||
br[stream2].fillFast()
|
||||
br1 := &br[stream]
|
||||
br2 := &br[stream2]
|
||||
br1.fillFast()
|
||||
br2.fillFast()
|
||||
|
||||
v := single[uint8(br[stream].value>>shift)].entry
|
||||
v2 := single[uint8(br[stream2].value>>shift)].entry
|
||||
br[stream].bitsRead += uint8(v)
|
||||
br[stream].value <<= v & 63
|
||||
br[stream2].bitsRead += uint8(v2)
|
||||
br[stream2].value <<= v2 & 63
|
||||
buf[off+bufoff*stream] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2] = uint8(v2 >> 8)
|
||||
v := single[uint8(br1.value>>shift)].entry
|
||||
v2 := single[uint8(br2.value>>shift)].entry
|
||||
br1.bitsRead += uint8(v)
|
||||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[stream][off] = uint8(v >> 8)
|
||||
buf[stream2][off] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br[stream].value>>shift)].entry
|
||||
v2 = single[uint8(br[stream2].value>>shift)].entry
|
||||
br[stream].bitsRead += uint8(v)
|
||||
br[stream].value <<= v & 63
|
||||
br[stream2].bitsRead += uint8(v2)
|
||||
br[stream2].value <<= v2 & 63
|
||||
buf[off+bufoff*stream+1] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+1] = uint8(v2 >> 8)
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
br1.bitsRead += uint8(v)
|
||||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[stream][off+1] = uint8(v >> 8)
|
||||
buf[stream2][off+1] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br[stream].value>>shift)].entry
|
||||
v2 = single[uint8(br[stream2].value>>shift)].entry
|
||||
br[stream].bitsRead += uint8(v)
|
||||
br[stream].value <<= v & 63
|
||||
br[stream2].bitsRead += uint8(v2)
|
||||
br[stream2].value <<= v2 & 63
|
||||
buf[off+bufoff*stream+2] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+2] = uint8(v2 >> 8)
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
br1.bitsRead += uint8(v)
|
||||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[stream][off+2] = uint8(v >> 8)
|
||||
buf[stream2][off+2] = uint8(v2 >> 8)
|
||||
|
||||
v = single[uint8(br[stream].value>>shift)].entry
|
||||
v2 = single[uint8(br[stream2].value>>shift)].entry
|
||||
br[stream].bitsRead += uint8(v)
|
||||
br[stream].value <<= v & 63
|
||||
br[stream2].bitsRead += uint8(v2)
|
||||
br[stream2].value <<= v2 & 63
|
||||
buf[off+bufoff*stream+3] = uint8(v >> 8)
|
||||
buf[off+bufoff*stream2+3] = uint8(v2 >> 8)
|
||||
v = single[uint8(br1.value>>shift)].entry
|
||||
v2 = single[uint8(br2.value>>shift)].entry
|
||||
br1.bitsRead += uint8(v)
|
||||
br1.value <<= v & 63
|
||||
br2.bitsRead += uint8(v2)
|
||||
br2.value <<= v2 & 63
|
||||
buf[stream][off+3] = uint8(v >> 8)
|
||||
buf[stream2][off+3] = uint8(v2 >> 8)
|
||||
}
|
||||
|
||||
off += 4
|
||||
|
||||
if off == bufoff {
|
||||
if off == 0 {
|
||||
if bufoff > dstEvery {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 1")
|
||||
}
|
||||
copy(out, buf[:bufoff])
|
||||
copy(out[dstEvery:], buf[bufoff:bufoff*2])
|
||||
copy(out[dstEvery*2:], buf[bufoff*2:bufoff*3])
|
||||
copy(out[dstEvery*3:], buf[bufoff*3:bufoff*4])
|
||||
off = 0
|
||||
copy(out, buf[0][:])
|
||||
copy(out[dstEvery:], buf[1][:])
|
||||
copy(out[dstEvery*2:], buf[2][:])
|
||||
copy(out[dstEvery*3:], buf[3][:])
|
||||
out = out[bufoff:]
|
||||
decoded += 256
|
||||
decoded += bufoff * 4
|
||||
// There must at least be 3 buffers left.
|
||||
if len(out) < dstEvery*3 {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 2")
|
||||
}
|
||||
}
|
||||
|
@ -1257,10 +1306,10 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) {
|
|||
if len(out) < dstEvery*3+ioff {
|
||||
return nil, errors.New("corruption detected: stream overrun 3")
|
||||
}
|
||||
copy(out, buf[:off])
|
||||
copy(out[dstEvery:dstEvery+ioff], buf[bufoff:bufoff*2])
|
||||
copy(out[dstEvery*2:dstEvery*2+ioff], buf[bufoff*2:bufoff*3])
|
||||
copy(out[dstEvery*3:dstEvery*3+ioff], buf[bufoff*3:bufoff*4])
|
||||
copy(out, buf[0][:off])
|
||||
copy(out[dstEvery:], buf[1][:off])
|
||||
copy(out[dstEvery*2:], buf[2][:off])
|
||||
copy(out[dstEvery*3:], buf[3][:off])
|
||||
decoded += int(off) * 4
|
||||
out = out[off:]
|
||||
}
|
||||
|
@ -1272,6 +1321,7 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) {
|
|||
bitsLeft := int(br.off*8) + int(64-br.bitsRead)
|
||||
for bitsLeft > 0 {
|
||||
if br.finished() {
|
||||
d.bufs.Put(buf)
|
||||
return nil, io.ErrUnexpectedEOF
|
||||
}
|
||||
if br.bitsRead >= 56 {
|
||||
|
@ -1292,6 +1342,7 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) {
|
|||
}
|
||||
// end inline...
|
||||
if offset >= len(out) {
|
||||
d.bufs.Put(buf)
|
||||
return nil, errors.New("corruption detected: stream overrun 4")
|
||||
}
|
||||
|
||||
|
@ -1306,9 +1357,11 @@ func (d *Decoder) decompress4X8bitExactly(dst, src []byte) ([]byte, error) {
|
|||
decoded += offset - dstEvery*i
|
||||
err = br.close()
|
||||
if err != nil {
|
||||
d.bufs.Put(buf)
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
d.bufs.Put(buf)
|
||||
if dstSize != decoded {
|
||||
return nil, errors.New("corruption detected: short output block")
|
||||
}
|
||||
|
|
2
vendor/github.com/klauspost/compress/huff0/huff0.go
generated
vendored
2
vendor/github.com/klauspost/compress/huff0/huff0.go
generated
vendored
|
@ -8,6 +8,7 @@ import (
|
|||
"fmt"
|
||||
"math"
|
||||
"math/bits"
|
||||
"sync"
|
||||
|
||||
"github.com/klauspost/compress/fse"
|
||||
)
|
||||
|
@ -116,6 +117,7 @@ type Scratch struct {
|
|||
nodes []nodeElt
|
||||
tmpOut [4][]byte
|
||||
fse *fse.Scratch
|
||||
decPool sync.Pool // *[4][256]byte buffers.
|
||||
huffWeight [maxSymbolValue + 1]byte
|
||||
}
|
||||
|
||||
|
|
4
vendor/github.com/klauspost/compress/zstd/enc_fast.go
generated
vendored
4
vendor/github.com/klauspost/compress/zstd/enc_fast.go
generated
vendored
|
@ -85,7 +85,7 @@ func (e *fastEncoder) Encode(blk *blockEnc, src []byte) {
|
|||
// TEMPLATE
|
||||
const hashLog = tableBits
|
||||
// seems global, but would be nice to tweak.
|
||||
const kSearchStrength = 7
|
||||
const kSearchStrength = 6
|
||||
|
||||
// nextEmit is where in src the next emitLiteral should start from.
|
||||
nextEmit := s
|
||||
|
@ -334,7 +334,7 @@ func (e *fastEncoder) EncodeNoHist(blk *blockEnc, src []byte) {
|
|||
// TEMPLATE
|
||||
const hashLog = tableBits
|
||||
// seems global, but would be nice to tweak.
|
||||
const kSearchStrength = 8
|
||||
const kSearchStrength = 6
|
||||
|
||||
// nextEmit is where in src the next emitLiteral should start from.
|
||||
nextEmit := s
|
||||
|
|
7
vendor/golang.org/x/sys/unix/syscall_linux.go
generated
vendored
7
vendor/golang.org/x/sys/unix/syscall_linux.go
generated
vendored
|
@ -250,6 +250,13 @@ func Getwd() (wd string, err error) {
|
|||
if n < 1 || n > len(buf) || buf[n-1] != 0 {
|
||||
return "", EINVAL
|
||||
}
|
||||
// In some cases, Linux can return a path that starts with the
|
||||
// "(unreachable)" prefix, which can potentially be a valid relative
|
||||
// path. To work around that, return ENOENT if path is not absolute.
|
||||
if buf[0] != '/' {
|
||||
return "", ENOENT
|
||||
}
|
||||
|
||||
return string(buf[0 : n-1]), nil
|
||||
}
|
||||
|
||||
|
|
8
vendor/golang.org/x/sys/unix/syscall_linux_386.go
generated
vendored
8
vendor/golang.org/x/sys/unix/syscall_linux_386.go
generated
vendored
|
@ -173,14 +173,6 @@ const (
|
|||
_SENDMMSG = 20
|
||||
)
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) {
|
||||
fd, e := socketcall(_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), 0, 0, 0)
|
||||
if e != 0 {
|
||||
err = e
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) {
|
||||
fd, e := socketcall(_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
|
||||
if e != 0 {
|
||||
|
|
1
vendor/golang.org/x/sys/unix/syscall_linux_amd64.go
generated
vendored
1
vendor/golang.org/x/sys/unix/syscall_linux_amd64.go
generated
vendored
|
@ -62,7 +62,6 @@ func Stat(path string, stat *Stat_t) (err error) {
|
|||
//sys SyncFileRange(fd int, off int64, n int64, flags int) (err error)
|
||||
//sys Truncate(path string, length int64) (err error)
|
||||
//sys Ustat(dev int, ubuf *Ustat_t) (err error)
|
||||
//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error)
|
||||
//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error)
|
||||
//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
|
|
1
vendor/golang.org/x/sys/unix/syscall_linux_arm.go
generated
vendored
1
vendor/golang.org/x/sys/unix/syscall_linux_arm.go
generated
vendored
|
@ -27,7 +27,6 @@ func Seek(fd int, offset int64, whence int) (newoffset int64, err error) {
|
|||
return newoffset, nil
|
||||
}
|
||||
|
||||
//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error)
|
||||
//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error)
|
||||
//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
|
|
1
vendor/golang.org/x/sys/unix/syscall_linux_arm64.go
generated
vendored
1
vendor/golang.org/x/sys/unix/syscall_linux_arm64.go
generated
vendored
|
@ -66,7 +66,6 @@ func Ustat(dev int, ubuf *Ustat_t) (err error) {
|
|||
return ENOSYS
|
||||
}
|
||||
|
||||
//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error)
|
||||
//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error)
|
||||
//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
|
|
1
vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go
generated
vendored
1
vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go
generated
vendored
|
@ -48,7 +48,6 @@ func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err
|
|||
//sys SyncFileRange(fd int, off int64, n int64, flags int) (err error)
|
||||
//sys Truncate(path string, length int64) (err error)
|
||||
//sys Ustat(dev int, ubuf *Ustat_t) (err error)
|
||||
//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error)
|
||||
//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error)
|
||||
//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
|
|
1
vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go
generated
vendored
1
vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go
generated
vendored
|
@ -41,7 +41,6 @@ func Syscall9(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr,
|
|||
//sys SyncFileRange(fd int, off int64, n int64, flags int) (err error)
|
||||
//sys Truncate(path string, length int64) (err error) = SYS_TRUNCATE64
|
||||
//sys Ustat(dev int, ubuf *Ustat_t) (err error)
|
||||
//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error)
|
||||
//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error)
|
||||
//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
|
|
1
vendor/golang.org/x/sys/unix/syscall_linux_ppc.go
generated
vendored
1
vendor/golang.org/x/sys/unix/syscall_linux_ppc.go
generated
vendored
|
@ -43,7 +43,6 @@ import (
|
|||
//sys Stat(path string, stat *Stat_t) (err error) = SYS_STAT64
|
||||
//sys Truncate(path string, length int64) (err error) = SYS_TRUNCATE64
|
||||
//sys Ustat(dev int, ubuf *Ustat_t) (err error)
|
||||
//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error)
|
||||
//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error)
|
||||
//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
|
|
1
vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go
generated
vendored
1
vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go
generated
vendored
|
@ -45,7 +45,6 @@ package unix
|
|||
//sys Statfs(path string, buf *Statfs_t) (err error)
|
||||
//sys Truncate(path string, length int64) (err error)
|
||||
//sys Ustat(dev int, ubuf *Ustat_t) (err error)
|
||||
//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error)
|
||||
//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error)
|
||||
//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
|
|
1
vendor/golang.org/x/sys/unix/syscall_linux_riscv64.go
generated
vendored
1
vendor/golang.org/x/sys/unix/syscall_linux_riscv64.go
generated
vendored
|
@ -65,7 +65,6 @@ func Ustat(dev int, ubuf *Ustat_t) (err error) {
|
|||
return ENOSYS
|
||||
}
|
||||
|
||||
//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error)
|
||||
//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error)
|
||||
//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
|
|
9
vendor/golang.org/x/sys/unix/syscall_linux_s390x.go
generated
vendored
9
vendor/golang.org/x/sys/unix/syscall_linux_s390x.go
generated
vendored
|
@ -145,15 +145,6 @@ const (
|
|||
netSendMMsg = 20
|
||||
)
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (int, error) {
|
||||
args := [3]uintptr{uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))}
|
||||
fd, _, err := Syscall(SYS_SOCKETCALL, netAccept, uintptr(unsafe.Pointer(&args)), 0)
|
||||
if err != 0 {
|
||||
return 0, err
|
||||
}
|
||||
return int(fd), nil
|
||||
}
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (int, error) {
|
||||
args := [4]uintptr{uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags)}
|
||||
fd, _, err := Syscall(SYS_SOCKETCALL, netAccept4, uintptr(unsafe.Pointer(&args)), 0)
|
||||
|
|
1
vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go
generated
vendored
1
vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go
generated
vendored
|
@ -42,7 +42,6 @@ package unix
|
|||
//sys Statfs(path string, buf *Statfs_t) (err error)
|
||||
//sys SyncFileRange(fd int, off int64, n int64, flags int) (err error)
|
||||
//sys Truncate(path string, length int64) (err error)
|
||||
//sys accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error)
|
||||
//sys accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error)
|
||||
//sys bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
//sys connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error)
|
||||
|
|
11
vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go
generated
vendored
11
vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go
generated
vendored
|
@ -444,17 +444,6 @@ func Ustat(dev int, ubuf *Ustat_t) (err error) {
|
|||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) {
|
||||
r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)))
|
||||
fd = int(r0)
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) {
|
||||
r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
|
||||
fd = int(r0)
|
||||
|
|
11
vendor/golang.org/x/sys/unix/zsyscall_linux_arm.go
generated
vendored
11
vendor/golang.org/x/sys/unix/zsyscall_linux_arm.go
generated
vendored
|
@ -46,17 +46,6 @@ func Tee(rfd int, wfd int, len int, flags int) (n int64, err error) {
|
|||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) {
|
||||
r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)))
|
||||
fd = int(r0)
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) {
|
||||
r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
|
||||
fd = int(r0)
|
||||
|
|
11
vendor/golang.org/x/sys/unix/zsyscall_linux_arm64.go
generated
vendored
11
vendor/golang.org/x/sys/unix/zsyscall_linux_arm64.go
generated
vendored
|
@ -389,17 +389,6 @@ func Truncate(path string, length int64) (err error) {
|
|||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) {
|
||||
r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)))
|
||||
fd = int(r0)
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) {
|
||||
r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
|
||||
fd = int(r0)
|
||||
|
|
11
vendor/golang.org/x/sys/unix/zsyscall_linux_mips.go
generated
vendored
11
vendor/golang.org/x/sys/unix/zsyscall_linux_mips.go
generated
vendored
|
@ -344,17 +344,6 @@ func Ustat(dev int, ubuf *Ustat_t) (err error) {
|
|||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) {
|
||||
r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)))
|
||||
fd = int(r0)
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) {
|
||||
r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
|
||||
fd = int(r0)
|
||||
|
|
11
vendor/golang.org/x/sys/unix/zsyscall_linux_mips64.go
generated
vendored
11
vendor/golang.org/x/sys/unix/zsyscall_linux_mips64.go
generated
vendored
|
@ -399,17 +399,6 @@ func Ustat(dev int, ubuf *Ustat_t) (err error) {
|
|||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) {
|
||||
r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)))
|
||||
fd = int(r0)
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) {
|
||||
r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
|
||||
fd = int(r0)
|
||||
|
|
11
vendor/golang.org/x/sys/unix/zsyscall_linux_mips64le.go
generated
vendored
11
vendor/golang.org/x/sys/unix/zsyscall_linux_mips64le.go
generated
vendored
|
@ -399,17 +399,6 @@ func Ustat(dev int, ubuf *Ustat_t) (err error) {
|
|||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) {
|
||||
r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)))
|
||||
fd = int(r0)
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) {
|
||||
r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
|
||||
fd = int(r0)
|
||||
|
|
11
vendor/golang.org/x/sys/unix/zsyscall_linux_mipsle.go
generated
vendored
11
vendor/golang.org/x/sys/unix/zsyscall_linux_mipsle.go
generated
vendored
|
@ -344,17 +344,6 @@ func Ustat(dev int, ubuf *Ustat_t) (err error) {
|
|||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) {
|
||||
r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)))
|
||||
fd = int(r0)
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) {
|
||||
r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
|
||||
fd = int(r0)
|
||||
|
|
11
vendor/golang.org/x/sys/unix/zsyscall_linux_ppc.go
generated
vendored
11
vendor/golang.org/x/sys/unix/zsyscall_linux_ppc.go
generated
vendored
|
@ -409,17 +409,6 @@ func Ustat(dev int, ubuf *Ustat_t) (err error) {
|
|||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) {
|
||||
r0, _, e1 := Syscall(SYS_ACCEPT, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)))
|
||||
fd = int(r0)
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func accept4(s int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (fd int, err error) {
|
||||
r0, _, e1 := Syscall6(SYS_ACCEPT4, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen)), uintptr(flags), 0, 0)
|
||||
fd = int(r0)
|
||||
|
|
Some files were not shown because too many files have changed in this diff Show more
Loading…
Reference in a new issue