Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git

Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

Commit 1a9cb85647
74 changed files with 1664 additions and 746 deletions

README.md (14 changes)
@@ -432,6 +432,10 @@ This command should return the following output if everything is OK:

{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
```

VictoriaMetrics automatically sanitizes metric names for the data ingested via DataDog protocol
according to [DataDog metric naming recommendations](https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics).
If you need to accept metric names as is, without sanitizing, then pass the `-datadog.sanitizeMetricName=false` command-line flag to VictoriaMetrics.

Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add the `{foo="bar"}` label to all the ingested metrics.
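As a quick illustration of the DataDog ingestion path described above, here is a minimal Go sketch (not part of the commit) that POSTs a single DataDog-format series to a single-node VictoriaMetrics instance, assumed to listen on localhost:8428, and adds the `{foo="bar"}` label via the `extra_label` query arg. The payload shape follows the DataDog "submit metrics" v1 API.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Assumed single-node VictoriaMetrics address; adjust to your setup.
	const vmAddr = "http://localhost:8428"
	// Minimal DataDog-style payload matching the example series shown above.
	payload := []byte(`{"series":[{"metric":"system.load.1","points":[[1632833641,0.5]],"host":"test.example.com","tags":["environment:test"]}]}`)
	// extra_label=foo=bar adds {foo="bar"} to every series ingested from this request.
	resp, err := http.Post(vmAddr+"/datadog/api/v1/series?extra_label=foo=bar", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("response status:", resp.Status) // a 2xx status means the payload was accepted
}
```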
@@ -465,6 +469,9 @@ VictoriaMetrics performs the following transformations to the ingested InfluxDB

* Field names are mapped to time series names prefixed with `{measurement}{separator}` value, where `{separator}` equals to `_` by default. It can be changed with `-influxMeasurementFieldSeparator` command-line flag. See also `-influxSkipSingleField` command-line flag. If `{measurement}` is empty or if `-influxSkipMeasurement` command-line flag is set, then time series names correspond to field names.
* Field values are mapped to time series values.
* Tags are mapped to Prometheus labels as-is.
* If `-usePromCompatibleNaming` command-line flag is set, then all the metric names and label names
  are normalized to [Prometheus-compatible naming](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels) by replacing unsupported chars with `_`.
  For example, `foo.bar-baz/1` metric name or label name is substituted with `foo_bar_baz_1`.

For example, the following InfluxDB line:
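The normalization rule above is easy to see in isolation. A tiny Go sketch, using the same `[^a-zA-Z0-9_:]` pattern that appears in the relabeling code later in this diff, shows how metric and label names are rewritten:

```go
package main

import (
	"fmt"
	"regexp"
)

// Characters outside [a-zA-Z0-9_:] are unsupported by Prometheus naming rules.
var unsupportedPromChars = regexp.MustCompile(`[^a-zA-Z0-9_:]`)

func main() {
	fmt.Println(unsupportedPromChars.ReplaceAllString("foo.bar-baz/1", "_")) // foo_bar_baz_1
	fmt.Println(unsupportedPromChars.ReplaceAllString("a.b", "_"))           // a_b
}
```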
@@ -1520,7 +1527,8 @@ VictoriaMetrics exposes currently running queries and their execution times at `

VictoriaMetrics exposes queries, which take the most time to execute, at `/api/v1/status/top_queries` page.

See also [troubleshooting docs](https://docs.victoriametrics.com/Troubleshooting.html).
See also [VictoriaMetrics Monitoring](https://victoriametrics.com/blog/victoriametrics-monitoring/)
and [troubleshooting docs](https://docs.victoriametrics.com/Troubleshooting.html).

## TSDB stats
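For completeness, a minimal Go sketch (not part of the commit) that fetches the `/api/v1/status/top_queries` page mentioned above from a single-node instance assumed to listen on localhost:8428:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed single-node VictoriaMetrics address; adjust to your setup.
	resp, err := http.Get("http://localhost:8428/api/v1/status/top_queries")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // JSON listing the queries with the longest execution times
}
```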
@@ -1993,6 +2001,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li

  -datadog.maxInsertRequestSize size
     The maximum size in bytes of a single DataDog POST request to /api/v1/series
     Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 67108864)
  -datadog.sanitizeMetricName
     Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics (default true)
  -dedup.minScrapeInterval duration
     Leave only the last sample in every time series per each discrete interval equal to -dedup.minScrapeInterval > 0. See https://docs.victoriametrics.com/#deduplication and https://docs.victoriametrics.com/#downsampling
  -deleteAuthKey string

@@ -2332,6 +2342,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li

     Supports an array of values separated by comma or specified via multiple flags.
  -tlsKeyFile string
     Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
  -usePromCompatibleNaming
     Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
  -version
     Show VictoriaMetrics version
  -vmalert.proxyURL string
@@ -915,6 +915,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .

  -datadog.maxInsertRequestSize size
     The maximum size in bytes of a single DataDog POST request to /api/v1/series
     Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 67108864)
  -datadog.sanitizeMetricName
     Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics (default true)
  -denyQueryTracing
     Whether to disable the ability to trace queries. See https://docs.victoriametrics.com/#query-tracing
  -dryRun

@@ -1267,6 +1269,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .

     Supports an array of values separated by comma or specified via multiple flags.
  -tlsKeyFile string
     Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
  -usePromCompatibleNaming
     Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
  -version
     Show VictoriaMetrics version
```
@@ -3,6 +3,7 @@ package remotewrite

import (
	"flag"
	"fmt"
	"regexp"
	"strings"
	"sync"

@@ -25,6 +26,10 @@ var (

	relabelDebug = flagutil.NewArrayBool("remoteWrite.urlRelabelDebug", "Whether to log metrics before and after relabeling with -remoteWrite.urlRelabelConfig. "+
		"If the -remoteWrite.urlRelabelDebug is enabled, then the metrics aren't sent to the corresponding -remoteWrite.url. "+
		"This is useful for debugging the relabeling configs")

	usePromCompatibleNaming = flag.Bool("usePromCompatibleNaming", false, "Whether to replace characters unsupported by Prometheus with underscores "+
		"in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. "+
		"See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels")
)

var labelsGlobal []prompbmarshal.Label

@@ -107,6 +112,18 @@ func (rctx *relabelCtx) applyRelabeling(tss []prompbmarshal.TimeSeries, extraLab

				labels = append(labels, *extraLabel)
			}
		}
		if *usePromCompatibleNaming {
			// Replace unsupported Prometheus chars in label names and metric names with underscores.
			tmpLabels := labels[labelsLen:]
			for j := range tmpLabels {
				label := &tmpLabels[j]
				if label.Name == "__name__" {
					label.Value = unsupportedPromChars.ReplaceAllString(label.Value, "_")
				} else {
					label.Name = unsupportedPromChars.ReplaceAllString(label.Name, "_")
				}
			}
		}
		labels = pcs.Apply(labels, labelsLen, true)
		if len(labels) == labelsLen {
			// Drop the current time series, since relabeling removed all the labels.

@@ -121,6 +138,9 @@ func (rctx *relabelCtx) applyRelabeling(tss []prompbmarshal.TimeSeries, extraLab

	return tssDst
}

// See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
var unsupportedPromChars = regexp.MustCompile(`[^a-zA-Z0-9_:]`)

type relabelCtx struct {
	// pool for labels, which are used during the relabeling.
	labels []prompbmarshal.Label
@@ -620,7 +620,7 @@ There are following non-required `replay` flags:

The progress bar may generate a lot of log records, which are not formatted by the standard VictoriaMetrics logger.
This could break log parsing by external systems and put additional load on them.

See full description for these flags in `./vmalert --help`.
See full description for these flags in `./vmalert -help`.

### Limitations

@@ -650,6 +650,11 @@ may get empty response from datasource and produce empty recording rules or rese

<img alt="vmalert evaluation when data is delayed" src="vmalert_ts_data_delay.gif">

_By default, samples recently written to VictoriaMetrics aren't visible to queries for up to 30s.
This behavior is controlled by the `-search.latencyOffset` command-line flag on vmselect. Usually this results in
a 30s shift for recording rule results.
Note that too small a value passed to `-search.latencyOffset` may lead to incomplete query results._

Try the following recommendations in such cases:

* Always configure group's `evaluationInterval` to be bigger or equal to `scrape_interval` at which metrics

@@ -658,7 +663,7 @@ are delivered to the datasource;

  command-line flag to add a time shift for evaluations;
* If time intervals between datapoints in the datasource are irregular, try changing vmalert's `-datasource.queryStep`
  command-line flag to specify how far the search query can look back for the recent datapoint. By default, this value
  is equal to group's `evaluationInterval`.
  is equal to group's evaluation interval.

Sometimes it is not clear why some specific alert fired or didn't fire. It is very important to remember that
alerts with `for: 0` fire immediately when their expression becomes true. And alerts with `for > 0` will fire only

@@ -767,7 +772,7 @@ The shortlist of configuration flags is the following:

  -datasource.oauth2.tokenUrl string
     Optional OAuth2 tokenURL to use for -datasource.url.
  -datasource.queryStep duration
     queryStep defines how far a value can fallback to when evaluating queries. For example, if datasource.queryStep=15s then param "step" with value "15s" will be added to every query.If queryStep isn't specified, rule's evaluationInterval will be used instead.
     How far a value can fallback to when evaluating queries. For example, if -datasource.queryStep=15s then param "step" with value "15s" will be added to every query. If set to 0, rule's evaluation interval will be used instead. (default 5m0s)
  -datasource.queryTimeAlignment
     Whether to align "time" parameter with evaluation interval.Alignment supposed to produce deterministic results despite of number of vmalert replicas or time they were started. See more details here https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1257 (default true)
  -datasource.roundDigits int
@@ -6,6 +6,7 @@ import (

	"net/http"
	"net/url"
	"strings"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"

@@ -42,9 +43,9 @@ var (

	oauth2Scopes = flag.String("datasource.oauth2.scopes", "", "Optional OAuth2 scopes to use for -datasource.url. Scopes must be delimited by ';'")

	lookBack = flag.Duration("datasource.lookback", 0, `Lookback defines how far into the past to look when evaluating queries. For example, if the datasource.lookback=5m then param "time" with value now()-5m will be added to every query.`)
	queryStep = flag.Duration("datasource.queryStep", 0, "queryStep defines how far a value can fallback to when evaluating queries. "+
		"For example, if datasource.queryStep=15s then param \"step\" with value \"15s\" will be added to every query."+
		"If queryStep isn't specified, rule's evaluationInterval will be used instead.")
	queryStep = flag.Duration("datasource.queryStep", 5*time.Minute, "How far a value can fallback to when evaluating queries. "+
		"For example, if -datasource.queryStep=15s then param \"step\" with value \"15s\" will be added to every query. "+
		"If set to 0, rule's evaluation interval will be used instead.")
	queryTimeAlignment = flag.Bool("datasource.queryTimeAlignment", true, `Whether to align "time" parameter with evaluation interval.`+
		"Alignment supposed to produce deterministic results despite of number of vmalert replicas or time they were started. See more details here https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1257")
	maxIdleConnections = flag.Int("datasource.maxIdleConnections", 100, `Defines the number of idle (keep-alive connections) to each configured datasource. Consider setting this value equal to the value: groups_total * group.concurrency. Too low a value may result in a high number of sockets in TIME_WAIT state.`)
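To make the behaviour of the new default concrete, here is an illustrative Go sketch (not vmalert's actual implementation; `buildInstantQueryURL` is a hypothetical helper) of how `-datasource.queryStep` translates into the `step` query parameter, falling back to the group's evaluation interval when it is set to 0:

```go
package main

import (
	"fmt"
	"net/url"
	"time"
)

// buildInstantQueryURL mirrors the flag description above: it attaches a "step" param
// equal to queryStep, or to the group's evaluation interval when queryStep is 0.
func buildInstantQueryURL(datasourceURL, expr string, queryStep, evalInterval time.Duration) string {
	step := queryStep
	if step == 0 {
		step = evalInterval
	}
	params := url.Values{}
	params.Set("query", expr)
	params.Set("step", step.String())
	return datasourceURL + "/api/v1/query?" + params.Encode()
}

func main() {
	// With the new 5m default: step=5m0s.
	fmt.Println(buildInstantQueryURL("http://localhost:8428", "up", 5*time.Minute, 30*time.Second))
	// With -datasource.queryStep=0: step falls back to the 30s evaluation interval.
	fmt.Println(buildInstantQueryURL("http://localhost:8428", "up", 0, 30*time.Second))
}
```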
@@ -3,6 +3,7 @@ package relabel

import (
	"flag"
	"fmt"
	"regexp"
	"sync/atomic"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"

@@ -20,6 +21,10 @@ var (

		"See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal")
	relabelDebug = flag.Bool("relabelDebug", false, "Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, "+
		"then the metrics aren't sent to storage. This is useful for debugging the relabeling configs")

	usePromCompatibleNaming = flag.Bool("usePromCompatibleNaming", false, "Whether to replace characters unsupported by Prometheus with underscores "+
		"in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. "+
		"See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels")
)

// Init must be called after flag.Parse and before using the relabel package.

@@ -67,7 +72,7 @@ func loadRelabelConfig() (*promrelabel.ParsedConfigs, error) {

// HasRelabeling returns true if there is global relabeling.
func HasRelabeling() bool {
	pcs := pcsGlobal.Load().(*promrelabel.ParsedConfigs)
	return pcs.Len() > 0
	return pcs.Len() > 0 || *usePromCompatibleNaming
}

// Ctx holds relabeling context.

@@ -87,11 +92,11 @@ func (ctx *Ctx) Reset() {

// The returned labels are valid until the next call to ApplyRelabeling.
func (ctx *Ctx) ApplyRelabeling(labels []prompb.Label) []prompb.Label {
	pcs := pcsGlobal.Load().(*promrelabel.ParsedConfigs)
	if pcs.Len() == 0 {
	if pcs.Len() == 0 && !*usePromCompatibleNaming {
		// There are no relabeling rules.
		return labels
	}
	// Convert src to prompbmarshal.Label format suitable for relabeling.
	// Convert labels to prompbmarshal.Label format suitable for relabeling.
	tmpLabels := ctx.tmpLabels[:0]
	for _, label := range labels {
		name := bytesutil.ToUnsafeString(label.Name)

@@ -105,13 +110,28 @@ func (ctx *Ctx) ApplyRelabeling(labels []prompb.Label) []prompb.Label {

		})
	}

	// Apply relabeling
	tmpLabels = pcs.Apply(tmpLabels, 0, true)
	ctx.tmpLabels = tmpLabels
	if len(tmpLabels) == 0 {
		metricsDropped.Inc()
	if *usePromCompatibleNaming {
		// Replace unsupported Prometheus chars in label names and metric names with underscores.
		for i := range tmpLabels {
			label := &tmpLabels[i]
			if label.Name == "__name__" {
				label.Value = unsupportedPromChars.ReplaceAllString(label.Value, "_")
			} else {
				label.Name = unsupportedPromChars.ReplaceAllString(label.Name, "_")
			}
		}
	}

	if pcs.Len() > 0 {
		// Apply relabeling
		tmpLabels = pcs.Apply(tmpLabels, 0, true)
		if len(tmpLabels) == 0 {
			metricsDropped.Inc()
		}
	}

	ctx.tmpLabels = tmpLabels

	// Return back labels to the desired format.
	dst := labels[:0]
	for _, label := range tmpLabels {

@@ -129,3 +149,6 @@ func (ctx *Ctx) ApplyRelabeling(labels []prompb.Label) []prompb.Label {

}

var metricsDropped = metrics.NewCounter(`vm_relabel_metrics_dropped_total`)

// See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
var unsupportedPromChars = regexp.MustCompile(`[^a-zA-Z0-9_:]`)
@@ -1495,11 +1495,8 @@ func rollupDelta(rfa *rollupFuncArg) float64 {

		// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/894
		return values[len(values)-1] - rfa.realPrevValue
	}
	// Assume that the previous non-existing value was 0 only in the following cases:
	//
	// - If the delta with the next value equals to 0.
	//   This is the case for slow-changing counter - see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/962
	// - If the first value doesn't exceed too much the delta with the next value.
	// Assume that the previous non-existing value was 0
	// only if the first value doesn't exceed too much the delta with the next value.
	//
	// This should prevent from improper increase() results for os-level counters
	// such as cpu time or bytes sent over the network interface.

@@ -1513,9 +1510,6 @@ func rollupDelta(rfa *rollupFuncArg) float64 {

	} else if !math.IsNaN(rfa.realNextValue) {
		d = rfa.realNextValue - values[0]
	}
	if d == 0 {
		d = 10
	}
	if math.Abs(values[0]) < 10*(math.Abs(d)+1) {
		prevValue = 0
	} else {
@@ -1425,19 +1425,16 @@ func TestRollupDelta(t *testing.T) {

	// Small initial value
	f(nan, nan, nan, []float64{1}, 1)
	f(nan, nan, nan, []float64{10}, 10)
	f(nan, nan, nan, []float64{100}, 100)
	f(nan, nan, nan, []float64{10}, 0)
	f(nan, nan, nan, []float64{100}, 0)
	f(nan, nan, nan, []float64{1, 2, 3}, 3)
	f(1, nan, nan, []float64{1, 2, 3}, 2)
	f(nan, nan, nan, []float64{5, 6, 8}, 8)
	f(2, nan, nan, []float64{5, 6, 8}, 6)

	// Moderate initial value with zero delta after that.
	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/962
	f(nan, nan, nan, []float64{100}, 100)
	f(nan, nan, nan, []float64{100, 100}, 100)
	f(nan, nan, nan, []float64{100, 100}, 0)

	// Big initial value with with zero delta after that.
	// Big initial value with zero delta after that.
	f(nan, nan, nan, []float64{1000}, 0)
	f(nan, nan, nan, []float64{1000, 1000}, 0)
@@ -60,7 +60,7 @@

"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"iteration": 1663336027743,
"iteration": 1663928613745,
"links": [
  {
    "icon": "doc",

@@ -118,7 +118,8 @@

"mode": "absolute",
"steps": [
  {
    "color": "green"
    "color": "green",
    "value": null
  }
]
},

@@ -185,7 +186,8 @@

"mode": "absolute",
"steps": [
  {
    "color": "green"
    "color": "green",
    "value": null
  }
]
},

@@ -253,7 +255,8 @@

"mode": "absolute",
"steps": [
  {
    "color": "green"
    "color": "green",
    "value": null
  }
]
},

@@ -320,7 +323,8 @@

"mode": "absolute",
"steps": [
  {
    "color": "green"
    "color": "green",
    "value": null
  }
]
},

@@ -376,7 +380,7 @@

"datasource": {
  "uid": "$ds"
},
"description": "How many entries inverted index contains. This value is proportional to the number of unique timeseries in storage(cardinality).",
"description": "Shows the number of active time series with new data points inserted during the last hour. High value may result in ingestion slowdown. \n\nSee more details here https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series",
"fieldConfig": {
  "defaults": {
    "color": {

@@ -387,7 +391,8 @@

"mode": "absolute",
"steps": [
  {
    "color": "green"
    "color": "green",
    "value": null
  }
]
},

@@ -427,7 +432,7 @@

  "uid": "$ds"
},
"exemplar": true,
"expr": "sum(vm_rows{job=~\"$job_storage\", type=\"indexdb\"})",
"expr": "sum(vm_cache_entries{job=~\"$job\", instance=~\"$instance.*\", type=\"storage/hour_metric_ids\"})",
"format": "time_series",
"instant": true,
"interval": "",

@@ -436,7 +441,7 @@

    "refId": "A"
  }
],
"title": "Index size",
"title": "Active series",
"type": "stat"
},
{

@@ -455,7 +460,8 @@

"mode": "absolute",
"steps": [
  {
    "color": "green"
    "color": "green",
    "value": null
  }
]
},

@@ -522,7 +528,8 @@

"mode": "absolute",
"steps": [
  {
    "color": "green"
    "color": "green",
    "value": null
  }
]
},

@@ -590,7 +597,8 @@

"mode": "absolute",
"steps": [
  {
    "color": "green"
    "color": "green",
    "value": null
  }
]
},

@@ -663,7 +671,8 @@

"mode": "absolute",
"steps": [
  {
    "color": "green"
    "color": "green",
    "value": null
  },
  {
    "color": "red",

@@ -8067,8 +8076,8 @@

{
  "current": {
    "selected": false,
    "text": "VictoriaMetrics",
    "value": "VictoriaMetrics"
    "text": "VictoriaMetrics - cluster",
    "value": "VictoriaMetrics - cluster"
  },
  "hide": 0,
  "includeAll": false,

@@ -8235,4 +8244,4 @@

"uid": "oS7Bi_0Wz",
"version": 1,
"weekStart": ""
}
}
@@ -61,7 +61,7 @@

"gnetId": 10229,
"graphTooltip": 0,
"id": null,
"iteration": 1663338736864,
"iteration": 1663928777219,
"links": [
  {
    "icon": "doc",

@@ -484,7 +484,7 @@

"datasource": {
  "uid": "$ds"
},
"description": "How many entries inverted index contains. This value is proportional to the number of unique timeseries in storage(cardinality).",
"description": "Shows the number of active time series with new data points inserted during the last hour. High value may result in ingestion slowdown. \n\nSee more details here https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series",
"fieldConfig": {
  "defaults": {
    "color": {

@@ -535,7 +535,7 @@

  "uid": "$ds"
},
"exemplar": true,
"expr": "sum(vm_rows{job=~\"$job\", instance=~\"$instance\", type=\"indexdb\"})",
"expr": "vm_cache_entries{job=~\"$job\", instance=~\"$instance\", type=\"storage/hour_metric_ids\"}",
"format": "time_series",
"instant": true,
"interval": "",

@@ -544,7 +544,7 @@

    "refId": "A"
  }
],
"title": "Index size",
"title": "Active series",
"type": "stat"
},
{
@@ -177,3 +177,16 @@ package-via-docker-386:

remove-docker-images:
	docker image ls --format '{{.Repository}}\t{{.ID}}' | awk '{print $$2}' | xargs docker image rm -f

docker-single-up:
	docker-compose -f deployment/docker/docker-compose.yml up

docker-single-down:
	docker-compose -f deployment/docker/docker-compose.yml down -v

docker-cluster-up:
	docker-compose -f deployment/docker/docker-compose-cluster.yml up

docker-cluster-down:
	docker-compose -f deployment/docker/docker-compose-cluster.yml down -v
@@ -1,12 +1,33 @@

# Docker compose environment for VictoriaMetrics

To spin-up VictoriaMetrics, vmagent, vmalert, Alertmanager and Grafana run the following command:
The Docker compose environment for VictoriaMetrics includes VictoriaMetrics components,
[Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/)
and [Grafana](https://grafana.com/).

`docker-compose up`
To start the docker-compose environment, ensure that Docker is installed and running and that you have access to the Internet.
All commands should be executed from the root directory of this repo.

For clustered version check [docker compose in cluster branch](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster/deployment/docker).
To spin up the environment for single-server VictoriaMetrics, run the following command:
```
make docker-single-up
```

## VictoriaMetrics
To shut down the docker compose environment for the single server, run the following command:
```
make docker-single-down
```

For the cluster version the command is the following:
```
make docker-cluster-up
```

To shut down the docker compose environment for the cluster version, run the following command:
```
make docker-cluster-down
```

## VictoriaMetrics single server

VictoriaMetrics will be accessible on the following ports:

@@ -14,6 +35,40 @@ VictoriaMetrics will be accessible on the following ports:

* `--opentsdbListenAddr=:4242`
* `--httpListenAddr=:8428`

The communication scheme between components is the following:
* [vmagent](#vmagent) sends scraped metrics to VictoriaMetrics;
* [grafana](#grafana) is configured with a datasource pointing to VictoriaMetrics;
* [vmalert](#vmalert) is configured to query VictoriaMetrics and to send alerts state
  and recording rules back to it;
* [alertmanager](#alertmanager) is configured to receive notifications from vmalert.

To access `vmalert` via VictoriaMetrics
use the link [http://localhost:8428/vmalert](http://localhost:8428/vmalert/).

To access [vmui](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#vmui)
use the link [http://localhost:8428/vmui](http://localhost:8428/vmui).
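As a quick smoke test of the single-server environment described above, the following Go sketch (not part of the repo) runs an instant query against the VictoriaMetrics container on the `--httpListenAddr=:8428` port listed earlier:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// VictoriaMetrics single server from the docker-compose environment (port 8428).
	base := "http://localhost:8428"
	params := url.Values{}
	params.Set("query", "up") // any PromQL/MetricsQL expression works here
	resp, err := http.Get(base + "/api/v1/query?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // Prometheus-compatible JSON response
}
```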
## VictoriaMetrics cluster

The VictoriaMetrics cluster environment consists of vminsert, vmstorage and vmselect components. vmselect
has exposed port `:8481`, vminsert has exposed port `:8480` and the rest of the components are available
only inside the environment.

The communication scheme between components is the following:
* [vmagent](#vmagent) sends scraped metrics to vminsert;
* vminsert forwards data to vmstorage;
* vmselect is connected to vmstorage for querying data;
* [grafana](#grafana) is configured with a datasource pointing to vmselect;
* [vmalert](#vmalert) is configured to query vmselect and to send alerts state
  and recording rules to vminsert;
* [alertmanager](#alertmanager) is configured to receive notifications from vmalert.

To access `vmalert` via `vmselect`
use the link [http://localhost:8481/select/0/prometheus/vmalert](http://localhost:8481/select/0/prometheus/vmalert/).

To access [vmui](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#vmui)
use the link [http://localhost:8481/select/0/prometheus/vmui](http://localhost:8481/select/0/prometheus/vmui).

## vmagent

vmagent is used for scraping and pushing timeseries to

@@ -48,6 +103,11 @@ Default credential:

Grafana is provisioned by default with the following entities:

* VictoriaMetrics datasource
* Prometheus datasource
* VictoriaMetrics overview dashboard
* `VictoriaMetrics` datasource
* `VictoriaMetrics - cluster` datasource
* `VictoriaMetrics overview` dashboard
* `VictoriaMetrics - cluster` dashboard
* `VictoriaMetrics - vmagent` dashboard
* `VictoriaMetrics - vmalert` dashboard

Remember to pick the `VictoriaMetrics - cluster` datasource when viewing the `VictoriaMetrics - cluster` dashboard.
deployment/docker/alerts-cluster.yml (new file, 199 lines)

@@ -0,0 +1,199 @@

# File contains default list of alerts for VictoriaMetrics cluster.
# The alerts below are just recommendations and may require some updates
# and threshold calibration according to every specific setup.
groups:
  # Alerts group for VM cluster assumes that Grafana dashboard
  # https://grafana.com/grafana/dashboards/11176 is installed.
  # Please, update the `dashboard` annotation according to your setup.
  - name: vmcluster
    interval: 30s
    concurrency: 2
    rules:
      - alert: DiskRunsOutOfSpaceIn3Days
        expr: |
          vm_free_disk_space_bytes / ignoring(path)
          (
            (
              rate(vm_rows_added_to_storage_total[1d]) -
              ignoring(type) rate(vm_deduplicated_samples_total{type="merge"}[1d])
            )
            * scalar(
              sum(vm_data_size_bytes{type!="indexdb"}) /
              sum(vm_rows{type!="indexdb"})
            )
          ) < 3 * 24 * 3600 > 0
        for: 30m
        labels:
          severity: critical
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=113&var-instance={{ $labels.instance }}"
          summary: "Instance {{ $labels.instance }} will run out of disk space in 3 days"
          description: "Taking into account current ingestion rate, free disk space will be enough only
            for {{ $value | humanizeDuration }} on instance {{ $labels.instance }}.\n
            Consider to limit the ingestion rate, decrease retention or scale the disk space up if possible."

      - alert: DiskRunsOutOfSpace
        expr: |
          sum(vm_data_size_bytes) by(instance) /
          (
            sum(vm_free_disk_space_bytes) by(instance) +
            sum(vm_data_size_bytes) by(instance)
          ) > 0.8
        for: 30m
        labels:
          severity: critical
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=110&var-instance={{ $labels.instance }}"
          summary: "Instance {{ $labels.instance }} will run out of disk space soon"
          description: "Disk utilisation on instance {{ $labels.instance }} is more than 80%.\n
            Having less than 20% of free disk space could cripple merges processes and overall performance.
            Consider to limit the ingestion rate, decrease retention or scale the disk space if possible."

      - alert: RequestErrorsToAPI
        expr: increase(vm_http_request_errors_total[5m]) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=52&var-instance={{ $labels.instance }}"
          summary: "Too many errors served for {{ $labels.job }} path {{ $labels.path }} (instance {{ $labels.instance }})"
          description: "Requests to path {{ $labels.path }} are receiving errors.
            Please verify if clients are sending correct requests."

      - alert: RPCErrors
        expr: |
          (
            sum(increase(vm_rpc_connection_errors_total[5m])) by(job, instance)
            +
            sum(increase(vm_rpc_dial_errors_total[5m])) by(job, instance)
            +
            sum(increase(vm_rpc_handshake_errors_total[5m])) by(job, instance)
          ) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=44&var-instance={{ $labels.instance }}"
          summary: "Too many RPC errors for {{ $labels.job }} (instance {{ $labels.instance }})"
          description: "RPC errors are interconnection errors between cluster components.\n
            Possible reasons for errors are misconfiguration, overload, network blips or unreachable components."

      - alert: ConcurrentFlushesHitTheLimit
        expr: avg_over_time(vm_concurrent_addrows_current[1m]) >= vm_concurrent_addrows_capacity
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=133&var-instance={{ $labels.instance }}"
          summary: "vmstorage on instance {{ $labels.instance }} is constantly hitting concurrent flushes limit"
          description: "The limit of concurrent flushes on instance {{ $labels.instance }} is equal to number of CPUs.\n
            When vmstorage constantly hits the limit it means that storage is overloaded and requires more CPU."

      - alert: TooManyLogs
        expr: sum(increase(vm_log_messages_total{level="error"}[5m])) by (job, instance) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=104&var-instance={{ $labels.instance }}"
          summary: "Too many logs printed for job \"{{ $labels.job }}\" ({{ $labels.instance }})"
          description: "Logging rate for job \"{{ $labels.job }}\" ({{ $labels.instance }}) is {{ $value }} for last 15m.\n
            Worth to check logs for specific error messages."

      - alert: RowsRejectedOnIngestion
        expr: sum(rate(vm_rows_ignored_total[5m])) by (instance, reason) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=135&var-instance={{ $labels.instance }}"
          summary: "Some rows are rejected on \"{{ $labels.instance }}\" on ingestion attempt"
          description: "VM is rejecting to ingest rows on \"{{ $labels.instance }}\" due to the
            following reason: \"{{ $labels.reason }}\""

      - alert: TooHighChurnRate
        expr: |
          (
            sum(rate(vm_new_timeseries_created_total[5m]))
            /
            sum(rate(vm_rows_inserted_total[5m]))
          ) > 0.1
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=102"
          summary: "Churn rate is more than 10% for the last 15m"
          description: "VM constantly creates new time series.\n
            This effect is known as Churn Rate.\n
            High Churn Rate tightly connected with database performance and may
            result in unexpected OOM's or slow queries."

      - alert: TooHighChurnRate24h
        expr: |
          sum(increase(vm_new_timeseries_created_total[24h]))
          >
          (sum(vm_cache_entries{type="storage/hour_metric_ids"})* 3)
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=102"
          summary: "Too high number of new series created over last 24h"
          description: "The number of created new time series over last 24h is 3x times higher than
            current number of active series.\n
            This effect is known as Churn Rate.\n
            High Churn Rate tightly connected with database performance and may
            result in unexpected OOM's or slow queries."

      - alert: TooHighSlowInsertsRate
        expr: |
          (
            sum(rate(vm_slow_row_inserts_total[5m]))
            /
            sum(rate(vm_rows_inserted_total[5m]))
          ) > 0.05
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=108"
          summary: "Percentage of slow inserts is more than 5% for the last 15m"
          description: "High rate of slow inserts may be a sign of resource exhaustion
            for the current load. It is likely more RAM is needed for optimal handling of the current number of active time series."

      - alert: ProcessNearFDLimits
        expr: (process_max_fds - process_open_fds) < 100
        for: 5m
        labels:
          severity: critical
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=117&var-instance={{ $labels.instance }}"
          summary: "Number of free file descriptors is less than 100 for \"{{ $labels.job }}\"(\"{{ $labels.instance }}\") for the last 5m"
          description: "Exhausting OS file descriptors limit can cause severe degradation of the process.
            Consider to increase the limit as fast as possible."

      - alert: LabelsLimitExceededOnIngestion
        expr: sum(increase(vm_metrics_with_dropped_labels_total[5m])) by (instance) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=116&var-instance={{ $labels.instance }}"
          summary: "Metrics ingested to vminsert on {{ $labels.instance }} are exceeding labels limit"
          description: "VictoriaMetrics limits the number of labels per each metric with `-maxLabelsPerTimeseries` command-line flag.\n
            This prevents from ingesting metrics with too many labels. Please verify that `-maxLabelsPerTimeseries` is configured
            correctly or that clients which send these metrics aren't misbehaving."

      - alert: VminsertVmstorageConnectionIsSaturated
        expr: rate(vm_rpc_send_duration_seconds_total[5m]) > 0.9
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=139&var-instance={{ $labels.instance }}"
          summary: "Connection between vminsert on {{ $labels.instance }} and vmstorage on {{ $labels.addr }} is saturated"
          description: "The connection between vminsert (instance {{ $labels.instance }}) and vmstorage (instance {{ $labels.addr }})
            is saturated by more than 90% and vminsert won't be able to keep up.\n
            This usually means that more vminsert or vmstorage nodes must be added to the cluster in order to increase
            the total number of vminsert -> vmstorage links."
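The DiskRunsOutOfSpaceIn3Days expression above boils down to simple arithmetic: free bytes divided by (net rows ingested per second times average bytes per row) gives the seconds left until the disk fills. A small Go sketch with purely illustrative numbers (they are assumptions, not values from the repo) shows the shape of that calculation:

```go
package main

import "fmt"

func main() {
	// Illustrative numbers only; they mirror the terms in the alert expression above.
	freeDiskBytes := 50e9  // vm_free_disk_space_bytes
	rowsPerSec := 100000.0 // rate(vm_rows_added_to_storage_total) minus deduplicated samples
	bytesPerRow := 0.8     // sum(vm_data_size_bytes) / sum(vm_rows), excluding indexdb
	secondsLeft := freeDiskBytes / (rowsPerSec * bytesPerRow)
	fmt.Printf("disk full in roughly %.1f days\n", secondsLeft/(24*3600))
	fmt.Println("alert would fire (below 3 days):", secondsLeft < 3*24*3600)
}
```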
deployment/docker/alerts-health.yml (new file, 54 lines)

@@ -0,0 +1,54 @@

# File contains default list of alerts for VM components.
# The alerts below are just recommendations and may require some updates
# and threshold calibration according to every specific setup.
groups:
  - name: vm-health
    # note the `job` filter and update accordingly to your setup
    rules:
      - alert: TooManyRestarts
        expr: changes(process_start_time_seconds{job=~"victoriametrics|vmselect|vminsert|vmstorage|vmagent|vmalert"}[15m]) > 2
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.job }} too many restarts (instance {{ $labels.instance }})"
          description: "Job {{ $labels.job }} (instance {{ $labels.instance }}) has restarted more than twice in the last 15 minutes.
            It might be crashlooping."

      - alert: ServiceDown
        expr: up{job=~"victoriametrics|vmselect|vminsert|vmstorage|vmagent|vmalert"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Service {{ $labels.job }} is down on {{ $labels.instance }}"
          description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 2 minutes."

      - alert: ProcessNearFDLimits
        expr: (process_max_fds - process_open_fds) < 100
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Number of free file descriptors is less than 100 for \"{{ $labels.job }}\"(\"{{ $labels.instance }}\") for the last 5m"
          description: "Exhausting OS file descriptors limit can cause severe degradation of the process.
            Consider to increase the limit as fast as possible."

      - alert: TooHighMemoryUsage
        expr: (process_resident_memory_anon_bytes / vm_available_memory_bytes) > 0.9
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "It is more than 90% of memory used by \"{{ $labels.job }}\"(\"{{ $labels.instance }}\") during the last 5m"
          description: "Too high memory usage may result into multiple issues such as OOMs or degraded performance.
            Consider to either increase available memory or decrease the load on the process."

      - alert: TooHighCPUUsage
        expr: rate(process_cpu_seconds_total[5m]) / process_cpu_cores_available > 0.9
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "More than 90% of CPU is used by \"{{ $labels.job }}\"(\"{{ $labels.instance }}\") during the last 5m"
          description: "Too high CPU usage may be a sign of insufficient resources and make process unstable.
            Consider to either increase available CPU resources or decrease the load on the process."
deployment/docker/alerts-vmagent.yml (new file, 122 lines)

@@ -0,0 +1,122 @@

# File contains default list of alerts for vmagent service.
# The alerts below are just recommendations and may require some updates
# and threshold calibration according to every specific setup.
groups:
  # Alerts group for vmagent assumes that Grafana dashboard
  # https://grafana.com/grafana/dashboards/12683 is installed.
  # Pls update the `dashboard` annotation according to your setup.
  - name: vmagent
    interval: 30s
    concurrency: 2
    rules:
      - alert: PersistentQueueIsDroppingData
        expr: sum(increase(vm_persistentqueue_bytes_dropped_total[5m])) by (job, instance) > 0
        for: 10m
        labels:
          severity: critical
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=49&var-instance={{ $labels.instance }}"
          summary: "Instance {{ $labels.instance }} is dropping data from persistent queue"
          description: "Vmagent dropped {{ $value | humanize1024 }} from persistent queue
            on instance {{ $labels.instance }} for the last 10m."

      - alert: RejectedRemoteWriteDataBlocksAreDropped
        expr: sum(increase(vmagent_remotewrite_packets_dropped_total[5m])) by (job, instance) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=79&var-instance={{ $labels.instance }}"
          summary: "Job \"{{ $labels.job }}\" on instance {{ $labels.instance }} drops the rejected by
            remote-write server data blocks. Check the logs to find the reason for rejects."

      - alert: TooManyScrapeErrors
        expr: sum(increase(vm_promscrape_scrapes_failed_total[5m])) by (job, instance) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=31&var-instance={{ $labels.instance }}"
          summary: "Job \"{{ $labels.job }}\" on instance {{ $labels.instance }} fails to scrape targets for last 15m"

      - alert: TooManyWriteErrors
        expr: |
          (sum(increase(vm_ingestserver_request_errors_total[5m])) by (job, instance)
          +
          sum(increase(vmagent_http_request_errors_total[5m])) by (job, instance)) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=77&var-instance={{ $labels.instance }}"
          summary: "Job \"{{ $labels.job }}\" on instance {{ $labels.instance }} responds with errors to write requests for last 15m."

      - alert: TooManyRemoteWriteErrors
        expr: sum(rate(vmagent_remotewrite_retries_count_total[5m])) by(job, instance, url) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=61&var-instance={{ $labels.instance }}"
          summary: "Job \"{{ $labels.job }}\" on instance {{ $labels.instance }} fails to push to remote storage"
          description: "Vmagent fails to push data via remote write protocol to destination \"{{ $labels.url }}\"\n
            Ensure that destination is up and reachable."

      - alert: RemoteWriteConnectionIsSaturated
        expr: |
          sum(rate(vmagent_remotewrite_send_duration_seconds_total[5m])) by(job, instance, url)
          > 0.9 * max(vmagent_remotewrite_queues) by(job, instance, url)
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=84&var-instance={{ $labels.instance }}"
          summary: "Remote write connection from \"{{ $labels.job }}\" (instance {{ $labels.instance }}) to {{ $labels.url }} is saturated"
          description: "The remote write connection between vmagent \"{{ $labels.job }}\" (instance {{ $labels.instance }}) and destination \"{{ $labels.url }}\"
            is saturated by more than 90% and vmagent won't be able to keep up.\n
            This usually means that `-remoteWrite.queues` command-line flag must be increased in order to increase
            the number of connections per each remote storage."

      - alert: PersistentQueueForWritesIsSaturated
        expr: rate(vm_persistentqueue_write_duration_seconds_total[5m]) > 0.9
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=98&var-instance={{ $labels.instance }}"
          summary: "Persistent queue writes for instance {{ $labels.instance }} are saturated"
          description: "Persistent queue writes for vmagent \"{{ $labels.job }}\" (instance {{ $labels.instance }})
            are saturated by more than 90% and vmagent won't be able to keep up with flushing data on disk.
            In this case, consider to decrease load on the vmagent or improve the disk throughput."

      - alert: PersistentQueueForReadsIsSaturated
        expr: rate(vm_persistentqueue_read_duration_seconds_total[5m]) > 0.9
        for: 15m
        labels:
          severity: warning
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=99&var-instance={{ $labels.instance }}"
          summary: "Persistent queue reads for instance {{ $labels.instance }} are saturated"
          description: "Persistent queue reads for vmagent \"{{ $labels.job }}\" (instance {{ $labels.instance }})
            are saturated by more than 90% and vmagent won't be able to keep up with reading data from the disk.
            In this case, consider to decrease load on the vmagent or improve the disk throughput."

      - alert: SeriesLimitHourReached
        expr: (vmagent_hourly_series_limit_current_series / vmagent_hourly_series_limit_max_series) > 0.9
        labels:
          severity: critical
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=88&var-instance={{ $labels.instance }}"
          summary: "Instance {{ $labels.instance }} reached 90% of the limit"
          description: "Max series limit set via -remoteWrite.maxHourlySeries flag is close to reaching the max value.
            Then samples for new time series will be dropped instead of sending them to remote storage systems."

      - alert: SeriesLimitDayReached
        expr: (vmagent_daily_series_limit_current_series / vmagent_daily_series_limit_max_series) > 0.9
        labels:
          severity: critical
        annotations:
          dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=90&var-instance={{ $labels.instance }}"
          summary: "Instance {{ $labels.instance }} reached 90% of the limit"
          description: "Max series limit set via -remoteWrite.maxDailySeries flag is close to reaching the max value.
            Then samples for new time series will be dropped instead of sending them to remote storage systems."
@ -1,60 +1,7 @@
|
|||
# File contains default list of alerts for vm-single and vmagent services.
|
||||
# File contains default list of alerts for VictoriaMetrics single server.
|
||||
# The alerts below are just recommendations and may require some updates
|
||||
# and threshold calibration according to every specific setup.
|
||||
groups:
|
||||
- name: vm-health
|
||||
# note the `job` filter and update accordingly to your setup
|
||||
rules:
|
||||
# note the `job` filter and update accordingly to your setup
|
||||
- alert: TooManyRestarts
|
||||
expr: changes(process_start_time_seconds{job=~"victoriametrics|vmagent|vmalert"}[15m]) > 2
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "{{ $labels.job }} too many restarts (instance {{ $labels.instance }})"
|
||||
description: "Job {{ $labels.job }} has restarted more than twice in the last 15 minutes.
|
||||
It might be crashlooping."
|
||||
|
||||
- alert: ServiceDown
|
||||
expr: up{job=~"victoriametrics|vmagent|vmalert"} == 0
|
||||
for: 2m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "Service {{ $labels.job }} is down on {{ $labels.instance }}"
|
||||
description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 2 minutes."
|
||||
|
||||
- alert: ProcessNearFDLimits
|
||||
expr: (process_max_fds - process_open_fds) < 100
|
||||
for: 5m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "Number of free file descriptors is less than 100 for \"{{ $labels.job }}\"(\"{{ $labels.instance }}\") for the last 5m"
|
||||
description: "Exhausting OS file descriptors limit can cause severe degradation of the process.
|
||||
Consider to increase the limit as fast as possible."
|
||||
|
||||
- alert: TooHighMemoryUsage
|
||||
expr: (process_resident_memory_anon_bytes / vm_available_memory_bytes) > 0.9
|
||||
for: 5m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "It is more than 90% of memory used by \"{{ $labels.job }}\"(\"{{ $labels.instance }}\") during the last 5m"
|
||||
description: "Too high memory usage may result into multiple issues such as OOMs or degraded performance.
|
||||
Consider to either increase available memory or decrease the load on the process."
|
||||
|
||||
- alert: TooHighCPUUsage
|
||||
expr: rate(process_cpu_seconds_total[5m]) / process_cpu_cores_available > 0.9
|
||||
for: 5m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "More than 90% of CPU is used by \"{{ $labels.job }}\"(\"{{ $labels.instance }}\") during the last 5m"
|
||||
description: "Too high CPU usage may be a sign of insufficient resources and make process unstable.
|
||||
Consider to either increase available CPU resources or decrease the load on the process."
|
||||
|
||||
|
||||
# Alerts group for VM single assumes that Grafana dashboard
|
||||
# https://grafana.com/grafana/dashboards/10229 is installed.
|
||||
# Pls update the `dashboard` annotation according to your setup.
|
||||
|
@ -207,123 +154,4 @@ groups:
|
|||
summary: "Metrics ingested in ({{ $labels.instance }}) are exceeding labels limit"
|
||||
description: "VictoriaMetrics limits the number of labels per each metric with `-maxLabelsPerTimeseries` command-line flag.\n
|
||||
This prevents from ingesting metrics with too many labels. Please verify that `-maxLabelsPerTimeseries` is configured
|
||||
correctly or that clients which send these metrics aren't misbehaving."
|
||||
|
||||
# Alerts group for vmagent assumes that Grafana dashboard
|
||||
# https://grafana.com/grafana/dashboards/12683 is installed.
|
||||
# Pls update the `dashboard` annotation according to your setup.
|
||||
- name: vmagent
|
||||
interval: 30s
|
||||
concurrency: 2
|
||||
rules:
|
||||
- alert: PersistentQueueIsDroppingData
|
||||
expr: sum(increase(vm_persistentqueue_bytes_dropped_total[5m])) by (job, instance) > 0
|
||||
for: 10m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=49&var-instance={{ $labels.instance }}"
|
||||
summary: "Instance {{ $labels.instance }} is dropping data from persistent queue"
|
||||
description: "Vmagent dropped {{ $value | humanize1024 }} from persistent queue
|
||||
on instance {{ $labels.instance }} for the last 10m."
|
||||
|
||||
- alert: RejectedRemoteWriteDataBlocksAreDropped
|
||||
expr: sum(increase(vmagent_remotewrite_packets_dropped_total[5m])) by (job, instance) > 0
|
||||
for: 15m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=79&var-instance={{ $labels.instance }}"
|
||||
summary: "Job \"{{ $labels.job }}\" on instance {{ $labels.instance }} drops the rejected by
|
||||
remote-write server data blocks. Check the logs to find the reason for rejects."
|
||||
|
||||
- alert: TooManyScrapeErrors
|
||||
expr: sum(increase(vm_promscrape_scrapes_failed_total[5m])) by (job, instance) > 0
|
||||
for: 15m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=31&var-instance={{ $labels.instance }}"
|
||||
summary: "Job \"{{ $labels.job }}\" on instance {{ $labels.instance }} fails to scrape targets for last 15m"
|
||||
|
||||
- alert: TooManyWriteErrors
|
||||
expr: |
|
||||
(sum(increase(vm_ingestserver_request_errors_total[5m])) by (job, instance)
|
||||
+
|
||||
sum(increase(vmagent_http_request_errors_total[5m])) by (job, instance)) > 0
|
||||
for: 15m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=77&var-instance={{ $labels.instance }}"
|
||||
summary: "Job \"{{ $labels.job }}\" on instance {{ $labels.instance }} responds with errors to write requests for last 15m."
|
||||
|
||||
- alert: TooManyRemoteWriteErrors
|
||||
expr: sum(rate(vmagent_remotewrite_retries_count_total[5m])) by(job, instance, url) > 0
|
||||
for: 15m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=61&var-instance={{ $labels.instance }}"
|
||||
summary: "Job \"{{ $labels.job }}\" on instance {{ $labels.instance }} fails to push to remote storage"
|
||||
description: "Vmagent fails to push data via remote write protocol to destination \"{{ $labels.url }}\"\n
|
||||
Ensure that destination is up and reachable."
|
||||
|
||||
- alert: RemoteWriteConnectionIsSaturated
|
||||
expr: |
|
||||
sum(rate(vmagent_remotewrite_send_duration_seconds_total[5m])) by(job, instance, url)
|
||||
> 0.9 * max(vmagent_remotewrite_queues) by(job, instance, url)
|
||||
for: 15m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=84&var-instance={{ $labels.instance }}"
|
||||
summary: "Remote write connection from \"{{ $labels.job }}\" (instance {{ $labels.instance }}) to {{ $labels.url }} is saturated"
|
||||
description: "The remote write connection between vmagent \"{{ $labels.job }}\" (instance {{ $labels.instance }}) and destination \"{{ $labels.url }}\"
|
||||
is saturated by more than 90% and vmagent won't be able to keep up.\n
|
||||
This usually means that `-remoteWrite.queues` command-line flag must be increased in order to increase
|
||||
the number of connections per each remote storage."
|
||||
|
||||
- alert: PersistentQueueForWritesIsSaturated
|
||||
expr: rate(vm_persistentqueue_write_duration_seconds_total[5m]) > 0.9
|
||||
for: 15m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=98&var-instance={{ $labels.instance }}"
|
||||
summary: "Persistent queue writes for instance {{ $labels.instance }} are saturated"
|
||||
description: "Persistent queue writes for vmagent \"{{ $labels.job }}\" (instance {{ $labels.instance }})
|
||||
are saturated by more than 90% and vmagent won't be able to keep up with flushing data on disk.
|
||||
In this case, consider to decrease load on the vmagent or improve the disk throughput."
|
||||
|
||||
- alert: PersistentQueueForReadsIsSaturated
|
||||
expr: rate(vm_persistentqueue_read_duration_seconds_total[5m]) > 0.9
|
||||
for: 15m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=99&var-instance={{ $labels.instance }}"
|
||||
summary: "Persistent queue reads for instance {{ $labels.instance }} are saturated"
|
||||
description: "Persistent queue reads for vmagent \"{{ $labels.job }}\" (instance {{ $labels.instance }})
|
||||
are saturated by more than 90% and vmagent won't be able to keep up with reading data from the disk.
|
||||
In this case, consider decreasing the load on the vmagent or improving the disk throughput."
|
||||
|
||||
- alert: SeriesLimitHourReached
|
||||
expr: (vmagent_hourly_series_limit_current_series / vmagent_hourly_series_limit_max_series) > 0.9
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=88&var-instance={{ $labels.instance }}"
|
||||
summary: "Instance {{ $labels.instance }} reached 90% of the limit"
|
||||
description: "Max series limit set via -remoteWrite.maxHourlySeries flag is close to reaching the max value.
|
||||
Once the limit is reached, samples for new time series will be dropped instead of being sent to remote storage systems."
|
||||
|
||||
- alert: SeriesLimitDayReached
|
||||
expr: (vmagent_daily_series_limit_current_series / vmagent_daily_series_limit_max_series) > 0.9
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
dashboard: "http://localhost:3000/d/G7Z9GzMGz?viewPanel=90&var-instance={{ $labels.instance }}"
|
||||
summary: "Instance {{ $labels.instance }} reached 90% of the limit"
|
||||
description: "Max series limit set via -remoteWrite.maxDailySeries flag is close to reaching the max value.
|
||||
Once the limit is reached, samples for new time series will be dropped instead of being sent to remote storage systems."
|
||||
correctly or that clients which send these metrics aren't misbehaving."
|
121
deployment/docker/docker-compose-cluster.yml
Normal file
|
@ -0,0 +1,121 @@
|
|||
version: '3.5'
|
||||
services:
|
||||
vmagent:
|
||||
container_name: vmagent
|
||||
image: victoriametrics/vmagent:v1.81.2
|
||||
depends_on:
|
||||
- "vminsert"
|
||||
ports:
|
||||
- 8429:8429
|
||||
volumes:
|
||||
- vmagentdata:/vmagentdata
|
||||
- ./prometheus-cluster.yml:/etc/prometheus/prometheus.yml
|
||||
command:
|
||||
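# scrape the targets listed in prometheus-cluster.yml and forward the collected samples to vminsert (tenant 0)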
- '--promscrape.config=/etc/prometheus/prometheus.yml'
|
||||
- '--remoteWrite.url=http://vminsert:8480/insert/0/prometheus/'
|
||||
restart: always
|
||||
|
||||
grafana:
|
||||
container_name: grafana
|
||||
image: grafana/grafana:9.1.0
|
||||
depends_on:
|
||||
- "vmselect"
|
||||
ports:
|
||||
- 3000:3000
|
||||
restart: always
|
||||
volumes:
|
||||
- grafanadata:/var/lib/grafana
|
||||
- ./provisioning/:/etc/grafana/provisioning/
|
||||
- ./../../dashboards/victoriametrics-cluster.json:/var/lib/grafana/dashboards/vm.json
|
||||
- ./../../dashboards/vmagent.json:/var/lib/grafana/dashboards/vmagent.json
|
||||
- ./../../dashboards/vmalert.json:/var/lib/grafana/dashboards/vmalert.json
|
||||
|
||||
vmstorage-1:
|
||||
container_name: vmstorage-1
|
||||
image: victoriametrics/vmstorage:v1.81.2-cluster
|
||||
ports:
|
||||
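# 8482 - HTTP API and /metrics, 8400 - data ingestion from vminsert, 8401 - queries from vmselect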
- 8482
|
||||
- 8400
|
||||
- 8401
|
||||
volumes:
|
||||
- strgdata-1:/storage
|
||||
command:
|
||||
- '--storageDataPath=/storage'
|
||||
restart: always
|
||||
vmstorage-2:
|
||||
container_name: vmstorage-2
|
||||
image: victoriametrics/vmstorage:v1.81.2-cluster
|
||||
ports:
|
||||
- 8482
|
||||
- 8400
|
||||
- 8401
|
||||
volumes:
|
||||
- strgdata-2:/storage
|
||||
command:
|
||||
- '--storageDataPath=/storage'
|
||||
restart: always
|
||||
vminsert:
|
||||
container_name: vminsert
|
||||
image: victoriametrics/vminsert:v1.81.2-cluster
|
||||
depends_on:
|
||||
- "vmstorage-1"
|
||||
- "vmstorage-2"
|
||||
command:
|
||||
- '--storageNode=vmstorage-1:8400'
|
||||
- '--storageNode=vmstorage-2:8400'
|
||||
ports:
|
||||
- 8480:8480
|
||||
restart: always
|
||||
vmselect:
|
||||
container_name: vmselect
|
||||
image: victoriametrics/vmselect:v1.81.2-cluster
|
||||
depends_on:
|
||||
- "vmstorage-1"
|
||||
- "vmstorage-2"
|
||||
command:
|
||||
- '--storageNode=vmstorage-1:8401'
|
||||
- '--storageNode=vmstorage-2:8401'
|
||||
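# proxy vmalert API requests (e.g. /api/v1/rules) from vmselect to the vmalert container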
- '--vmalert.proxyURL=http://vmalert:8880'
|
||||
ports:
|
||||
- 8481:8481
|
||||
restart: always
|
||||
|
||||
vmalert:
|
||||
container_name: vmalert
|
||||
image: victoriametrics/vmalert:v1.81.2
|
||||
depends_on:
|
||||
- "vmselect"
|
||||
ports:
|
||||
- 8880:8880
|
||||
volumes:
|
||||
- ./alerts-cluster.yml:/etc/alerts/alerts.yml
|
||||
- ./alerts-health.yml:/etc/alerts/alerts-health.yml
|
||||
- ./alerts-vmagent.yml:/etc/alerts/alerts-vmagent.yml
|
||||
command:
|
||||
- '--datasource.url=http://vmselect:8481/select/0/prometheus'
|
||||
- '--remoteRead.url=http://vmselect:8481/select/0/prometheus'
|
||||
- '--remoteWrite.url=http://vminsert:8480/insert/0/prometheus'
|
||||
- '--notifier.url=http://alertmanager:9093/'
|
||||
- '--rule=/etc/alerts/*.yml'
|
||||
# display source of alerts in grafana
|
||||
- '-external.url=http://127.0.0.1:3000' #grafana outside container
|
||||
# when copy-pasting the line below, be aware of '$$' escaping in '$expr'
|
||||
- '--external.alert.source=explore?orgId=1&left=["now-1h","now","VictoriaMetrics",{"expr":"{{$$expr|quotesEscape|crlfEscape|queryEscape}}"},{"mode":"Metrics"},{"ui":[true,true,true,"none"]}]'
|
||||
restart: always
|
||||
|
||||
alertmanager:
|
||||
container_name: alertmanager
|
||||
image: prom/alertmanager:v0.24.0
|
||||
volumes:
|
||||
- ./alertmanager.yml:/config/alertmanager.yml
|
||||
command:
|
||||
- '--config.file=/config/alertmanager.yml'
|
||||
ports:
|
||||
- 9093:9093
|
||||
restart: always
|
||||
|
||||
volumes:
|
||||
vmagentdata: {}
|
||||
strgdata-1: {}
|
||||
strgdata-2: {}
|
||||
grafanadata: {}
|
|
@ -2,7 +2,7 @@ version: "3.5"
|
|||
services:
|
||||
vmagent:
|
||||
container_name: vmagent
|
||||
image: victoriametrics/vmagent:v1.80.0
|
||||
image: victoriametrics/vmagent:v1.81.2
|
||||
depends_on:
|
||||
- "victoriametrics"
|
||||
ports:
|
||||
|
@ -18,7 +18,7 @@ services:
|
|||
restart: always
|
||||
victoriametrics:
|
||||
container_name: victoriametrics
|
||||
image: victoriametrics/victoria-metrics:v1.80.0
|
||||
image: victoriametrics/victoria-metrics:v1.81.2
|
||||
ports:
|
||||
- 8428:8428
|
||||
- 8089:8089
|
||||
|
@ -56,7 +56,7 @@ services:
|
|||
restart: always
|
||||
vmalert:
|
||||
container_name: vmalert
|
||||
image: victoriametrics/vmalert:v1.80.0
|
||||
image: victoriametrics/vmalert:v1.81.2
|
||||
depends_on:
|
||||
- "victoriametrics"
|
||||
- "alertmanager"
|
||||
|
@ -64,6 +64,8 @@ services:
|
|||
- 8880:8880
|
||||
volumes:
|
||||
- ./alerts.yml:/etc/alerts/alerts.yml
|
||||
- ./alerts-health.yml:/etc/alerts/alerts-health.yml
|
||||
- ./alerts-vmagent.yml:/etc/alerts/alerts-vmagent.yml
|
||||
command:
|
||||
- "--datasource.url=http://victoriametrics:8428/"
|
||||
- "--remoteRead.url=http://victoriametrics:8428/"
|
||||
|
@ -72,7 +74,8 @@ services:
|
|||
- "--rule=/etc/alerts/*.yml"
|
||||
# display source of alerts in grafana
|
||||
- "--external.url=http://127.0.0.1:3000" #grafana outside container
|
||||
- '--external.alert.source=explore?orgId=1&left=["now-1h","now","VictoriaMetrics",{"expr":"{{$$expr|quotesEscape|crlfEscape|queryEscape}}"},{"mode":"Metrics"},{"ui":[true,true,true,"none"]}]' ## when copypaste the line be aware of '$$' for escaping in '$expr'
|
||||
# when copy-pasting the line below, be aware of '$$' escaping in '$expr'
|
||||
- '--external.alert.source=explore?orgId=1&left=["now-1h","now","VictoriaMetrics",{"expr":"{{$$expr|quotesEscape|crlfEscape|queryEscape}}"},{"mode":"Metrics"},{"ui":[true,true,true,"none"]}]'
|
||||
networks:
|
||||
- vm_net
|
||||
restart: always
|
||||
|
|
19
deployment/docker/prometheus-cluster.yml
Normal file
|
@ -0,0 +1,19 @@
|
|||
global:
|
||||
scrape_interval: 10s
|
||||
|
||||
scrape_configs:
|
||||
- job_name: 'vmagent'
|
||||
static_configs:
|
||||
- targets: ['vmagent:8429']
|
||||
- job_name: 'vmalert'
|
||||
static_configs:
|
||||
- targets: ['vmalert:8880']
|
||||
- job_name: 'vminsert'
|
||||
static_configs:
|
||||
- targets: ['vminsert:8480']
|
||||
- job_name: 'vmselect'
|
||||
static_configs:
|
||||
- targets: ['vmselect:8481']
|
||||
- job_name: 'vmstorage'
|
||||
static_configs:
|
||||
- targets: ['vmstorage-1:8482', 'vmstorage-2:8482']
|
|
@ -6,3 +6,9 @@ datasources:
|
|||
access: proxy
|
||||
url: http://victoriametrics:8428
|
||||
isDefault: true
|
||||
|
||||
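# additional datasource that queries the cluster via vmselect (tenant 0)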
- name: VictoriaMetrics - cluster
|
||||
type: prometheus
|
||||
access: proxy
|
||||
url: http://vmselect:8481/select/0/prometheus
|
||||
isDefault: false
|
|
@ -57,6 +57,7 @@ See also [case studies](https://docs.victoriametrics.com/CaseStudies.html).
|
|||
* [VictoriaMetrics Source Code Analysis - Bloom filter](https://www.sobyte.net/post/2022-05/victoriametrics-bloomfilter/)
|
||||
* [How we tried using VictoriaMetrics and Thanos at the same time](https://habr.com/ru/company/sravni/blog/672908/)
|
||||
* [Prometheus, Grafana, and Kubernetes, Oh My!](https://www.groundcover.com/blog/prometheus-grafana-kubernetes)
|
||||
* [Explaining modern server monitoring stacks for self-hosting](https://dataswamp.org/~solene/2022-09-11-exploring-monitoring-stacks.html)
|
||||
|
||||
## Our articles
|
||||
|
||||
|
@ -93,6 +94,7 @@ See also [case studies](https://docs.victoriametrics.com/CaseStudies.html).
|
|||
|
||||
### Tutorials, guides and how-to articles
|
||||
|
||||
* [Monitoring VictoriaMetrics](https://victoriametrics.com/blog/victoriametrics-monitoring/)
|
||||
* [PromQL tutorial for beginners and humans](https://valyala.medium.com/promql-tutorial-for-beginners-9ab455142085)
|
||||
* [How to optimize PromQL and MetricsQL queries](https://valyala.medium.com/how-to-optimize-promql-and-metricsql-queries-85a1b75bf986)
|
||||
* [Analyzing Prometheus data with external tools](https://valyala.medium.com/analyzing-prometheus-data-with-external-tools-5f3e5e147639)
|
||||
|
|
|
@ -15,8 +15,13 @@ The following tip changes can be tested by building VictoriaMetrics components f
|
|||
|
||||
## tip
|
||||
|
||||
**Update note:** this release changes data format for [/api/v1/export/native](https://docs.victoriametrics.com/#how-to-export-data-in-native-format) in incompatible way, so it cannot be imported into older version of VictoriaMetrics via [/api/v1/import/native](https://docs.victoriametrics.com/#how-to-import-data-in-native-format).
|
||||
**Update note 1:** this release changes the data format for [/api/v1/export/native](https://docs.victoriametrics.com/#how-to-export-data-in-native-format) in an incompatible way, so it cannot be imported into older versions of VictoriaMetrics via [/api/v1/import/native](https://docs.victoriametrics.com/#how-to-import-data-in-native-format).
|
||||
|
||||
**Update note 2:** [vmalert](https://docs.victoriametrics.com/vmalert.html) changes the default value for the `-datasource.queryStep` command-line flag from `0s` to `5m`. The change is supposed to improve the reliability of rules evaluation when the evaluation interval is lower than the scraping interval. Pass `-datasource.queryStep=0s` to vmalert in order to keep using the group's evaluation interval as the step.
|
||||
|
||||
* FEATURE: sanitize metric names for data ingested via [DataDog protocol](https://docs.victoriametrics.com/#how-to-send-data-from-datadog-agent) according to [DataDog metric naming](https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics). The behaviour can be disabled by passing `-datadog.sanitizeMetricName=false` command-line flag. Thanks to @PerGon for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3105).
|
||||
* FEATURE: add `-usePromCompatibleNaming` command-line flag to [vmagent](https://docs.victoriametrics.com/vmagent.html), to single-node VictoriaMetrics and to `vminsert` component of VictoriaMetrics cluster. This flag can be used for normalizing the ingested metric names and label names to [Prometheus-compatible form](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels). If this flag is set, then all the chars unsupported by Prometheus are replaced with `_` chars in metric names and labels of the ingested samples. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3113).
|
||||
* FEATURE: accept whitespace in metric names and tags ingested via [Graphite plaintext protocol](https://docs.victoriametrics.com/#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) according to [the specs](https://graphite.readthedocs.io/en/latest/tags.html). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3102).
|
||||
* FEATURE: check the correctness of raw sample timestamps stored on disk when reading them. This reduces the probability of silent corruption of the data stored on disk. This should help [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2998) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3011).
|
||||
* FEATURE: atomically delete directories with snapshots, parts and partitions at [storage level](https://docs.victoriametrics.com/#storage). Previously such directories could be left in a partially deleted state when the deletion operation was interrupted by an unclean shutdown. This may result in a `cannot open file ...: no such file or directory` error on the next start. The probability of this error was quite high when NFS or EFS was used as persistent storage for VictoriaMetrics data. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3038).
|
||||
* FEATURE: set the `start` arg to `end - 5 minutes` if it isn't passed explicitly to [/api/v1/labels](https://docs.victoriametrics.com/url-examples.html#apiv1labels) and [/api/v1/label/.../values](https://docs.victoriametrics.com/url-examples.html#apiv1labelvalues). See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3052).
|
||||
|
@ -26,12 +31,15 @@ The following tip changes can be tested by building VictoriaMetrics components f
|
|||
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): add experimental feature for displaying last 10 states of the rule (recording or alerting) evaluation. The state is available on the Rule page, which can be opened by clicking on `Details` link next to Rule's name on the `/groups` page.
|
||||
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): minimize the time needed for reading large responses from scrape targets in [stream parsing mode](https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode). This should reduce scrape durations for such targets as [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) running in a big Kubernetes cluster.
|
||||
* FEATURE: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): add [sort_by_label_numeric](https://docs.victoriametrics.com/MetricsQL.html#sort_by_label_numeric) and [sort_by_label_numeric_desc](https://docs.victoriametrics.com/MetricsQL.html#sort_by_label_numeric_desc) functions for [numeric sort](https://www.gnu.org/software/coreutils/manual/html_node/Version-sort-is-not-the-same-as-numeric-sort.html) of input time series by the specified labels. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2938).
|
||||
* FEATURE: [vmbackup](https://docs.victoriametrics.com/vmbackup.html) and [vmrestore](https://docs.victoriametrics.com/vmrestore.html): retry GCS operations for up to 3 minutes on temporary failures. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3147).
|
||||
|
||||
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly calculate `rate_over_sum(m[d])` as `sum_over_time(m[d])/d`. Previously the `sum_over_time(m[d])` could be improperly divided by a time range smaller than `d`. See [rate_over_sum() docs](https://docs.victoriametrics.com/MetricsQL.html#rate_over_sum) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3045).
|
||||
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): properly calculate `increase(m[d])` over slow-changing counters with values smaller than 100. Previously [increase](https://docs.victoriametrics.com/MetricsQL.html#increase) could return unexpectedly big results in this case. See [the related issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/962) and [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3163).
|
||||
* BUGFIX: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): properly calculate query results at `vmselect`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3067). The issue has been introduced in [v1.81.0](https://docs.victoriametrics.com/CHANGELOG.html#v1810).
|
||||
* BUGFIX: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): log a clear error when multiple identical `-storageNode` command-line flags are passed to `vmselect` or to `vminsert`. Previously these components crashed with the cryptic panic `metric ... is already registered` in this case. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3076).
|
||||
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): fix `RangeError: Maximum call stack size exceeded` error when the query returns too many data points at `Table` view. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3092/files).
|
||||
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): re-evaluate annotations on each alert evaluation. Previously, annotations were evaluated only on the alert's value change. This could result in stale annotations in some cases described in [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3119).
|
||||
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): change the default value for the `-datasource.queryStep` command-line flag from `0s` to `5m`. The `step` param is added by vmalert to every rule evaluation request sent to the datasource. Before this change, `step` was equal to the group's evaluation interval by default. The `step` param for instant queries defines how far VictoriaMetrics can look back for the last written data point. The change is supposed to improve the reliability of rules evaluation when the evaluation interval is lower than the scraping interval.
|
||||
|
||||
## [v1.81.2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.81.2)
|
||||
|
||||
|
|
|
@ -144,7 +144,7 @@ Ports may be altered by setting `-httpListenAddr` on the corresponding nodes.
|
|||
It is recommended setting up [monitoring](#monitoring) for the cluster.
|
||||
|
||||
The following tools can simplify cluster setup:
|
||||
- [An example docker-compose config for VictoriaMetrics cluster](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/cluster/deployment/docker/docker-compose.yml)
|
||||
- [An example docker-compose config for VictoriaMetrics cluster](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/docker-compose-cluster.yml)
|
||||
- [Helm charts for VictoriaMetrics](https://github.com/VictoriaMetrics/helm-charts)
|
||||
- [Kubernetes operator for VictoriaMetrics](https://github.com/VictoriaMetrics/operator)
|
||||
|
||||
|
@ -193,6 +193,7 @@ with [the official Grafana dashboard for VictoriaMetrics cluster](https://grafan
|
|||
or [an alternative dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11831). Graphs on these dashboards contain useful hints - hover the `i` icon at the top left corner of each graph in order to read it.
|
||||
|
||||
It is recommended setting up alerts in [vmalert](https://docs.victoriametrics.com/vmalert.html) or in Prometheus from [this config](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/cluster/deployment/docker/alerts.yml).
|
||||
See more details in the article [VictoriaMetrics Monitoring](https://victoriametrics.com/blog/victoriametrics-monitoring/).
|
||||
|
||||
## Cardinality limiter
|
||||
|
||||
|
@ -638,6 +639,8 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
-datadog.maxInsertRequestSize size
|
||||
The maximum size in bytes of a single DataDog POST request to /api/v1/series
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 67108864)
|
||||
-datadog.sanitizeMetricName
|
||||
Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics (default true)
|
||||
-denyQueryTracing
|
||||
Whether to disable the ability to trace queries. See https://docs.victoriametrics.com/#query-tracing
|
||||
-disableRerouting
|
||||
|
@ -774,6 +777,8 @@ Below is the output for `/path/to/vminsert -help`:
|
|||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-usePromCompatibleNaming
|
||||
Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
-vmstorageDialTimeout duration
|
||||
|
|
|
@ -58,21 +58,22 @@ There is also [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster
|
|||
### Starting VM-Cluster via Docker
|
||||
|
||||
The following commands clone the latest available
|
||||
[VictoriaMetrics cluster repository](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/cluster)
|
||||
and start the docker container via 'docker-compose'. Further customization is possible by editing
|
||||
the [docker-compose.yaml](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/cluster/deployment/docker/docker-compose.yml)
|
||||
[VictoriaMetrics repository](https://github.com/VictoriaMetrics/VictoriaMetrics)
|
||||
and start the docker containers via 'make docker-cluster-up'. Further customization is possible by editing
|
||||
the [docker-compose-cluster.yml](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/docker-compose-cluster.yml)
|
||||
file.
|
||||
|
||||
<div class="with-copy" markdown="1">
|
||||
|
||||
```console
|
||||
git clone https://github.com/VictoriaMetrics/VictoriaMetrics --branch cluster &&
|
||||
cd VictoriaMetrics/deployment/docker &&
|
||||
docker-compose up
|
||||
git clone https://github.com/VictoriaMetrics/VictoriaMetrics &&
|
||||
make docker-cluster-up
|
||||
```
|
||||
|
||||
</div>
|
||||
|
||||
See more details [here](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker#readme).
|
||||
|
||||
* [Cluster setup](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-setup)
|
||||
|
||||
## Write data
|
||||
|
@ -140,10 +141,11 @@ The list of alerts for [single](https://github.com/VictoriaMetrics/VictoriaMetri
|
|||
and [cluster](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/cluster/deployment/docker/alerts.yml)
|
||||
versions would also help to identify and notify about issues with the system.
|
||||
|
||||
The rule of the thumb is to have a separate installation of VictoriaMetrics or any other monitoring system
|
||||
The rule of thumb is to have a separate installation of VictoriaMetrics or any other monitoring system
|
||||
to monitor the production installation of VictoriaMetrics. This would make monitoring independent and
|
||||
would help identify problems with the main monitoring installation.
|
||||
|
||||
See more details in the article [VictoriaMetrics Monitoring](https://victoriametrics.com/blog/victoriametrics-monitoring/).
|
||||
|
||||
### Capacity planning
|
||||
|
||||
|
|
|
@ -432,6 +432,10 @@ This command should return the following output if everything is OK:
|
|||
{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
|
||||
```
|
||||
|
||||
VictoriaMetrics automatically sanitizes metric names for the data ingested via DataDog protocol
|
||||
according to [DataDog metric naming recommendations](https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics).
|
||||
If you need to accept metric names as is, without sanitizing, then pass the `-datadog.sanitizeMetricName=false` command-line flag to VictoriaMetrics.
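For illustration, below is a minimal, self-contained Go sketch mirroring the regexp-based sanitization rules introduced in this changeset (the helper and variable names here are illustrative only, not the exact code shipped with VictoriaMetrics): characters outside `[0-9a-zA-Z_.]` are replaced with `_`, runs of underscores are collapsed, and underscores adjacent to a dot are dropped.

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	unsupportedChars = regexp.MustCompile(`[^0-9a-zA-Z_\.]+`) // anything outside [0-9a-zA-Z_.] becomes '_'
	multiUnderscores = regexp.MustCompile(`_+`)               // collapse runs of underscores into one
	underscoreDots   = regexp.MustCompile(`_?\._?`)           // drop underscores that touch a dot
)

// sanitize mimics the DataDog-compatible metric name sanitization described above.
func sanitize(s string) string {
	s = unsupportedChars.ReplaceAllString(s, "_")
	s = multiUnderscores.ReplaceAllString(s, "_")
	return underscoreDots.ReplaceAllString(s, ".")
}

func main() {
	fmt.Println(sanitize("system.load!!1"))      // system.load_1
	fmt.Println(sanitize("requests__total.err")) // requests_total.err
}
```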
|
||||
|
||||
Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
|
||||
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
|
||||
|
||||
|
@ -465,6 +469,9 @@ VictoriaMetrics performs the following transformations to the ingested InfluxDB
|
|||
* Field names are mapped to time series names prefixed with `{measurement}{separator}` value, where `{separator}` equals to `_` by default. It can be changed with `-influxMeasurementFieldSeparator` command-line flag. See also `-influxSkipSingleField` command-line flag. If `{measurement}` is empty or if `-influxSkipMeasurement` command-line flag is set, then time series names correspond to field names.
|
||||
* Field values are mapped to time series values.
|
||||
* Tags are mapped to Prometheus labels as-is.
|
||||
* If `-usePromCompatibleNaming` command-line flag is set, then all the metric names and label names
|
||||
are normalized to [Prometheus-compatible naming](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels) by replacing unsupported chars with `_`.
|
||||
For example, `foo.bar-baz/1` metric name or label name is substituted with `foo_bar_baz_1`.
|
||||
|
||||
For example, the following InfluxDB line:
|
||||
|
||||
|
@ -1520,7 +1527,8 @@ VictoriaMetrics exposes currently running queries and their execution times at `
|
|||
|
||||
VictoriaMetrics exposes queries, which take the most time to execute, at `/api/v1/status/top_queries` page.
|
||||
|
||||
See also [troubleshooting docs](https://docs.victoriametrics.com/Troubleshooting.html).
|
||||
See also [VictoriaMetrics Monitoring](https://victoriametrics.com/blog/victoriametrics-monitoring/)
|
||||
and [troubleshooting docs](https://docs.victoriametrics.com/Troubleshooting.html).
|
||||
|
||||
## TSDB stats
|
||||
|
||||
|
@ -1993,6 +2001,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-datadog.maxInsertRequestSize size
|
||||
The maximum size in bytes of a single DataDog POST request to /api/v1/series
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 67108864)
|
||||
-datadog.sanitizeMetricName
|
||||
Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics (default true)
|
||||
-dedup.minScrapeInterval duration
|
||||
Leave only the last sample in every time series per each discrete interval equal to -dedup.minScrapeInterval > 0. See https://docs.victoriametrics.com/#deduplication and https://docs.victoriametrics.com/#downsampling
|
||||
-deleteAuthKey string
|
||||
|
@ -2332,6 +2342,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-usePromCompatibleNaming
|
||||
Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
-vmalert.proxyURL string
|
||||
|
|
|
@ -436,6 +436,10 @@ This command should return the following output if everything is OK:
|
|||
{"metric":{"__name__":"system.load.1","environment":"test","host":"test.example.com"},"values":[0.5],"timestamps":[1632833641000]}
|
||||
```
|
||||
|
||||
VictoriaMetrics automatically sanitizes metric names for the data ingested via DataDog protocol
|
||||
according to [DataDog metric naming recommendations](https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics).
|
||||
If you need to accept metric names as is, without sanitizing, then pass the `-datadog.sanitizeMetricName=false` command-line flag to VictoriaMetrics.
|
||||
|
||||
Extra labels may be added to all the written time series by passing `extra_label=name=value` query args.
|
||||
For example, `/datadog/api/v1/series?extra_label=foo=bar` would add `{foo="bar"}` label to all the ingested metrics.
|
||||
|
||||
|
@ -469,6 +473,9 @@ VictoriaMetrics performs the following transformations to the ingested InfluxDB
|
|||
* Field names are mapped to time series names prefixed with `{measurement}{separator}` value, where `{separator}` equals to `_` by default. It can be changed with `-influxMeasurementFieldSeparator` command-line flag. See also `-influxSkipSingleField` command-line flag. If `{measurement}` is empty or if `-influxSkipMeasurement` command-line flag is set, then time series names correspond to field names.
|
||||
* Field values are mapped to time series values.
|
||||
* Tags are mapped to Prometheus labels as-is.
|
||||
* If `-usePromCompatibleNaming` command-line flag is set, then all the metric names and label names
|
||||
are normalized to [Prometheus-compatible naming](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels) by replacing unsupported chars with `_`.
|
||||
For example, `foo.bar-baz/1` metric name or label name is substituted with `foo_bar_baz_1`.
|
||||
|
||||
For example, the following InfluxDB line:
|
||||
|
||||
|
@ -1524,7 +1531,8 @@ VictoriaMetrics exposes currently running queries and their execution times at `
|
|||
|
||||
VictoriaMetrics exposes queries, which take the most time to execute, at `/api/v1/status/top_queries` page.
|
||||
|
||||
See also [troubleshooting docs](https://docs.victoriametrics.com/Troubleshooting.html).
|
||||
See also [VictoriaMetrics Monitoring](https://victoriametrics.com/blog/victoriametrics-monitoring/)
|
||||
and [troubleshooting docs](https://docs.victoriametrics.com/Troubleshooting.html).
|
||||
|
||||
## TSDB stats
|
||||
|
||||
|
@ -1997,6 +2005,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
-datadog.maxInsertRequestSize size
|
||||
The maximum size in bytes of a single DataDog POST request to /api/v1/series
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 67108864)
|
||||
-datadog.sanitizeMetricName
|
||||
Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics (default true)
|
||||
-dedup.minScrapeInterval duration
|
||||
Leave only the last sample in every time series per each discrete interval equal to -dedup.minScrapeInterval > 0. See https://docs.victoriametrics.com/#deduplication and https://docs.victoriametrics.com/#downsampling
|
||||
-deleteAuthKey string
|
||||
|
@ -2336,6 +2346,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
|
|||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-usePromCompatibleNaming
|
||||
Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
-vmalert.proxyURL string
|
||||
|
|
|
@ -919,6 +919,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
|
|||
-datadog.maxInsertRequestSize size
|
||||
The maximum size in bytes of a single DataDog POST request to /api/v1/series
|
||||
Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 67108864)
|
||||
-datadog.sanitizeMetricName
|
||||
Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics (default true)
|
||||
-denyQueryTracing
|
||||
Whether to disable the ability to trace queries. See https://docs.victoriametrics.com/#query-tracing
|
||||
-dryRun
|
||||
|
@ -1271,6 +1273,8 @@ See the docs at https://docs.victoriametrics.com/vmagent.html .
|
|||
Supports an array of values separated by comma or specified via multiple flags.
|
||||
-tlsKeyFile string
|
||||
Path to file with TLS key if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
|
||||
-usePromCompatibleNaming
|
||||
Whether to replace characters unsupported by Prometheus with underscores in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. See https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
|
||||
-version
|
||||
Show VictoriaMetrics version
|
||||
```
|
||||
|
|
|
@ -624,7 +624,7 @@ There are following non-required `replay` flags:
|
|||
The progress bar may generate a lot of log records, which aren't formatted by the standard VictoriaMetrics logger.
|
||||
It could break log parsing by external systems and generate additional load on them.
|
||||
|
||||
See full description for these flags in `./vmalert --help`.
|
||||
See full description for these flags in `./vmalert -help`.
|
||||
|
||||
### Limitations
|
||||
|
||||
|
@ -654,6 +654,11 @@ may get empty response from datasource and produce empty recording rules or rese
|
|||
|
||||
<img alt="vmalert evaluation when data is delayed" src="vmalert_ts_data_delay.gif">
|
||||
|
||||
_By default, samples recently written to VictoriaMetrics aren't visible to queries for up to 30s.
|
||||
This behavior is controlled by the `-search.latencyOffset` command-line flag on vmselect. Usually, this results in
|
||||
a 30s shift for recording rules results.
|
||||
Note that too small a value passed to `-search.latencyOffset` may lead to incomplete query results._
|
||||
|
||||
Try the following recommendations in such cases:
|
||||
|
||||
* Always configure the group's `evaluationInterval` to be bigger than or equal to the `scrape_interval` at which metrics
|
||||
|
@ -662,7 +667,7 @@ are delivered to the datasource;
|
|||
command-line flag to add a time shift for evaluations;
|
||||
* If time intervals between datapoints in datasource are irregular - try changing vmalert's `-datasource.queryStep`
|
||||
command-line flag to specify how far the search query can look back for the most recent datapoint. By default, this value
|
||||
is equal to group's `evaluationInterval`.
|
||||
is equal to group's evaluation interval.
|
||||
|
||||
Sometimes, it is not clear why some specific alert fired or didn't fire. It is very important to remember, that
|
||||
alerts with `for: 0` fire immediately when their expression becomes true. And alerts with `for > 0` will fire only
|
||||
|
@ -771,7 +776,7 @@ The shortlist of configuration flags is the following:
|
|||
-datasource.oauth2.tokenUrl string
|
||||
Optional OAuth2 tokenURL to use for -datasource.url.
|
||||
-datasource.queryStep duration
|
||||
queryStep defines how far a value can fallback to when evaluating queries. For example, if datasource.queryStep=15s then param "step" with value "15s" will be added to every query.If queryStep isn't specified, rule's evaluationInterval will be used instead.
|
||||
How far a value can fallback to when evaluating queries. For example, if -datasource.queryStep=15s then param "step" with value "15s" will be added to every query. If set to 0, rule's evaluation interval will be used instead. (default 5m0s)
|
||||
-datasource.queryTimeAlignment
|
||||
Whether to align "time" parameter with evaluation interval.Alignment supposed to produce deterministic results despite of number of vmalert replicas or time they were started. See more details here https://github.com/VictoriaMetrics/VictoriaMetrics/pull/1257 (default true)
|
||||
-datasource.roundDigits int
|
||||
|
|
18
go.mod
|
@ -3,7 +3,7 @@ module github.com/VictoriaMetrics/VictoriaMetrics
|
|||
go 1.19
|
||||
|
||||
require (
|
||||
cloud.google.com/go/storage v1.26.0
|
||||
cloud.google.com/go/storage v1.27.0
|
||||
github.com/VictoriaMetrics/fastcache v1.12.0
|
||||
|
||||
// Do not use the original github.com/valyala/fasthttp because of issues
|
||||
|
@ -11,15 +11,16 @@ require (
|
|||
github.com/VictoriaMetrics/fasthttp v1.1.0
|
||||
github.com/VictoriaMetrics/metrics v1.22.2
|
||||
github.com/VictoriaMetrics/metricsql v0.45.0
|
||||
github.com/aws/aws-sdk-go v1.44.101
|
||||
github.com/aws/aws-sdk-go v1.44.105
|
||||
github.com/cespare/xxhash/v2 v2.1.2
|
||||
github.com/cheggaaa/pb/v3 v3.1.0
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
|
||||
github.com/fatih/color v1.13.0 // indirect
|
||||
github.com/go-kit/kit v0.12.0
|
||||
github.com/golang/snappy v0.0.4
|
||||
github.com/googleapis/gax-go/v2 v2.5.1
|
||||
github.com/influxdata/influxdb v1.10.0
|
||||
github.com/klauspost/compress v1.15.10
|
||||
github.com/klauspost/compress v1.15.11
|
||||
github.com/mattn/go-colorable v0.1.13 // indirect
|
||||
github.com/mattn/go-runewidth v0.0.13 // indirect
|
||||
github.com/oklog/ulid v1.3.1
|
||||
|
@ -31,10 +32,10 @@ require (
|
|||
github.com/valyala/fasttemplate v1.2.1
|
||||
github.com/valyala/gozstd v1.17.0
|
||||
github.com/valyala/quicktemplate v1.7.0
|
||||
golang.org/x/net v0.0.0-20220919232410-f2f64ebce3c1
|
||||
golang.org/x/net v0.0.0-20220923203811-8be639271d50
|
||||
golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1
|
||||
golang.org/x/sys v0.0.0-20220919091848-fb04ddd9f9c8
|
||||
google.golang.org/api v0.96.0
|
||||
google.golang.org/api v0.97.0
|
||||
gopkg.in/yaml.v2 v2.4.0
|
||||
)
|
||||
|
||||
|
@ -51,10 +52,9 @@ require (
|
|||
github.com/google/go-cmp v0.5.9 // indirect
|
||||
github.com/google/uuid v1.3.0 // indirect
|
||||
github.com/googleapis/enterprise-certificate-proxy v0.1.0 // indirect
|
||||
github.com/googleapis/gax-go/v2 v2.5.1 // indirect
|
||||
github.com/jmespath/go-jmespath v0.4.0 // indirect
|
||||
github.com/mattn/go-isatty v0.0.16 // indirect
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.2 // indirect
|
||||
github.com/pkg/errors v0.9.1 // indirect
|
||||
github.com/prometheus/client_golang v1.13.0 // indirect
|
||||
github.com/prometheus/client_model v0.2.0 // indirect
|
||||
|
@ -67,11 +67,11 @@ require (
|
|||
go.opencensus.io v0.23.0 // indirect
|
||||
go.uber.org/atomic v1.10.0 // indirect
|
||||
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723 // indirect
|
||||
golang.org/x/sync v0.0.0-20220907140024-f12130a52804 // indirect
|
||||
golang.org/x/sync v0.0.0-20220923202941-7f9b1623fab7 // indirect
|
||||
golang.org/x/text v0.3.7 // indirect
|
||||
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
|
||||
google.golang.org/appengine v1.6.7 // indirect
|
||||
google.golang.org/genproto v0.0.0-20220919141832-68c03719ef51 // indirect
|
||||
google.golang.org/genproto v0.0.0-20220923205249-dd2d53f1fffc // indirect
|
||||
google.golang.org/grpc v1.49.0 // indirect
|
||||
google.golang.org/protobuf v1.28.1 // indirect
|
||||
)
|
||||
|
|
31
go.sum
|
@ -62,8 +62,8 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl
|
|||
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
|
||||
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
|
||||
cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y=
|
||||
cloud.google.com/go/storage v1.26.0 h1:lYAGjknyDJirSzfwUlkv4Nsnj7od7foxQNH/fqZqles=
|
||||
cloud.google.com/go/storage v1.26.0/go.mod h1:mk/N7YwIKEWyTvXAWQCIeiCTdLoRH6Pd5xmSnolQLTI=
|
||||
cloud.google.com/go/storage v1.27.0 h1:YOO045NZI9RKfCj1c5A/ZtuuENUc8OAW+gHdGnDgyMQ=
|
||||
cloud.google.com/go/storage v1.27.0/go.mod h1:x9DOL8TK/ygDUMieqwfhdpQryTeEkhGKMi80i/iqR2s=
|
||||
collectd.org v0.3.0/go.mod h1:A/8DzQBkF6abtvrT2j/AU/4tiBgJWYyh0y/oB/4MlWE=
|
||||
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
|
||||
github.com/Azure/azure-sdk-for-go v48.2.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
||||
|
@ -149,8 +149,8 @@ github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQ
|
|||
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
|
||||
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
|
||||
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
|
||||
github.com/aws/aws-sdk-go v1.44.101 h1:O/em5aIxKI/FkwcWAFKEY+JhPDCRsqoVUC6xEF4tGic=
|
||||
github.com/aws/aws-sdk-go v1.44.101/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
|
||||
github.com/aws/aws-sdk-go v1.44.105 h1:UUwoD1PRKIj3ltrDUYTDQj5fOTK3XsnqolLpRTMmSEM=
|
||||
github.com/aws/aws-sdk-go v1.44.105/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
|
||||
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
|
||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
|
||||
|
@ -578,8 +578,8 @@ github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0
|
|||
github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
|
||||
github.com/klauspost/compress v1.13.4/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
|
||||
github.com/klauspost/compress v1.13.5/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
|
||||
github.com/klauspost/compress v1.15.10 h1:Ai8UzuomSCDw90e1qNMtb15msBXsNpH6gzkkENQNcJo=
|
||||
github.com/klauspost/compress v1.15.10/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
|
||||
github.com/klauspost/compress v1.15.11 h1:Lcadnb3RKGin4FYM/orgq0qde+nc15E5Cbqg4B9Sx9c=
|
||||
github.com/klauspost/compress v1.15.11/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
|
||||
github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
|
||||
github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg=
|
||||
github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
|
||||
|
@ -633,8 +633,9 @@ github.com/mattn/go-runewidth v0.0.13 h1:lTGmDsbAYt5DmK6OnoV7EuIF1wEIFAcxld6ypU4
|
|||
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
|
||||
github.com/mattn/go-sqlite3 v1.11.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
|
||||
github.com/mattn/go-tty v0.0.0-20180907095812-13ff1204f104/go.mod h1:XPvLUNfbS4fJH25nqRHfWLMa1ONC8Amw+mIA639KxkE=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.2 h1:hAHbPm5IJGijwng3PWk09JkG9WeqChjprR5s9bBZ+OM=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.2/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
|
||||
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
|
||||
github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
|
||||
github.com/miekg/dns v1.1.35/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7xM=
|
||||
|
@ -1013,8 +1014,8 @@ golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su
|
|||
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
||||
golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
||||
golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
|
||||
golang.org/x/net v0.0.0-20220919232410-f2f64ebce3c1 h1:TWZxd/th7FbRSMret2MVQdlI8uT49QEtwZdvJrxjEHU=
|
||||
golang.org/x/net v0.0.0-20220919232410-f2f64ebce3c1/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
|
||||
golang.org/x/net v0.0.0-20220923203811-8be639271d50 h1:vKyz8L3zkd+xrMeIaBsQ/MNVPVFSffdaU3ZyYlBGFnI=
|
||||
golang.org/x/net v0.0.0-20220923203811-8be639271d50/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
|
@ -1051,8 +1052,8 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJ
|
|||
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20220907140024-f12130a52804 h1:0SH2R3f1b1VmIMG7BXbEZCBUu2dKmHschSmjqGUrW8A=
|
||||
golang.org/x/sync v0.0.0-20220907140024-f12130a52804/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20220923202941-7f9b1623fab7 h1:ZrnxWX62AgTKOSagEqxvb3ffipvEDX2pl7E1TdqLqIc=
|
||||
golang.org/x/sync v0.0.0-20220923202941-7f9b1623fab7/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
|
@ -1296,8 +1297,8 @@ google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69
|
|||
google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
|
||||
google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
|
||||
google.golang.org/api v0.84.0/go.mod h1:NTsGnUFJMYROtiquksZHBWtHfeMC7iYthki7Eq3pa8o=
|
||||
google.golang.org/api v0.96.0 h1:F60cuQPJq7K7FzsxMYHAUJSiXh2oKctHxBMbDygxhfM=
|
||||
google.golang.org/api v0.96.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
|
||||
google.golang.org/api v0.97.0 h1:x/vEL1XDF/2V4xzdNgFPaKHluRESo2aTsL7QzHnBtGQ=
|
||||
google.golang.org/api v0.97.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
|
@ -1389,8 +1390,8 @@ google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP
|
|||
google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
|
||||
google.golang.org/genproto v0.0.0-20220616135557-88e70c0c3a90/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
|
||||
google.golang.org/genproto v0.0.0-20220624142145-8cd45d7dbd1f/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
|
||||
google.golang.org/genproto v0.0.0-20220919141832-68c03719ef51 h1:ucpgjuzWqWrj0NEwjUpsGTf2IGxyLtmuSk0oGgifjec=
|
||||
google.golang.org/genproto v0.0.0-20220919141832-68c03719ef51/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
|
||||
google.golang.org/genproto v0.0.0-20220923205249-dd2d53f1fffc h1:saaNe2+SBQxandnzcD/qB1JEBQ2Pqew+KlFLLdA/XcM=
|
||||
google.golang.org/genproto v0.0.0-20220923205249-dd2d53f1fffc/go.mod h1:yEEpwVWKMZZzo81NwRgyEJnA2fQvpXAYPVisv8EgDVs=
|
||||
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
|
||||
|
|
|
@ -5,11 +5,13 @@ import (
|
|||
"fmt"
|
||||
"io"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"cloud.google.com/go/storage"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/backup/common"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/backup/fscommon"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
|
||||
"github.com/googleapis/gax-go/v2"
|
||||
"google.golang.org/api/iterator"
|
||||
"google.golang.org/api/option"
|
||||
)
|
||||
|
@ -61,6 +63,14 @@ func (fs *FS) Init() error {
|
|||
}
|
||||
client = c
|
||||
}
|
||||
|
||||
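// Retry GCS operations (including non-idempotent ones) on temporary failures with exponential backoff: 1s initial delay, 3x multiplier, capped at 3 minutes.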
client.SetRetry(
|
||||
storage.WithPolicy(storage.RetryAlways),
|
||||
storage.WithBackoff(gax.Backoff{
|
||||
Initial: time.Second,
|
||||
Max: time.Minute * 3,
|
||||
Multiplier: 3,
|
||||
}))
|
||||
fs.bkt = client.Bucket(fs.Bucket)
|
||||
return nil
|
||||
}
|
||||
|
|
|
@ -2,8 +2,10 @@ package datadog
|
|||
|
||||
import (
|
||||
"bufio"
|
||||
"flag"
|
||||
"fmt"
|
||||
"io"
|
||||
"regexp"
|
||||
"sync"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
|
||||
|
@ -14,8 +16,18 @@ import (
|
|||
"github.com/VictoriaMetrics/metrics"
|
||||
)
|
||||
|
||||
// The maximum request size is defined at https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
|
||||
var maxInsertRequestSize = flagutil.NewBytes("datadog.maxInsertRequestSize", 64*1024*1024, "The maximum size in bytes of a single DataDog POST request to /api/v1/series")
|
||||
var (
|
||||
// The maximum request size is defined at https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
|
||||
maxInsertRequestSize = flagutil.NewBytes("datadog.maxInsertRequestSize", 64*1024*1024, "The maximum size in bytes of a single DataDog POST request to /api/v1/series")
|
||||
|
||||
// If all metrics in Datadog have the same naming schema as custom metrics, then the following rules apply:
|
||||
// https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics
|
||||
// But there's some hidden behaviour. In addition to what is stated in the docs, the following is also done:
|
||||
// - Consecutive underscores are replaced with just one underscore
|
||||
// - Underscores immediately before or after a dot are removed
|
||||
sanitizeMetricName = flag.Bool("datadog.sanitizeMetricName", true, "Sanitize metric names for the ingested DataDog data to comply with DataDog behaviour described at "+
|
||||
"https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics")
|
||||
)
|
||||
|
||||
// ParseStream parses DataDog POST request for /api/v1/series from reader and calls callback for the parsed request.
|
||||
//
|
||||
|
@ -52,6 +64,9 @@ func ParseStream(r io.Reader, contentEncoding string, callback func(series []Ser
|
|||
series := req.Series
|
||||
for i := range series {
|
||||
rows += len(series[i].Points)
|
||||
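// Optionally sanitize the metric name in the same way DataDog does (controlled by the -datadog.sanitizeMetricName flag).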
if *sanitizeMetricName {
|
||||
series[i].Metric = sanitizeName(series[i].Metric)
|
||||
}
|
||||
}
|
||||
rowsRead.Add(rows)
|
||||
|
||||
|
@ -136,3 +151,19 @@ func putRequest(req *Request) {
|
|||
}
|
||||
|
||||
var requestPool sync.Pool
|
||||
|
||||
// sanitizeName performs DataDog-compatible sanitizing for metric names
|
||||
//
|
||||
// See https://docs.datadoghq.com/metrics/custom_metrics/#naming-custom-metrics
|
||||
func sanitizeName(s string) string {
|
||||
s = unsupportedDatadogChars.ReplaceAllString(s, "_")
|
||||
s = multiUnderscores.ReplaceAllString(s, "_")
|
||||
s = underscoresWithDots.ReplaceAllString(s, ".")
|
||||
return s
|
||||
}
|
||||
|
||||
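// Regexps implementing the sanitization rules described above: unsupported chars, runs of underscores, and underscores adjacent to dots.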
var (
|
||||
unsupportedDatadogChars = regexp.MustCompile(`[^0-9a-zA-Z_\.]+`)
|
||||
multiUnderscores = regexp.MustCompile(`_+`)
|
||||
underscoresWithDots = regexp.MustCompile(`_?\._?`)
|
||||
)
|
||||
|
|
23
lib/protoparser/datadog/streamparser_test.go
Normal file
|
@ -0,0 +1,23 @@
|
|||
package datadog
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestSanitizeName(t *testing.T) {
|
||||
f := func(s, resultExpected string) {
|
||||
t.Helper()
|
||||
result := sanitizeName(s)
|
||||
if result != resultExpected {
|
||||
t.Fatalf("unexpected result for sanitizeName(%q); got\n%q\nwant\n%q", s, result, resultExpected)
|
||||
}
|
||||
}
|
||||
f("before.dot.metric!.name", "before.dot.metric.name")
|
||||
f("after.dot.metric.!name", "after.dot.metric.name")
|
||||
f("in.the.middle.met!ric.name", "in.the.middle.met_ric.name")
|
||||
f("before.and.after.and.middle.met!ric!.!name", "before.and.after.and.middle.met_ric.name")
|
||||
f("many.consecutive.met!!!!ric!!.!!name", "many.consecutive.met_ric.name")
|
||||
f("many.non.consecutive.m!e!t!r!i!c!.!name", "many.non.consecutive.m_e_t_r_i_c.name")
|
||||
f("how.about.underscores_.!_metric!_!.__!!name", "how.about.underscores.metric.name")
|
||||
f("how.about.underscores.middle.met!_!_ric.name", "how.about.underscores.middle.met_ric.name")
|
||||
}
|
|
@ -61,9 +61,6 @@ func (r *Row) reset() {
|
|||
|
||||
// UnmarshalMetricAndTags unmarshals metric and optional tags from s.
|
||||
func (r *Row) UnmarshalMetricAndTags(s string, tagsPool []Tag) ([]Tag, error) {
|
||||
if strings.Contains(s, " ") {
|
||||
return tagsPool, fmt.Errorf("unexpected whitespace found in %q", s)
|
||||
}
|
||||
n := strings.IndexByte(s, ';')
|
||||
if n < 0 {
|
||||
// No tags
|
||||
|
@ -72,11 +69,7 @@ func (r *Row) UnmarshalMetricAndTags(s string, tagsPool []Tag) ([]Tag, error) {
|
|||
// Tags found
|
||||
r.Metric = s[:n]
|
||||
tagsStart := len(tagsPool)
|
||||
var err error
|
||||
tagsPool, err = unmarshalTags(tagsPool, s[n+1:])
|
||||
if err != nil {
|
||||
return tagsPool, fmt.Errorf("cannot unmarshal tags: %w", err)
|
||||
}
|
||||
tagsPool = unmarshalTags(tagsPool, s[n+1:])
|
||||
tags := tagsPool[tagsStart:]
|
||||
r.Tags = tags[:len(tags):len(tags)]
|
||||
}
|
||||
|
@ -88,40 +81,43 @@ func (r *Row) UnmarshalMetricAndTags(s string, tagsPool []Tag) ([]Tag, error) {
|
|||
|
||||
func (r *Row) unmarshal(s string, tagsPool []Tag) ([]Tag, error) {
|
||||
r.reset()
|
||||
n := strings.IndexAny(s, graphiteSeparators)
|
||||
sOrig := s
|
||||
s = stripTrailingWhitespace(s)
|
||||
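// Parse the line from the right: the optional timestamp is the last field, the value precedes it, and everything before that is the metric name with tags (which may contain whitespace).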
n := strings.LastIndexAny(s, graphiteSeparators)
|
||||
if n < 0 {
|
||||
return tagsPool, fmt.Errorf("cannot find separator between metric and value in %q", s)
|
||||
return tagsPool, fmt.Errorf("cannot find separator between value and timestamp in %q", s)
|
||||
}
|
||||
metricAndTags := s[:n]
|
||||
tail := stripLeadingWhitespace(s[n+1:])
|
||||
|
||||
timestampStr := s[n+1:]
|
||||
valueStr := ""
|
||||
metricAndTags := ""
|
||||
s = stripTrailingWhitespace(s[:n])
|
||||
n = strings.LastIndexAny(s, graphiteSeparators)
|
||||
if n < 0 {
|
||||
// Missing timestamp
|
||||
metricAndTags = stripLeadingWhitespace(s)
|
||||
valueStr = timestampStr
|
||||
timestampStr = ""
|
||||
} else {
|
||||
metricAndTags = stripLeadingWhitespace(s[:n])
|
||||
valueStr = s[n+1:]
|
||||
}
|
||||
metricAndTags = stripTrailingWhitespace(metricAndTags)
|
||||
tagsPool, err := r.UnmarshalMetricAndTags(metricAndTags, tagsPool)
|
||||
if err != nil {
|
||||
return tagsPool, err
|
||||
return tagsPool, fmt.Errorf("cannot parse metric and tags from %q: %w; original line: %q", metricAndTags, err, sOrig)
|
||||
}
|
||||
|
||||
n = strings.IndexAny(tail, graphiteSeparators)
|
||||
if n < 0 {
|
||||
// There is no timestamp. Use default timestamp instead.
|
||||
v, err := fastfloat.Parse(tail)
|
||||
if len(timestampStr) > 0 {
|
||||
ts, err := fastfloat.Parse(timestampStr)
|
||||
if err != nil {
|
||||
return tagsPool, fmt.Errorf("cannot unmarshal value from %q: %w", tail, err)
|
||||
return tagsPool, fmt.Errorf("cannot unmarshal timestamp from %q: %w; orignal line: %q", timestampStr, err, sOrig)
|
||||
}
|
||||
r.Value = v
|
||||
return tagsPool, nil
|
||||
r.Timestamp = int64(ts)
|
||||
}
|
||||
v, err := fastfloat.Parse(tail[:n])
|
||||
v, err := fastfloat.Parse(valueStr)
|
||||
if err != nil {
|
||||
return tagsPool, fmt.Errorf("cannot unmarshal value from %q: %w", tail[:n], err)
|
||||
}
|
||||
tail = stripLeadingWhitespace(tail[n+1:])
|
||||
tail = stripTrailingWhitespace(tail)
|
||||
ts, err := fastfloat.Parse(tail)
|
||||
if err != nil {
|
||||
return tagsPool, fmt.Errorf("cannot unmarshal timestamp from %q: %w", tail, err)
|
||||
return tagsPool, fmt.Errorf("cannot unmarshal value from %q: %w; original line: %q", valueStr, err, sOrig)
|
||||
}
|
||||
r.Value = v
|
||||
r.Timestamp = int64(ts)
|
||||
return tagsPool, nil
|
||||
}
|
||||
|
||||
|
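The rewritten `unmarshal` scans the Graphite plaintext line from the right: the last separator delimits the (optional) timestamp, the next one delimits the value, and everything to the left is the metric plus tags, so whitespace inside metric names and tag values survives. Below is a simplified, hypothetical sketch of that right-to-left split, assuming the separators are spaces and tabs; `parseGraphiteLine` is an illustrative helper, not the real parser.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseGraphiteLine splits off the timestamp and the value from the end of
// the line so that spaces inside "metric;tag=value" are preserved.
func parseGraphiteLine(s string) (metricAndTags, valueStr, timestampStr string, err error) {
	s = strings.TrimRight(s, " \t")
	n := strings.LastIndexAny(s, " \t")
	if n < 0 {
		return "", "", "", fmt.Errorf("cannot find separator between value and timestamp in %q", s)
	}
	timestampStr = s[n+1:]
	s = strings.TrimRight(s[:n], " \t")
	if n = strings.LastIndexAny(s, " \t"); n < 0 {
		// Missing timestamp: the last field was actually the value.
		return strings.TrimLeft(s, " \t"), timestampStr, "", nil
	}
	return strings.TrimLeft(s[:n], " \t"), s[n+1:], timestampStr, nil
}

func main() {
	m, v, ts, err := parseGraphiteLine("s a;ta g1=aaa1;tag2=bb b2 1 23")
	if err != nil {
		panic(err)
	}
	val, _ := strconv.ParseFloat(v, 64)
	fmt.Printf("%q %v %q\n", m, val, ts) // "s a;ta g1=aaa1;tag2=bb b2" 1 "23"
}
```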
@@ -142,11 +138,11 @@ func unmarshalRow(dst []Row, s string, tagsPool []Tag) ([]Row, []Tag) {
if len(s) > 0 && s[len(s)-1] == '\r' {
s = s[:len(s)-1]
}
s = stripLeadingWhitespace(s)
if len(s) == 0 {
// Skip empty line
return dst, tagsPool
}
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]
} else {

@@ -165,7 +161,7 @@ func unmarshalRow(dst []Row, s string, tagsPool []Tag) ([]Row, []Tag) {
var invalidLines = metrics.NewCounter(`vm_rows_invalid_total{type="graphite"}`)
func unmarshalTags(dst []Tag, s string) ([]Tag, error) {
func unmarshalTags(dst []Tag, s string) []Tag {
for {
if cap(dst) > len(dst) {
dst = dst[:len(dst)+1]

@@ -182,7 +178,7 @@ func unmarshalTags(dst []Tag, s string) ([]Tag, error) {
// Skip empty tag
dst = dst[:len(dst)-1]
}
return dst, nil
return dst
}
tag.unmarshal(s[:n])
s = s[n+1:]
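`unmarshalTags` now returns the tag slice without an error, since malformed pairs are simply skipped. The behaviour exercised by the tests below boils down to: pairs are separated by `;`, the first `=` splits key from value, and empty pairs are ignored. Here is a minimal sketch of that splitting, using a hypothetical `splitTags` helper rather than the actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// splitTags is a simplified stand-in for the tag-splitting behaviour: pairs
// are separated by ';', the first '=' splits key from value (so the value may
// itself contain '='), and empty pairs are skipped.
func splitTags(s string) map[string]string {
	tags := make(map[string]string)
	for _, pair := range strings.Split(s, ";") {
		if pair == "" {
			continue // skip empty tag
		}
		key, value, _ := strings.Cut(pair, "=")
		tags[key] = value
	}
	return tags
}

func main() {
	fmt.Println(splitTags("bar=123;baz=aa=bb")) // map[bar:123 baz:aa=bb]
}
```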
@@ -19,13 +19,6 @@ func TestUnmarshalMetricAndTagsFailure(t *testing.T) {
}
f("")
f(";foo=bar")
f(" ")
f("foo ;bar=baz")
f("f oo;bar=baz")
f("foo;bar=baz ")
f("foo;bar= baz")
f("foo;bar=b az")
f("foo;b ar=baz")
}
func TestUnmarshalMetricAndTagsSuccess(t *testing.T) {

@@ -40,10 +33,67 @@ func TestUnmarshalMetricAndTagsSuccess(t *testing.T) {
t.Fatalf("unexpected row;\ngot\n%+v\nwant\n%+v", &r, rExpected)
}
}
f(" ", &Row{
Metric: " ",
})
f("foo ;bar=baz", &Row{
Metric: "foo ",
Tags: []Tag{
{
Key: "bar",
Value: "baz",
},
},
})
f("f oo;bar=baz", &Row{
Metric: "f oo",
Tags: []Tag{
{
Key: "bar",
Value: "baz",
},
},
})
f("foo;bar=baz ", &Row{
Metric: "foo",
Tags: []Tag{
{
Key: "bar",
Value: "baz ",
},
},
})
f("foo;bar= baz", &Row{
Metric: "foo",
Tags: []Tag{
{
Key: "bar",
Value: " baz",
},
},
})
f("foo;bar=b az", &Row{
Metric: "foo",
Tags: []Tag{
{
Key: "bar",
Value: "b az",
},
},
})
f("foo;b ar=baz", &Row{
Metric: "foo",
Tags: []Tag{
{
Key: "b ar",
Value: "baz",
},
},
})
f("foo", &Row{
Metric: "foo",
})
f("foo;bar=123;baz=aabb", &Row{
f("foo;bar=123;baz=aa=bb", &Row{
Metric: "foo",
Tags: []Tag{
{

@@ -52,7 +102,7 @@ func TestUnmarshalMetricAndTagsSuccess(t *testing.T) {
},
{
Key: "baz",
Value: "aabb",
Value: "aa=bb",
},
},
})

@@ -74,16 +124,9 @@ func TestRowsUnmarshalFailure(t *testing.T) {
}
}
// Missing metric
f(" 123 455")
// Missing value
f("aaa")
// unexpected space in tag value
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/99
f("s;tag1=aaa1;tag2=bb b2;tag3=ccc3 1")
// invalid value
f("aa bb")

@@ -103,7 +146,7 @@ func TestRowsUnmarshalSuccess(t *testing.T) {
// Try unmarshaling again
rows.Unmarshal(s)
if !reflect.DeepEqual(rows.Rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
t.Fatalf("unexpected rows on second unmarshal;\ngot\n%+v;\nwant\n%+v", rows.Rows, rowsExpected.Rows)
}
rows.Reset()

@@ -119,6 +162,12 @@ func TestRowsUnmarshalSuccess(t *testing.T) {
f("\n\r\n", &Rows{})
// Single line
f(" 123 455", &Rows{
Rows: []Row{{
Metric: "123",
Value: 455,
}},
})
f("foobar -123.456 789", &Rows{
Rows: []Row{{
Metric: "foobar",

@@ -134,6 +183,26 @@ func TestRowsUnmarshalSuccess(t *testing.T) {
}},
})
// Whitespace in metric name, tag name and tag value
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3102
f("s a;ta g1=aaa1;tag2=bb b2;tag3 1 23", &Rows{
Rows: []Row{{
Metric: "s a",
Value: 1,
Timestamp: 23,
Tags: []Tag{
{
Key: "ta g1",
Value: "aaa1",
},
{
Key: "tag2",
Value: "bb b2",
},
},
}},
})
// Missing timestamp
f("aaa 1123", &Rows{
Rows: []Row{{
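The new case above (issue #3102) is the user-visible outcome: whitespace is now accepted inside metric names, tag keys and tag values. A hedged usage sketch of feeding such a line through the parser follows; the import path is assumed to be the repository's graphite protoparser package.

```go
package main

import (
	"fmt"

	// Assumed import path of the parser shown in the diff above.
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/graphite"
)

func main() {
	var rows graphite.Rows
	// Whitespace inside the metric name, tag keys and tag values is preserved;
	// the last two space-separated fields are still parsed as value and timestamp.
	rows.Unmarshal("s a;ta g1=aaa1;tag2=bb b2 1 23")
	for _, r := range rows.Rows {
		fmt.Printf("metric=%q value=%v timestamp=%v tags=%+v\n", r.Metric, r.Value, r.Timestamp, r.Tags)
	}
}
```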
@@ -225,7 +225,7 @@ func (t *Tracer) getLastChildDoneTime(defaultTime time.Time) time.Time {
// span represents a single trace span
type span struct {
// DurationMsec is the duration for the current trace span in microseconds.
// DurationMsec is the duration for the current trace span in milliseconds.
DurationMsec float64 `json:"duration_msec"`
// Message is a trace message
Message string `json:"message"`
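The corrected comment pins `duration_msec` to milliseconds. For illustration, a Go duration would typically be converted to fractional milliseconds as sketched below (assumed helper, not the actual tracer code).

```go
package main

import (
	"fmt"
	"time"
)

// durationMsec converts a Go duration into fractional milliseconds, the unit
// that the corrected DurationMsec comment refers to.
func durationMsec(d time.Duration) float64 {
	return float64(d) / float64(time.Millisecond)
}

func main() {
	fmt.Println(durationMsec(1500 * time.Microsecond)) // 1.5
}
```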
2
vendor/cloud.google.com/go/storage/.release-please-manifest.json
generated
vendored
|
@ -1,3 +1,3 @@
|
|||
{
|
||||
"storage": "1.26.0"
|
||||
"storage": "1.27.0"
|
||||
}
|
7
vendor/cloud.google.com/go/storage/CHANGES.md
generated
vendored
|
@ -1,6 +1,13 @@
|
|||
# Changes
|
||||
|
||||
|
||||
## [1.27.0](https://github.com/googleapis/google-cloud-go/compare/storage/v1.26.0...storage/v1.27.0) (2022-09-22)
|
||||
|
||||
|
||||
### Features
|
||||
|
||||
* **storage:** Find GoogleAccessID when using impersonated creds ([#6591](https://github.com/googleapis/google-cloud-go/issues/6591)) ([a2d16a7](https://github.com/googleapis/google-cloud-go/commit/a2d16a7a778c85d13217fc67955ec5dac1da34e8))
|
||||
|
||||
## [1.26.0](https://github.com/googleapis/google-cloud-go/compare/storage/v1.25.0...storage/v1.26.0) (2022-08-29)
|
||||
|
||||
|
||||
|
|
2
vendor/cloud.google.com/go/storage/README.md
generated
vendored
|
@ -2,7 +2,7 @@
|
|||
|
||||
- [About Cloud Storage](https://cloud.google.com/storage/)
|
||||
- [API documentation](https://cloud.google.com/storage/docs)
|
||||
- [Go client documentation](https://pkg.go.dev/cloud.google.com/go/storage)
|
||||
- [Go client documentation](https://cloud.google.com/go/docs/reference/cloud.google.com/go/storage/latest)
|
||||
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/main/storage)
|
||||
|
||||
### Example Usage
|
||||
|
|
72
vendor/cloud.google.com/go/storage/bucket.go
generated
vendored
|
@ -21,6 +21,7 @@ import (
|
|||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"cloud.google.com/go/compute/metadata"
|
||||
|
@ -159,22 +160,17 @@ func (b *BucketHandle) Update(ctx context.Context, uattrs BucketAttrsToUpdate) (
|
|||
}
|
||||
|
||||
// SignedURL returns a URL for the specified object. Signed URLs allow anyone
|
||||
// access to a restricted resource for a limited time without needing a
|
||||
// Google account or signing in. For more information about signed URLs, see
|
||||
// https://cloud.google.com/storage/docs/accesscontrol#signed_urls_query_string_authentication
|
||||
// access to a restricted resource for a limited time without needing a Google
|
||||
// account or signing in.
|
||||
// For more information about signed URLs, see "[Overview of access control]."
|
||||
//
|
||||
// This method only requires the Method and Expires fields in the specified
|
||||
// SignedURLOptions opts to be non-nil. If not provided, it attempts to fill the
|
||||
// GoogleAccessID and PrivateKey from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
|
||||
// If you are authenticating with a custom HTTP client, Service Account based
|
||||
// auto-detection will be hindered.
|
||||
// This method requires the Method and Expires fields in the specified
|
||||
// SignedURLOptions to be non-nil. You may need to set the GoogleAccessID and
|
||||
// PrivateKey fields in some cases. Read more on the [automatic detection of credentials]
|
||||
// for this method.
|
||||
//
|
||||
// If no private key is found, it attempts to use the GoogleAccessID to sign the URL.
|
||||
// This requires the IAM Service Account Credentials API to be enabled
|
||||
// (https://console.developers.google.com/apis/api/iamcredentials.googleapis.com/overview)
|
||||
// and iam.serviceAccounts.signBlob permissions on the GoogleAccessID service account.
|
||||
// If you do not want these fields set for you, you may pass them in through opts or use
|
||||
// SignedURL(bucket, name string, opts *SignedURLOptions) instead.
|
||||
// [Overview of access control]: https://cloud.google.com/storage/docs/accesscontrol#signed_urls_query_string_authentication
|
||||
// [automatic detection of credentials]: https://pkg.go.dev/cloud.google.com/go/storage#hdr-Credential_requirements_for_[BucketHandle.SignedURL]_and_[BucketHandle.GenerateSignedPostPolicyV4]
|
||||
func (b *BucketHandle) SignedURL(object string, opts *SignedURLOptions) (string, error) {
|
||||
if opts.GoogleAccessID != "" && (opts.SignBytes != nil || len(opts.PrivateKey) > 0) {
|
||||
return SignedURL(b.name, object, opts)
|
||||
|
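The reworked doc comment says that `BucketHandle.SignedURL` only needs `Method` and `Expires` when credentials can be auto-detected. A hedged usage sketch against the documented API; the bucket and object names are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// With auto-detectable credentials, Method and Expires are enough;
	// GoogleAccessID/PrivateKey are filled in as described in the doc comment.
	opts := &storage.SignedURLOptions{
		Scheme:  storage.SigningSchemeV4,
		Method:  "GET",
		Expires: time.Now().Add(15 * time.Minute),
	}
	url, err := client.Bucket("my-bucket").SignedURL("my-object", opts)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(url)
}
```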
@ -212,18 +208,11 @@ func (b *BucketHandle) SignedURL(object string, opts *SignedURLOptions) (string,
|
|||
// GenerateSignedPostPolicyV4 generates a PostPolicyV4 value from bucket, object and opts.
|
||||
// The generated URL and fields will then allow an unauthenticated client to perform multipart uploads.
|
||||
//
|
||||
// This method only requires the Expires field in the specified PostPolicyV4Options
|
||||
// to be non-nil. If not provided, it attempts to fill the GoogleAccessID and PrivateKey
|
||||
// from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
|
||||
// If you are authenticating with a custom HTTP client, Service Account based
|
||||
// auto-detection will be hindered.
|
||||
// This method requires the Expires field in the specified PostPolicyV4Options
|
||||
// to be non-nil. You may need to set the GoogleAccessID and PrivateKey fields
|
||||
// in some cases. Read more on the [automatic detection of credentials] for this method.
|
||||
//
|
||||
// If no private key is found, it attempts to use the GoogleAccessID to sign the URL.
|
||||
// This requires the IAM Service Account Credentials API to be enabled
|
||||
// (https://console.developers.google.com/apis/api/iamcredentials.googleapis.com/overview)
|
||||
// and iam.serviceAccounts.signBlob permissions on the GoogleAccessID service account.
|
||||
// If you do not want these fields set for you, you may pass them in through opts or use
|
||||
// GenerateSignedPostPolicyV4(bucket, name string, opts *PostPolicyV4Options) instead.
|
||||
// [automatic detection of credentials]: https://pkg.go.dev/cloud.google.com/go/storage#hdr-Credential_requirements_for_[BucketHandle.SignedURL]_and_[BucketHandle.GenerateSignedPostPolicyV4]
|
||||
func (b *BucketHandle) GenerateSignedPostPolicyV4(object string, opts *PostPolicyV4Options) (*PostPolicyV4, error) {
|
||||
if opts.GoogleAccessID != "" && (opts.SignRawBytes != nil || opts.SignBytes != nil || len(opts.PrivateKey) > 0) {
|
||||
return GenerateSignedPostPolicyV4(b.name, object, opts)
|
||||
|
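`GenerateSignedPostPolicyV4` follows the same pattern. A short sketch, reusing the `client` from the previous example; only `Expires` is set and credential detection is left to the library.

```go
// Sketch only: fields beyond Expires are optional when credentials are
// auto-detectable, as described in the doc comment above.
pv4, err := client.Bucket("my-bucket").GenerateSignedPostPolicyV4("my-object", &storage.PostPolicyV4Options{
	Expires: time.Now().Add(10 * time.Minute),
})
if err != nil {
	log.Fatal(err)
}
fmt.Printf("POST to %s with fields %v\n", pv4.URL, pv4.Fields)
```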
@ -263,17 +252,27 @@ func (b *BucketHandle) detectDefaultGoogleAccessID() (string, error) {
|
|||
|
||||
if b.c.creds != nil && len(b.c.creds.JSON) > 0 {
|
||||
var sa struct {
|
||||
ClientEmail string `json:"client_email"`
|
||||
}
|
||||
err := json.Unmarshal(b.c.creds.JSON, &sa)
|
||||
if err == nil && sa.ClientEmail != "" {
|
||||
return sa.ClientEmail, nil
|
||||
} else if err != nil {
|
||||
returnErr = err
|
||||
} else {
|
||||
returnErr = errors.New("storage: empty client email in credentials")
|
||||
ClientEmail string `json:"client_email"`
|
||||
SAImpersonationURL string `json:"service_account_impersonation_url"`
|
||||
CredType string `json:"type"`
|
||||
}
|
||||
|
||||
err := json.Unmarshal(b.c.creds.JSON, &sa)
|
||||
if err != nil {
|
||||
returnErr = err
|
||||
} else if sa.CredType == "impersonated_service_account" {
|
||||
start, end := strings.LastIndex(sa.SAImpersonationURL, "/"), strings.LastIndex(sa.SAImpersonationURL, ":")
|
||||
|
||||
if end <= start {
|
||||
returnErr = errors.New("error parsing impersonated service account credentials")
|
||||
} else {
|
||||
return sa.SAImpersonationURL[start+1 : end], nil
|
||||
}
|
||||
} else if sa.CredType == "service_account" && sa.ClientEmail != "" {
|
||||
return sa.ClientEmail, nil
|
||||
} else {
|
||||
returnErr = errors.New("unable to parse credentials; only service_account and impersonated_service_account credentials are supported")
|
||||
}
|
||||
}
|
||||
|
||||
// Don't error out if we can't unmarshal, fallback to GCE check.
|
||||
|
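For impersonated credentials, the code above recovers the service account email from `service_account_impersonation_url` by taking the substring between the last `/` and the last `:`. A worked example of that extraction on a made-up sample URL:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// The GoogleAccessID is taken from between the last '/' and the last ':'.
	u := "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/sa@my-project.iam.gserviceaccount.com:generateAccessToken"
	start, end := strings.LastIndex(u, "/"), strings.LastIndex(u, ":")
	if end > start {
		fmt.Println(u[start+1 : end]) // sa@my-project.iam.gserviceaccount.com
	}
}
```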
@ -284,11 +283,11 @@ func (b *BucketHandle) detectDefaultGoogleAccessID() (string, error) {
|
|||
} else if err != nil {
|
||||
returnErr = err
|
||||
} else {
|
||||
returnErr = errors.New("got empty email from GCE metadata service")
|
||||
returnErr = errors.New("empty email from GCE metadata service")
|
||||
}
|
||||
|
||||
}
|
||||
return "", fmt.Errorf("storage: unable to detect default GoogleAccessID: %v", returnErr)
|
||||
return "", fmt.Errorf("storage: unable to detect default GoogleAccessID: %w. Please provide the GoogleAccessID or use a supported means for autodetecting it (see https://pkg.go.dev/cloud.google.com/go/storage#hdr-Credential_requirements_for_[BucketHandle.SignedURL]_and_[BucketHandle.GenerateSignedPostPolicyV4])", returnErr)
|
||||
}
|
||||
|
||||
func (b *BucketHandle) defaultSignBytesFunc(email string) func([]byte) ([]byte, error) {
|
||||
|
@ -776,6 +775,7 @@ func newBucketFromProto(b *storagepb.Bucket) *BucketAttrs {
|
|||
LocationType: b.GetLocationType(),
|
||||
RPO: toRPOFromProto(b),
|
||||
CustomPlacementConfig: customPlacementFromProto(b.GetCustomPlacementConfig()),
|
||||
ProjectNumber: parseProjectNumber(b.GetProject()), // this can return 0 the project resource name is ID based
|
||||
}
|
||||
}
|
||||
|
||||
|
|
90
vendor/cloud.google.com/go/storage/doc.go
generated
vendored
|
@ -24,7 +24,7 @@ connection pooling and similar aspects of this package.
|
|||
|
||||
# Creating a Client
|
||||
|
||||
To start working with this package, create a client:
|
||||
To start working with this package, create a [Client]:
|
||||
|
||||
ctx := context.Background()
|
||||
client, err := storage.NewClient(ctx)
|
||||
|
@ -33,7 +33,7 @@ To start working with this package, create a client:
|
|||
}
|
||||
|
||||
The client will use your default application credentials. Clients should be
|
||||
reused instead of created as needed. The methods of Client are safe for
|
||||
reused instead of created as needed. The methods of [Client] are safe for
|
||||
concurrent use by multiple goroutines.
|
||||
|
||||
If you only wish to access public data, you can create
|
||||
|
@ -75,7 +75,7 @@ bucket, make a bucket handle:
|
|||
|
||||
A handle is a reference to a bucket. You can have a handle even if the
|
||||
bucket doesn't exist yet. To create a bucket in Google Cloud Storage,
|
||||
call Create on the handle:
|
||||
call [BucketHandle.Create]:
|
||||
|
||||
if err := bkt.Create(ctx, projectID, nil); err != nil {
|
||||
// TODO: Handle error.
|
||||
|
@ -85,9 +85,9 @@ Note that although buckets are associated with projects, bucket names are
|
|||
global across all projects.
|
||||
|
||||
Each bucket has associated metadata, represented in this package by
|
||||
BucketAttrs. The third argument to BucketHandle.Create allows you to set
|
||||
the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use
|
||||
Attrs:
|
||||
[BucketAttrs]. The third argument to [BucketHandle.Create] allows you to set
|
||||
the initial [BucketAttrs] of a bucket. To retrieve a bucket's attributes, use
|
||||
[BucketHandle.Attrs]:
|
||||
|
||||
attrs, err := bkt.Attrs(ctx)
|
||||
if err != nil {
|
||||
|
@ -101,8 +101,8 @@ Attrs:
|
|||
An object holds arbitrary data as a sequence of bytes, like a file. You
|
||||
refer to objects using a handle, just as with buckets, but unlike buckets
|
||||
you don't explicitly create an object. Instead, the first time you write
|
||||
to an object it will be created. You can use the standard Go io.Reader
|
||||
and io.Writer interfaces to read and write object data:
|
||||
to an object it will be created. You can use the standard Go [io.Reader]
|
||||
and [io.Writer] interfaces to read and write object data:
|
||||
|
||||
obj := bkt.Object("data")
|
||||
// Write something to obj.
|
||||
|
@ -128,7 +128,7 @@ and io.Writer interfaces to read and write object data:
|
|||
}
|
||||
// Prints "This object contains text."
|
||||
|
||||
Objects also have attributes, which you can fetch with Attrs:
|
||||
Objects also have attributes, which you can fetch with [ObjectHandle.Attrs]:
|
||||
|
||||
objAttrs, err := obj.Attrs(ctx)
|
||||
if err != nil {
|
||||
|
@ -139,7 +139,7 @@ Objects also have attributes, which you can fetch with Attrs:
|
|||
|
||||
# Listing objects
|
||||
|
||||
Listing objects in a bucket is done with the Bucket.Objects method:
|
||||
Listing objects in a bucket is done with the [BucketHandle.Objects] method:
|
||||
|
||||
query := &storage.Query{Prefix: ""}
|
||||
|
||||
|
@ -157,7 +157,7 @@ Listing objects in a bucket is done with the Bucket.Objects method:
|
|||
}
|
||||
|
||||
Objects are listed lexicographically by name. To filter objects
|
||||
lexicographically, Query.StartOffset and/or Query.EndOffset can be used:
|
||||
lexicographically, [Query.StartOffset] and/or [Query.EndOffset] can be used:
|
||||
|
||||
query := &storage.Query{
|
||||
Prefix: "",
|
||||
|
@ -168,7 +168,7 @@ lexicographically, Query.StartOffset and/or Query.EndOffset can be used:
|
|||
// ... as before
|
||||
|
||||
If only a subset of object attributes is needed when listing, specifying this
|
||||
subset using Query.SetAttrSelection may speed up the listing process:
|
||||
subset using [Query.SetAttrSelection] may speed up the listing process:
|
||||
|
||||
query := &storage.Query{Prefix: ""}
|
||||
query.SetAttrSelection([]string{"Name"})
|
||||
|
@ -180,10 +180,9 @@ subset using Query.SetAttrSelection may speed up the listing process:
|
|||
Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of
|
||||
ACLRules, each of which specifies the role of a user, group or project. ACLs
|
||||
are suitable for fine-grained control, but you may prefer using IAM to control
|
||||
access at the project level (see
|
||||
https://cloud.google.com/storage/docs/access-control/iam).
|
||||
access at the project level (see [Cloud Storage IAM docs].
|
||||
|
||||
To list the ACLs of a bucket or object, obtain an ACLHandle and call its List method:
|
||||
To list the ACLs of a bucket or object, obtain an [ACLHandle] and call [ACLHandle.List]:
|
||||
|
||||
acls, err := obj.ACL().List(ctx)
|
||||
if err != nil {
|
||||
|
@ -199,7 +198,7 @@ You can also set and delete ACLs.
|
|||
|
||||
Every object has a generation and a metageneration. The generation changes
|
||||
whenever the content changes, and the metageneration changes whenever the
|
||||
metadata changes. Conditions let you check these values before an operation;
|
||||
metadata changes. [Conditions] let you check these values before an operation;
|
||||
the operation only executes if the conditions match. You can use conditions to
|
||||
prevent race conditions in read-modify-write operations.
|
||||
|
||||
|
@ -214,8 +213,8 @@ since you read it. Here is how to express that:
|
|||
|
||||
You can obtain a URL that lets anyone read or write an object for a limited time.
|
||||
Signing a URL requires credentials authorized to sign a URL. To use the same
|
||||
authentication that was used when instantiating the Storage client, use the
|
||||
BucketHandle.SignedURL method.
|
||||
authentication that was used when instantiating the Storage client, use
|
||||
[BucketHandle.SignedURL].
|
||||
|
||||
url, err := client.Bucket(bucketName).SignedURL(objectName, opts)
|
||||
if err != nil {
|
||||
|
@ -223,8 +222,8 @@ BucketHandle.SignedURL method.
|
|||
}
|
||||
fmt.Println(url)
|
||||
|
||||
You can also sign a URL wihout creating a client. See the documentation of
|
||||
SignedURL for details.
|
||||
You can also sign a URL without creating a client. See the documentation of
|
||||
[SignedURL] for details.
|
||||
|
||||
url, err := storage.SignedURL(bucketName, "shared-object", opts)
|
||||
if err != nil {
|
||||
|
@ -238,8 +237,8 @@ A type of signed request that allows uploads through HTML forms directly to Clou
|
|||
temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised
|
||||
by a user.
|
||||
|
||||
For more information, please see https://cloud.google.com/storage/docs/xml-api/post-object as well
|
||||
as the documentation of BucketHandle.GenerateSignedPostPolicyV4.
|
||||
For more information, please see the [XML POST Object docs] as well
|
||||
as the documentation of [BucketHandle.GenerateSignedPostPolicyV4].
|
||||
|
||||
pv4, err := client.Bucket(bucketName).GenerateSignedPostPolicyV4(objectName, opts)
|
||||
if err != nil {
|
||||
|
@ -247,19 +246,40 @@ as the documentation of BucketHandle.GenerateSignedPostPolicyV4.
|
|||
}
|
||||
fmt.Printf("URL: %s\nFields; %v\n", pv4.URL, pv4.Fields)
|
||||
|
||||
# Credential requirements for signing
|
||||
|
||||
If the GoogleAccessID and PrivateKey option fields are not provided, they will
|
||||
be automatically detected by [BucketHandle.SignedURL] and
|
||||
[BucketHandle.GenerateSignedPostPolicyV4] if any of the following are true:
|
||||
- you are authenticated to the Storage Client with a service account's
|
||||
downloaded private key, either directly in code or by setting the
|
||||
GOOGLE_APPLICATION_CREDENTIALS environment variable (see [Other Environments]),
|
||||
- your application is running on Google Compute Engine (GCE), or
|
||||
- you are logged into [gcloud using application default credentials]
|
||||
with [impersonation enabled].
|
||||
|
||||
Detecting GoogleAccessID may not be possible if you are authenticated using a
|
||||
token source or using [option.WithHTTPClient]. In this case, you can provide a
|
||||
service account email for GoogleAccessID and the client will attempt to sign
|
||||
the URL or Post Policy using that service account.
|
||||
|
||||
To generate the signature, you must have:
|
||||
- iam.serviceAccounts.signBlob permissions on the GoogleAccessID service
|
||||
account, and
|
||||
- the [IAM Service Account Credentials API] enabled (unless authenticating
|
||||
with a downloaded private key).
|
||||
|
||||
# Errors
|
||||
|
||||
Errors returned by this client are often of the type googleapi.Error.
|
||||
These errors can be introspected for more information by using errors.As
|
||||
with the richer googleapi.Error type. For example:
|
||||
Errors returned by this client are often of the type [googleapi.Error].
|
||||
These errors can be introspected for more information by using [errors.As]
|
||||
with the richer [googleapi.Error] type. For example:
|
||||
|
||||
var e *googleapi.Error
|
||||
if ok := errors.As(err, &e); ok {
|
||||
if e.Code == 409 { ... }
|
||||
}
|
||||
|
||||
See https://pkg.go.dev/google.golang.org/api/googleapi#Error for more information.
|
||||
|
||||
# Retrying failed requests
|
||||
|
||||
Methods in this package may retry calls that fail with transient errors.
|
||||
|
@ -270,12 +290,12 @@ continuing, use context timeouts or cancellation.
|
|||
The retry strategy in this library follows best practices for Cloud Storage. By
|
||||
default, operations are retried only if they are idempotent, and exponential
|
||||
backoff with jitter is employed. In addition, errors are only retried if they
|
||||
are defined as transient by the service. See
|
||||
https://cloud.google.com/storage/docs/retry-strategy for more information.
|
||||
are defined as transient by the service. See the [Cloud Storage retry docs]
|
||||
for more information.
|
||||
|
||||
Users can configure non-default retry behavior for a single library call (using
|
||||
BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a
|
||||
client (using Client.SetRetry). For example:
|
||||
[BucketHandle.Retryer] and [ObjectHandle.Retryer]) or for all calls made by a
|
||||
client (using [Client.SetRetry]). For example:
|
||||
|
||||
o := client.Bucket(bucket).Object(object).Retryer(
|
||||
// Use WithBackoff to change the timing of the exponential backoff.
|
||||
|
@ -296,5 +316,13 @@ client (using Client.SetRetry). For example:
|
|||
if err := o.Delete(ctx); err != nil {
|
||||
// Handle err.
|
||||
}
|
||||
|
||||
[Cloud Storage IAM docs]: https://cloud.google.com/storage/docs/access-control/iam
|
||||
[XML POST Object docs]: https://cloud.google.com/storage/docs/xml-api/post-object
|
||||
[Cloud Storage retry docs]: https://cloud.google.com/storage/docs/retry-strategy
|
||||
[Other Environments]: https://cloud.google.com/storage/docs/authentication#libauth
|
||||
[gcloud using application default credentials]: https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login
|
||||
[impersonation enabled]: https://cloud.google.com/sdk/gcloud/reference#--impersonate-service-account
|
||||
[IAM Service Account Credentials API]: https://console.developers.google.com/apis/api/iamcredentials.googleapis.com/overview
|
||||
*/
|
||||
package storage // import "cloud.google.com/go/storage"
|
||||
|
|
17
vendor/cloud.google.com/go/storage/grpc_client.go
generated
vendored
|
@ -155,11 +155,13 @@ func (c *grpcStorageClient) CreateBucket(ctx context.Context, project, bucket st
|
|||
}
|
||||
|
||||
req := &storagepb.CreateBucketRequest{
|
||||
Parent: toProjectResource(project),
|
||||
Bucket: b,
|
||||
BucketId: b.GetName(),
|
||||
PredefinedAcl: attrs.PredefinedACL,
|
||||
PredefinedDefaultObjectAcl: attrs.PredefinedDefaultObjectACL,
|
||||
Parent: toProjectResource(project),
|
||||
Bucket: b,
|
||||
BucketId: b.GetName(),
|
||||
}
|
||||
if attrs != nil {
|
||||
req.PredefinedAcl = attrs.PredefinedACL
|
||||
req.PredefinedDefaultObjectAcl = attrs.PredefinedDefaultObjectACL
|
||||
}
|
||||
|
||||
var battrs *BucketAttrs
|
||||
|
@ -893,6 +895,11 @@ func (c *grpcStorageClient) NewRangeReader(ctx context.Context, params *newRange
|
|||
}
|
||||
|
||||
msg, err = stream.Recv()
|
||||
// These types of errors show up on the Recv call, rather than the
|
||||
// initialization of the stream via ReadObject above.
|
||||
if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
|
||||
return ErrObjectNotExist
|
||||
}
|
||||
|
||||
return err
|
||||
}, s.retry, s.idempotent, setRetryHeaderGRPC(ctx))
|
||||
|
|
10
vendor/cloud.google.com/go/storage/internal/apiv2/doc.go
generated
vendored
|
@ -26,6 +26,11 @@
|
|||
// To get started with this package, create a client.
|
||||
//
|
||||
// ctx := context.Background()
|
||||
// // This snippet has been automatically generated and should be regarded as a code template only.
|
||||
// // It will require modifications to work:
|
||||
// // - It may require correct/in-range values for request initialization.
|
||||
// // - It may require specifying regional endpoints when creating the service client as shown in:
|
||||
// // https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
|
||||
// c, err := storage.NewClient(ctx)
|
||||
// if err != nil {
|
||||
// // TODO: Handle error.
|
||||
|
@ -41,6 +46,11 @@
|
|||
// The following is an example of making an API call with the newly created client.
|
||||
//
|
||||
// ctx := context.Background()
|
||||
// // This snippet has been automatically generated and should be regarded as a code template only.
|
||||
// // It will require modifications to work:
|
||||
// // - It may require correct/in-range values for request initialization.
|
||||
// // - It may require specifying regional endpoints when creating the service client as shown in:
|
||||
// // https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
|
||||
// c, err := storage.NewClient(ctx)
|
||||
// if err != nil {
|
||||
// // TODO: Handle error.
|
||||
|
|
6
vendor/cloud.google.com/go/storage/internal/apiv2/storage_client.go
generated
vendored
|
@ -206,7 +206,8 @@ func (c *Client) setGoogleClientInfo(keyval ...string) {
|
|||
|
||||
// Connection returns a connection to the API service.
|
||||
//
|
||||
// Deprecated.
|
||||
// Deprecated: Connections are now pooled so this method does not always
|
||||
// return the same resource.
|
||||
func (c *Client) Connection() *grpc.ClientConn {
|
||||
return c.internalClient.Connection()
|
||||
}
|
||||
|
@ -518,7 +519,8 @@ func NewClient(ctx context.Context, opts ...option.ClientOption) (*Client, error
|
|||
|
||||
// Connection returns a connection to the API service.
|
||||
//
|
||||
// Deprecated.
|
||||
// Deprecated: Connections are now pooled so this method does not always
|
||||
// return the same resource.
|
||||
func (c *gRPCClient) Connection() *grpc.ClientConn {
|
||||
return c.connPool.Conn()
|
||||
}
|
||||
|
|
2
vendor/cloud.google.com/go/storage/internal/version.go
generated
vendored
|
@ -15,4 +15,4 @@
|
|||
package internal
|
||||
|
||||
// Version is the current tagged release of the library.
|
||||
const Version = "1.26.0"
|
||||
const Version = "1.27.0"
|
||||
|
|
3
vendor/cloud.google.com/go/storage/release-please-config.json
generated
vendored
|
@ -7,5 +7,6 @@
|
|||
"storage": {
|
||||
"component": "storage"
|
||||
}
|
||||
}
|
||||
},
|
||||
"plugins": ["sentence-case"]
|
||||
}
|
||||
|
|
19
vendor/cloud.google.com/go/storage/storage.go
generated
vendored
|
@ -33,6 +33,7 @@ import (
|
|||
"reflect"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
"unicode/utf8"
|
||||
|
@ -2002,6 +2003,24 @@ func parseBucketName(b string) string {
|
|||
return b[sep+1:]
|
||||
}
|
||||
|
||||
// parseProjectNumber consumes the given resource name and parses out the project
|
||||
// number if one is present i.e. it is not a project ID.
|
||||
func parseProjectNumber(r string) uint64 {
|
||||
projectID := regexp.MustCompile(`projects\/([0-9]+)\/?`)
|
||||
if matches := projectID.FindStringSubmatch(r); len(matches) > 0 {
|
||||
// Capture group follows the matched segment. For example:
|
||||
// input: projects/123/bars/456
|
||||
// output: [projects/123/, 123]
|
||||
number, err := strconv.ParseUint(matches[1], 10, 64)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
return number
|
||||
}
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
// toProjectResource accepts a project ID and formats it as a Project resource
|
||||
// name.
|
||||
func toProjectResource(project string) string {
|
||||
|
|
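`parseProjectNumber` in the hunk above pulls a numeric project out of a resource name and returns 0 for ID-based names. A standalone sketch of the same regexp logic; the sample resource names are made up.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// projectNumber extracts the numeric project from a resource name such as
// "projects/123/buckets/bar" and returns 0 when the segment is a project ID
// rather than a number.
var projectNumberRE = regexp.MustCompile(`projects/([0-9]+)/?`)

func projectNumber(r string) uint64 {
	if m := projectNumberRE.FindStringSubmatch(r); len(m) > 0 {
		n, err := strconv.ParseUint(m[1], 10, 64)
		if err != nil {
			return 0
		}
		return n
	}
	return 0
}

func main() {
	fmt.Println(projectNumber("projects/123/buckets/bar"))           // 123
	fmt.Println(projectNumber("projects/my-project-id/buckets/bar")) // 0
}
```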
131
vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
generated
vendored
|
@ -5822,12 +5822,42 @@ var awsPartition = partition{
|
|||
endpointKey{
|
||||
Region: "eu-west-2",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "fips-us-east-1",
|
||||
}: endpoint{
|
||||
Hostname: "connect-campaigns-fips.us-east-1.amazonaws.com",
|
||||
CredentialScope: credentialScope{
|
||||
Region: "us-east-1",
|
||||
},
|
||||
Deprecated: boxedTrue,
|
||||
},
|
||||
endpointKey{
|
||||
Region: "fips-us-west-2",
|
||||
}: endpoint{
|
||||
Hostname: "connect-campaigns-fips.us-west-2.amazonaws.com",
|
||||
CredentialScope: credentialScope{
|
||||
Region: "us-west-2",
|
||||
},
|
||||
Deprecated: boxedTrue,
|
||||
},
|
||||
endpointKey{
|
||||
Region: "us-east-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "us-east-1",
|
||||
Variant: fipsVariant,
|
||||
}: endpoint{
|
||||
Hostname: "connect-campaigns-fips.us-east-1.amazonaws.com",
|
||||
},
|
||||
endpointKey{
|
||||
Region: "us-west-2",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "us-west-2",
|
||||
Variant: fipsVariant,
|
||||
}: endpoint{
|
||||
Hostname: "connect-campaigns-fips.us-west-2.amazonaws.com",
|
||||
},
|
||||
},
|
||||
},
|
||||
"contact-lens": service{
|
||||
|
@ -7283,6 +7313,9 @@ var awsPartition = partition{
|
|||
endpointKey{
|
||||
Region: "ap-southeast-2",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ap-southeast-3",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ca-central-1",
|
||||
}: endpoint{},
|
||||
|
@ -9484,6 +9517,18 @@ var awsPartition = partition{
|
|||
endpointKey{
|
||||
Region: "ap-northeast-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ap-northeast-2",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ap-south-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ap-southeast-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "eu-central-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "eu-west-1",
|
||||
}: endpoint{},
|
||||
|
@ -9496,6 +9541,15 @@ var awsPartition = partition{
|
|||
},
|
||||
Deprecated: boxedTrue,
|
||||
},
|
||||
endpointKey{
|
||||
Region: "fips-us-east-2",
|
||||
}: endpoint{
|
||||
Hostname: "emr-serverless-fips.us-east-2.amazonaws.com",
|
||||
CredentialScope: credentialScope{
|
||||
Region: "us-east-2",
|
||||
},
|
||||
Deprecated: boxedTrue,
|
||||
},
|
||||
endpointKey{
|
||||
Region: "fips-us-west-2",
|
||||
}: endpoint{
|
||||
|
@ -9505,6 +9559,9 @@ var awsPartition = partition{
|
|||
},
|
||||
Deprecated: boxedTrue,
|
||||
},
|
||||
endpointKey{
|
||||
Region: "sa-east-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "us-east-1",
|
||||
}: endpoint{},
|
||||
|
@ -9514,6 +9571,15 @@ var awsPartition = partition{
|
|||
}: endpoint{
|
||||
Hostname: "emr-serverless-fips.us-east-1.amazonaws.com",
|
||||
},
|
||||
endpointKey{
|
||||
Region: "us-east-2",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "us-east-2",
|
||||
Variant: fipsVariant,
|
||||
}: endpoint{
|
||||
Hostname: "emr-serverless-fips.us-east-2.amazonaws.com",
|
||||
},
|
||||
endpointKey{
|
||||
Region: "us-west-2",
|
||||
}: endpoint{},
|
||||
|
@ -12682,6 +12748,18 @@ var awsPartition = partition{
|
|||
},
|
||||
"ivschat": service{
|
||||
Endpoints: serviceEndpoints{
|
||||
endpointKey{
|
||||
Region: "ap-northeast-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ap-northeast-2",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ap-south-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "eu-central-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "eu-west-1",
|
||||
}: endpoint{},
|
||||
|
@ -17529,6 +17607,9 @@ var awsPartition = partition{
|
|||
endpointKey{
|
||||
Region: "ap-southeast-2",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ap-southeast-3",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "ca-central-1",
|
||||
}: endpoint{},
|
||||
|
@ -26644,6 +26725,16 @@ var awscnPartition = partition{
|
|||
}: endpoint{},
|
||||
},
|
||||
},
|
||||
"rbin": service{
|
||||
Endpoints: serviceEndpoints{
|
||||
endpointKey{
|
||||
Region: "cn-north-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "cn-northwest-1",
|
||||
}: endpoint{},
|
||||
},
|
||||
},
|
||||
"rds": service{
|
||||
Endpoints: serviceEndpoints{
|
||||
endpointKey{
|
||||
|
@ -30620,6 +30711,46 @@ var awsusgovPartition = partition{
|
|||
},
|
||||
},
|
||||
},
|
||||
"rbin": service{
|
||||
Endpoints: serviceEndpoints{
|
||||
endpointKey{
|
||||
Region: "fips-us-gov-east-1",
|
||||
}: endpoint{
|
||||
Hostname: "rbin-fips.us-gov-east-1.amazonaws.com",
|
||||
CredentialScope: credentialScope{
|
||||
Region: "us-gov-east-1",
|
||||
},
|
||||
Deprecated: boxedTrue,
|
||||
},
|
||||
endpointKey{
|
||||
Region: "fips-us-gov-west-1",
|
||||
}: endpoint{
|
||||
Hostname: "rbin-fips.us-gov-west-1.amazonaws.com",
|
||||
CredentialScope: credentialScope{
|
||||
Region: "us-gov-west-1",
|
||||
},
|
||||
Deprecated: boxedTrue,
|
||||
},
|
||||
endpointKey{
|
||||
Region: "us-gov-east-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "us-gov-east-1",
|
||||
Variant: fipsVariant,
|
||||
}: endpoint{
|
||||
Hostname: "rbin-fips.us-gov-east-1.amazonaws.com",
|
||||
},
|
||||
endpointKey{
|
||||
Region: "us-gov-west-1",
|
||||
}: endpoint{},
|
||||
endpointKey{
|
||||
Region: "us-gov-west-1",
|
||||
Variant: fipsVariant,
|
||||
}: endpoint{
|
||||
Hostname: "rbin-fips.us-gov-west-1.amazonaws.com",
|
||||
},
|
||||
},
|
||||
},
|
||||
"rds": service{
|
||||
Defaults: endpointDefaults{
|
||||
defaultKey{}: endpoint{},
|
||||
|
|
2
vendor/github.com/aws/aws-sdk-go/aws/version.go
generated
vendored
|
@ -5,4 +5,4 @@ package aws
|
|||
const SDKName = "aws-sdk-go"
|
||||
|
||||
// SDKVersion is the version of this SDK
|
||||
const SDKVersion = "1.44.101"
|
||||
const SDKVersion = "1.44.105"
|
||||
|
|
18
vendor/github.com/klauspost/compress/README.md
generated
vendored
|
@ -17,6 +17,17 @@ This package provides various compression algorithms.
|
|||
|
||||
# changelog
|
||||
|
||||
* Sept 16, 2022 (v1.15.10)
|
||||
|
||||
* zstd: Add [WithDecodeAllCapLimit](https://pkg.go.dev/github.com/klauspost/compress@v1.15.10/zstd#WithDecodeAllCapLimit) https://github.com/klauspost/compress/pull/649
|
||||
* Add Go 1.19 - deprecate Go 1.16 https://github.com/klauspost/compress/pull/651
|
||||
* flate: Improve level 5+6 compression https://github.com/klauspost/compress/pull/656
|
||||
* zstd: Improve "better" compresssion https://github.com/klauspost/compress/pull/657
|
||||
* s2: Improve "best" compression https://github.com/klauspost/compress/pull/658
|
||||
* s2: Improve "better" compression. https://github.com/klauspost/compress/pull/635
|
||||
* s2: Slightly faster non-assembly decompression https://github.com/klauspost/compress/pull/646
|
||||
* Use arrays for constant size copies https://github.com/klauspost/compress/pull/659
|
||||
|
||||
* July 21, 2022 (v1.15.9)
|
||||
|
||||
* zstd: Fix decoder crash on amd64 (no BMI) on invalid input https://github.com/klauspost/compress/pull/645
|
||||
|
@ -97,15 +108,15 @@ This package provides various compression algorithms.
|
|||
* gzhttp: Add zstd to transport by @klauspost in [#400](https://github.com/klauspost/compress/pull/400)
|
||||
* gzhttp: Make content-type optional by @klauspost in [#510](https://github.com/klauspost/compress/pull/510)
|
||||
|
||||
<details>
|
||||
<summary>See Details</summary>
|
||||
Both compression and decompression now supports "synchronous" stream operations. This means that whenever "concurrency" is set to 1, they will operate without spawning goroutines.
|
||||
|
||||
Stream decompression is now faster on asynchronous, since the goroutine allocation much more effectively splits the workload. On typical streams this will typically use 2 cores fully for decompression. When a stream has finished decoding no goroutines will be left over, so decoders can now safely be pooled and still be garbage collected.
|
||||
|
||||
While the release has been extensively tested, it is recommended to testing when upgrading.
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>See changes to v1.14.x</summary>
|
||||
|
||||
* Feb 22, 2022 (v1.14.4)
|
||||
* flate: Fix rare huffman only (-2) corruption. [#503](https://github.com/klauspost/compress/pull/503)
|
||||
* zip: Update deprecated CreateHeaderRaw to correctly call CreateRaw by @saracen in [#502](https://github.com/klauspost/compress/pull/502)
|
||||
|
@ -131,6 +142,7 @@ While the release has been extensively tested, it is recommended to testing when
|
|||
* zstd: Performance improvement in [#420]( https://github.com/klauspost/compress/pull/420) [#456](https://github.com/klauspost/compress/pull/456) [#437](https://github.com/klauspost/compress/pull/437) [#467](https://github.com/klauspost/compress/pull/467) [#468](https://github.com/klauspost/compress/pull/468)
|
||||
* zstd: add arm64 xxhash assembly in [#464](https://github.com/klauspost/compress/pull/464)
|
||||
* Add garbled for binaries for s2 in [#445](https://github.com/klauspost/compress/pull/445)
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>See changes to v1.13.x</summary>
|
||||
|
|
6
vendor/github.com/klauspost/compress/flate/deflate.go
generated
vendored
|
@ -374,6 +374,12 @@ func hash4(b []byte) uint32 {
|
|||
return hash4u(binary.LittleEndian.Uint32(b), hashBits)
|
||||
}
|
||||
|
||||
// hash4 returns the hash of u to fit in a hash table with h bits.
|
||||
// Preferably h should be a constant and should always be <32.
|
||||
func hash4u(u uint32, h uint8) uint32 {
|
||||
return (u * prime4bytes) >> (32 - h)
|
||||
}
|
||||
|
||||
// bulkHash4 will compute hashes using the same
|
||||
// algorithm as hash4
|
||||
func bulkHash4(b []byte, dst []uint32) {
|
||||
|
|
56
vendor/github.com/klauspost/compress/flate/fast_encoder.go
generated
vendored
|
@ -58,17 +58,6 @@ const (
|
|||
prime8bytes = 0xcf1bbcdcb7a56463
|
||||
)
|
||||
|
||||
func load32(b []byte, i int) uint32 {
|
||||
// Help the compiler eliminate bounds checks on the read so it can be done in a single read.
|
||||
b = b[i:]
|
||||
b = b[:4]
|
||||
return uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24
|
||||
}
|
||||
|
||||
func load64(b []byte, i int) uint64 {
|
||||
return binary.LittleEndian.Uint64(b[i:])
|
||||
}
|
||||
|
||||
func load3232(b []byte, i int32) uint32 {
|
||||
return binary.LittleEndian.Uint32(b[i:])
|
||||
}
|
||||
|
@ -77,10 +66,6 @@ func load6432(b []byte, i int32) uint64 {
|
|||
return binary.LittleEndian.Uint64(b[i:])
|
||||
}
|
||||
|
||||
func hash(u uint32) uint32 {
|
||||
return (u * 0x1e35a7bd) >> tableShift
|
||||
}
|
||||
|
||||
type tableEntry struct {
|
||||
offset int32
|
||||
}
|
||||
|
@ -115,39 +100,36 @@ func (e *fastGen) addBlock(src []byte) int32 {
|
|||
return s
|
||||
}
|
||||
|
||||
// hash4 returns the hash of u to fit in a hash table with h bits.
|
||||
// Preferably h should be a constant and should always be <32.
|
||||
func hash4u(u uint32, h uint8) uint32 {
|
||||
return (u * prime4bytes) >> (32 - h)
|
||||
}
|
||||
|
||||
type tableEntryPrev struct {
|
||||
Cur tableEntry
|
||||
Prev tableEntry
|
||||
}
|
||||
|
||||
// hash4x64 returns the hash of the lowest 4 bytes of u to fit in a hash table with h bits.
|
||||
// Preferably h should be a constant and should always be <32.
|
||||
func hash4x64(u uint64, h uint8) uint32 {
|
||||
return (uint32(u) * prime4bytes) >> ((32 - h) & reg8SizeMask32)
|
||||
}
|
||||
|
||||
// hash7 returns the hash of the lowest 7 bytes of u to fit in a hash table with h bits.
|
||||
// Preferably h should be a constant and should always be <64.
|
||||
func hash7(u uint64, h uint8) uint32 {
|
||||
return uint32(((u << (64 - 56)) * prime7bytes) >> ((64 - h) & reg8SizeMask64))
|
||||
}
|
||||
|
||||
// hash8 returns the hash of u to fit in a hash table with h bits.
|
||||
// Preferably h should be a constant and should always be <64.
|
||||
func hash8(u uint64, h uint8) uint32 {
|
||||
return uint32((u * prime8bytes) >> ((64 - h) & reg8SizeMask64))
|
||||
}
|
||||
|
||||
// hash6 returns the hash of the lowest 6 bytes of u to fit in a hash table with h bits.
|
||||
// Preferably h should be a constant and should always be <64.
|
||||
func hash6(u uint64, h uint8) uint32 {
|
||||
return uint32(((u << (64 - 48)) * prime6bytes) >> ((64 - h) & reg8SizeMask64))
|
||||
// hashLen returns a hash of the lowest mls bytes of with length output bits.
|
||||
// mls must be >=3 and <=8. Any other value will return hash for 4 bytes.
|
||||
// length should always be < 32.
|
||||
// Preferably length and mls should be a constant for inlining.
|
||||
func hashLen(u uint64, length, mls uint8) uint32 {
|
||||
switch mls {
|
||||
case 3:
|
||||
return (uint32(u<<8) * prime3bytes) >> (32 - length)
|
||||
case 5:
|
||||
return uint32(((u << (64 - 40)) * prime5bytes) >> (64 - length))
|
||||
case 6:
|
||||
return uint32(((u << (64 - 48)) * prime6bytes) >> (64 - length))
|
||||
case 7:
|
||||
return uint32(((u << (64 - 56)) * prime7bytes) >> (64 - length))
|
||||
case 8:
|
||||
return uint32((u * prime8bytes) >> (64 - length))
|
||||
default:
|
||||
return (uint32(u) * prime4bytes) >> (32 - length)
|
||||
}
|
||||
}
|
||||
|
||||
// matchlen will return the match length between offsets and t in src.
|
||||
|
|
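The consolidated `hashLen` helper replaces the per-width hash functions: it shifts the low `mls` bytes to the top of the 64-bit word, multiplies by a per-width prime and keeps the top `length` bits. Below is a standalone sketch of the 5-byte branch, using an illustrative prime constant (the package defines its own).

```go
package main

import "fmt"

// illustrativePrime stands in for the package's prime5bytes constant.
const illustrativePrime = 889523592379

// hash5 sketches the mls==5 branch of hashLen above: keep only the low 5
// bytes of u, spread them with a multiplicative prime, and take the top
// `length` bits as the table index.
func hash5(u uint64, length uint8) uint32 {
	return uint32(((u << (64 - 40)) * illustrativePrime) >> (64 - length))
}

func main() {
	const tableBits = 15
	fmt.Println(hash5(0x1122334455667788, tableBits)) // index into a 1<<tableBits entry table
}
```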
27
vendor/github.com/klauspost/compress/flate/level1.go
generated
vendored
|
@ -19,6 +19,7 @@ func (e *fastEncL1) Encode(dst *tokens, src []byte) {
|
|||
const (
|
||||
inputMargin = 12 - 1
|
||||
minNonLiteralBlockSize = 1 + 1 + inputMargin
|
||||
hashBytes = 5
|
||||
)
|
||||
if debugDeflate && e.cur < 0 {
|
||||
panic(fmt.Sprint("e.cur < 0: ", e.cur))
|
||||
|
@ -68,7 +69,7 @@ func (e *fastEncL1) Encode(dst *tokens, src []byte) {
|
|||
sLimit := int32(len(src) - inputMargin)
|
||||
|
||||
// nextEmit is where in src the next emitLiteral should start from.
|
||||
cv := load3232(src, s)
|
||||
cv := load6432(src, s)
|
||||
|
||||
for {
|
||||
const skipLog = 5
|
||||
|
@ -77,7 +78,7 @@ func (e *fastEncL1) Encode(dst *tokens, src []byte) {
|
|||
nextS := s
|
||||
var candidate tableEntry
|
||||
for {
|
||||
nextHash := hash(cv)
|
||||
nextHash := hashLen(cv, tableBits, hashBytes)
|
||||
candidate = e.table[nextHash]
|
||||
nextS = s + doEvery + (s-nextEmit)>>skipLog
|
||||
if nextS > sLimit {
|
||||
|
@ -86,16 +87,16 @@ func (e *fastEncL1) Encode(dst *tokens, src []byte) {
|
|||
|
||||
now := load6432(src, nextS)
|
||||
e.table[nextHash] = tableEntry{offset: s + e.cur}
|
||||
nextHash = hash(uint32(now))
|
||||
nextHash = hashLen(now, tableBits, hashBytes)
|
||||
|
||||
offset := s - (candidate.offset - e.cur)
|
||||
if offset < maxMatchOffset && cv == load3232(src, candidate.offset-e.cur) {
|
||||
if offset < maxMatchOffset && uint32(cv) == load3232(src, candidate.offset-e.cur) {
|
||||
e.table[nextHash] = tableEntry{offset: nextS + e.cur}
|
||||
break
|
||||
}
|
||||
|
||||
// Do one right away...
|
||||
cv = uint32(now)
|
||||
cv = now
|
||||
s = nextS
|
||||
nextS++
|
||||
candidate = e.table[nextHash]
|
||||
|
@ -103,11 +104,11 @@ func (e *fastEncL1) Encode(dst *tokens, src []byte) {
|
|||
e.table[nextHash] = tableEntry{offset: s + e.cur}
|
||||
|
||||
offset = s - (candidate.offset - e.cur)
|
||||
if offset < maxMatchOffset && cv == load3232(src, candidate.offset-e.cur) {
|
||||
if offset < maxMatchOffset && uint32(cv) == load3232(src, candidate.offset-e.cur) {
|
||||
e.table[nextHash] = tableEntry{offset: nextS + e.cur}
|
||||
break
|
||||
}
|
||||
cv = uint32(now)
|
||||
cv = now
|
||||
s = nextS
|
||||
}
|
||||
|
||||
|
@ -198,9 +199,9 @@ func (e *fastEncL1) Encode(dst *tokens, src []byte) {
|
|||
}
|
||||
if s >= sLimit {
|
||||
// Index first pair after match end.
|
||||
if int(s+l+4) < len(src) {
|
||||
cv := load3232(src, s)
|
||||
e.table[hash(cv)] = tableEntry{offset: s + e.cur}
|
||||
if int(s+l+8) < len(src) {
|
||||
cv := load6432(src, s)
|
||||
e.table[hashLen(cv, tableBits, hashBytes)] = tableEntry{offset: s + e.cur}
|
||||
}
|
||||
goto emitRemainder
|
||||
}
|
||||
|
@ -213,16 +214,16 @@ func (e *fastEncL1) Encode(dst *tokens, src []byte) {
|
|||
// three load32 calls.
|
||||
x := load6432(src, s-2)
|
||||
o := e.cur + s - 2
|
||||
prevHash := hash(uint32(x))
|
||||
prevHash := hashLen(x, tableBits, hashBytes)
|
||||
e.table[prevHash] = tableEntry{offset: o}
|
||||
x >>= 16
|
||||
currHash := hash(uint32(x))
|
||||
currHash := hashLen(x, tableBits, hashBytes)
|
||||
candidate = e.table[currHash]
|
||||
e.table[currHash] = tableEntry{offset: o + 2}
|
||||
|
||||
offset := s - (candidate.offset - e.cur)
|
||||
if offset > maxMatchOffset || uint32(x) != load3232(src, candidate.offset-e.cur) {
|
||||
cv = uint32(x >> 8)
|
||||
cv = x >> 8
|
||||
s++
|
||||
break
|
||||
}
|
||||
|
|
35
vendor/github.com/klauspost/compress/flate/level2.go
generated
vendored
|
@ -16,6 +16,7 @@ func (e *fastEncL2) Encode(dst *tokens, src []byte) {
|
|||
const (
|
||||
inputMargin = 12 - 1
|
||||
minNonLiteralBlockSize = 1 + 1 + inputMargin
|
||||
hashBytes = 5
|
||||
)
|
||||
|
||||
if debugDeflate && e.cur < 0 {
|
||||
|
@ -66,7 +67,7 @@ func (e *fastEncL2) Encode(dst *tokens, src []byte) {
|
|||
sLimit := int32(len(src) - inputMargin)
|
||||
|
||||
// nextEmit is where in src the next emitLiteral should start from.
|
||||
cv := load3232(src, s)
|
||||
cv := load6432(src, s)
|
||||
for {
|
||||
// When should we start skipping if we haven't found matches in a long while.
|
||||
const skipLog = 5
|
||||
|
@ -75,7 +76,7 @@ func (e *fastEncL2) Encode(dst *tokens, src []byte) {
|
|||
nextS := s
|
||||
var candidate tableEntry
|
||||
for {
|
||||
nextHash := hash4u(cv, bTableBits)
|
||||
nextHash := hashLen(cv, bTableBits, hashBytes)
|
||||
s = nextS
|
||||
nextS = s + doEvery + (s-nextEmit)>>skipLog
|
||||
if nextS > sLimit {
|
||||
|
@ -84,16 +85,16 @@ func (e *fastEncL2) Encode(dst *tokens, src []byte) {
|
|||
candidate = e.table[nextHash]
|
||||
now := load6432(src, nextS)
|
||||
e.table[nextHash] = tableEntry{offset: s + e.cur}
|
||||
nextHash = hash4u(uint32(now), bTableBits)
|
||||
nextHash = hashLen(now, bTableBits, hashBytes)
|
||||
|
||||
offset := s - (candidate.offset - e.cur)
|
||||
if offset < maxMatchOffset && cv == load3232(src, candidate.offset-e.cur) {
|
||||
if offset < maxMatchOffset && uint32(cv) == load3232(src, candidate.offset-e.cur) {
|
||||
e.table[nextHash] = tableEntry{offset: nextS + e.cur}
|
||||
break
|
||||
}
|
||||
|
||||
// Do one right away...
|
||||
cv = uint32(now)
|
||||
cv = now
|
||||
s = nextS
|
||||
nextS++
|
||||
candidate = e.table[nextHash]
|
||||
|
@ -101,10 +102,10 @@ func (e *fastEncL2) Encode(dst *tokens, src []byte) {
|
|||
e.table[nextHash] = tableEntry{offset: s + e.cur}
|
||||
|
||||
offset = s - (candidate.offset - e.cur)
|
||||
if offset < maxMatchOffset && cv == load3232(src, candidate.offset-e.cur) {
|
||||
if offset < maxMatchOffset && uint32(cv) == load3232(src, candidate.offset-e.cur) {
|
||||
break
|
||||
}
|
||||
cv = uint32(now)
|
||||
cv = now
|
||||
}
|
||||
|
||||
// A 4-byte match has been found. We'll later see if more than 4 bytes
|
||||
|
@ -154,9 +155,9 @@ func (e *fastEncL2) Encode(dst *tokens, src []byte) {
|
|||
|
||||
if s >= sLimit {
|
||||
// Index first pair after match end.
|
||||
if int(s+l+4) < len(src) {
|
||||
cv := load3232(src, s)
|
||||
e.table[hash4u(cv, bTableBits)] = tableEntry{offset: s + e.cur}
|
||||
if int(s+l+8) < len(src) {
|
||||
cv := load6432(src, s)
|
||||
e.table[hashLen(cv, bTableBits, hashBytes)] = tableEntry{offset: s + e.cur}
|
||||
}
|
||||
goto emitRemainder
|
||||
}
|
||||
|
@ -164,15 +165,15 @@ func (e *fastEncL2) Encode(dst *tokens, src []byte) {
|
|||
// Store every second hash in-between, but offset by 1.
|
||||
for i := s - l + 2; i < s-5; i += 7 {
|
||||
x := load6432(src, i)
|
||||
nextHash := hash4u(uint32(x), bTableBits)
|
||||
nextHash := hashLen(x, bTableBits, hashBytes)
|
||||
e.table[nextHash] = tableEntry{offset: e.cur + i}
|
||||
// Skip one
|
||||
x >>= 16
|
||||
nextHash = hash4u(uint32(x), bTableBits)
|
||||
nextHash = hashLen(x, bTableBits, hashBytes)
|
||||
e.table[nextHash] = tableEntry{offset: e.cur + i + 2}
|
||||
// Skip one
|
||||
x >>= 16
|
||||
nextHash = hash4u(uint32(x), bTableBits)
|
||||
nextHash = hashLen(x, bTableBits, hashBytes)
|
||||
e.table[nextHash] = tableEntry{offset: e.cur + i + 4}
|
||||
}
|
||||
|
||||
|
@ -184,17 +185,17 @@ func (e *fastEncL2) Encode(dst *tokens, src []byte) {
|
|||
// three load32 calls.
|
||||
x := load6432(src, s-2)
|
||||
o := e.cur + s - 2
|
||||
prevHash := hash4u(uint32(x), bTableBits)
|
||||
prevHash2 := hash4u(uint32(x>>8), bTableBits)
|
||||
prevHash := hashLen(x, bTableBits, hashBytes)
|
||||
prevHash2 := hashLen(x>>8, bTableBits, hashBytes)
|
||||
e.table[prevHash] = tableEntry{offset: o}
|
||||
e.table[prevHash2] = tableEntry{offset: o + 1}
|
||||
currHash := hash4u(uint32(x>>16), bTableBits)
|
||||
currHash := hashLen(x>>16, bTableBits, hashBytes)
|
||||
candidate = e.table[currHash]
|
||||
e.table[currHash] = tableEntry{offset: o + 2}
|
||||
|
||||
offset := s - (candidate.offset - e.cur)
|
||||
if offset > maxMatchOffset || uint32(x>>16) != load3232(src, candidate.offset-e.cur) {
|
||||
cv = uint32(x >> 24)
|
||||
cv = x >> 24
|
||||
s++
|
||||
break
|
||||
}
|
||||
|
|
41
vendor/github.com/klauspost/compress/flate/level3.go
generated
vendored
|
@ -11,10 +11,11 @@ type fastEncL3 struct {
|
|||
// Encode uses a similar algorithm to level 2, will check up to two candidates.
|
||||
func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
||||
const (
|
||||
inputMargin = 8 - 1
|
||||
inputMargin = 12 - 1
|
||||
minNonLiteralBlockSize = 1 + 1 + inputMargin
|
||||
tableBits = 16
|
||||
tableSize = 1 << tableBits
|
||||
hashBytes = 5
|
||||
)
|
||||
|
||||
if debugDeflate && e.cur < 0 {
|
||||
|
@ -69,20 +70,20 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
sLimit := int32(len(src) - inputMargin)
|
||||
|
||||
// nextEmit is where in src the next emitLiteral should start from.
|
||||
cv := load3232(src, s)
|
||||
cv := load6432(src, s)
|
||||
for {
|
||||
const skipLog = 6
|
||||
const skipLog = 7
|
||||
nextS := s
|
||||
var candidate tableEntry
|
||||
for {
|
||||
nextHash := hash4u(cv, tableBits)
|
||||
nextHash := hashLen(cv, tableBits, hashBytes)
|
||||
s = nextS
|
||||
nextS = s + 1 + (s-nextEmit)>>skipLog
|
||||
if nextS > sLimit {
|
||||
goto emitRemainder
|
||||
}
|
||||
candidates := e.table[nextHash]
|
||||
now := load3232(src, nextS)
|
||||
now := load6432(src, nextS)
|
||||
|
||||
// Safe offset distance until s + 4...
|
||||
minOffset := e.cur + s - (maxMatchOffset - 4)
|
||||
|
@ -96,8 +97,8 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
continue
|
||||
}
|
||||
|
||||
if cv == load3232(src, candidate.offset-e.cur) {
|
||||
if candidates.Prev.offset < minOffset || cv != load3232(src, candidates.Prev.offset-e.cur) {
|
||||
if uint32(cv) == load3232(src, candidate.offset-e.cur) {
|
||||
if candidates.Prev.offset < minOffset || uint32(cv) != load3232(src, candidates.Prev.offset-e.cur) {
|
||||
break
|
||||
}
|
||||
// Both match and are valid, pick longest.
|
||||
|
@ -112,7 +113,7 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
// We only check if value mismatches.
|
||||
// Offset will always be invalid in other cases.
|
||||
candidate = candidates.Prev
|
||||
if candidate.offset > minOffset && cv == load3232(src, candidate.offset-e.cur) {
|
||||
if candidate.offset > minOffset && uint32(cv) == load3232(src, candidate.offset-e.cur) {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
@ -164,9 +165,9 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
if s >= sLimit {
|
||||
t += l
|
||||
// Index first pair after match end.
|
||||
if int(t+4) < len(src) && t > 0 {
|
||||
cv := load3232(src, t)
|
||||
nextHash := hash4u(cv, tableBits)
|
||||
if int(t+8) < len(src) && t > 0 {
|
||||
cv = load6432(src, t)
|
||||
nextHash := hashLen(cv, tableBits, hashBytes)
|
||||
e.table[nextHash] = tableEntryPrev{
|
||||
Prev: e.table[nextHash].Cur,
|
||||
Cur: tableEntry{offset: e.cur + t},
|
||||
|
@ -176,8 +177,8 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
}
|
||||
|
||||
// Store every 5th hash in-between.
|
||||
for i := s - l + 2; i < s-5; i += 5 {
|
||||
nextHash := hash4u(load3232(src, i), tableBits)
|
||||
for i := s - l + 2; i < s-5; i += 6 {
|
||||
nextHash := hashLen(load6432(src, i), tableBits, hashBytes)
|
||||
e.table[nextHash] = tableEntryPrev{
|
||||
Prev: e.table[nextHash].Cur,
|
||||
Cur: tableEntry{offset: e.cur + i}}
|
||||
|
@ -185,23 +186,23 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
// We could immediately start working at s now, but to improve
|
||||
// compression we first update the hash table at s-2 to s.
|
||||
x := load6432(src, s-2)
|
||||
prevHash := hash4u(uint32(x), tableBits)
|
||||
prevHash := hashLen(x, tableBits, hashBytes)
|
||||
|
||||
e.table[prevHash] = tableEntryPrev{
|
||||
Prev: e.table[prevHash].Cur,
|
||||
Cur: tableEntry{offset: e.cur + s - 2},
|
||||
}
|
||||
x >>= 8
|
||||
prevHash = hash4u(uint32(x), tableBits)
|
||||
prevHash = hashLen(x, tableBits, hashBytes)
|
||||
|
||||
e.table[prevHash] = tableEntryPrev{
|
||||
Prev: e.table[prevHash].Cur,
|
||||
Cur: tableEntry{offset: e.cur + s - 1},
|
||||
}
|
||||
x >>= 8
|
||||
currHash := hash4u(uint32(x), tableBits)
|
||||
currHash := hashLen(x, tableBits, hashBytes)
|
||||
candidates := e.table[currHash]
|
||||
cv = uint32(x)
|
||||
cv = x
|
||||
e.table[currHash] = tableEntryPrev{
|
||||
Prev: candidates.Cur,
|
||||
Cur: tableEntry{offset: s + e.cur},
|
||||
|
@ -212,17 +213,17 @@ func (e *fastEncL3) Encode(dst *tokens, src []byte) {
|
|||
minOffset := e.cur + s - (maxMatchOffset - 4)
|
||||
|
||||
if candidate.offset > minOffset {
|
||||
if cv == load3232(src, candidate.offset-e.cur) {
|
||||
if uint32(cv) == load3232(src, candidate.offset-e.cur) {
|
||||
// Found a match...
|
||||
continue
|
||||
}
|
||||
candidate = candidates.Prev
|
||||
if candidate.offset > minOffset && cv == load3232(src, candidate.offset-e.cur) {
|
||||
if candidate.offset > minOffset && uint32(cv) == load3232(src, candidate.offset-e.cur) {
|
||||
// Match at prev...
|
||||
continue
|
||||
}
|
||||
}
|
||||
cv = uint32(x >> 8)
|
||||
cv = x >> 8
|
||||
s++
|
||||
break
|
||||
}
|
||||
|
|
vendor/github.com/klauspost/compress/flate/level4.go (generated, vendored): 11 changes
|
@ -12,6 +12,7 @@ func (e *fastEncL4) Encode(dst *tokens, src []byte) {
|
|||
const (
|
||||
inputMargin = 12 - 1
|
||||
minNonLiteralBlockSize = 1 + 1 + inputMargin
|
||||
hashShortBytes = 4
|
||||
)
|
||||
if debugDeflate && e.cur < 0 {
|
||||
panic(fmt.Sprint("e.cur < 0: ", e.cur))
|
||||
|
@ -80,7 +81,7 @@ func (e *fastEncL4) Encode(dst *tokens, src []byte) {
|
|||
nextS := s
|
||||
var t int32
|
||||
for {
|
||||
nextHashS := hash4x64(cv, tableBits)
|
||||
nextHashS := hashLen(cv, tableBits, hashShortBytes)
|
||||
nextHashL := hash7(cv, tableBits)
|
||||
|
||||
s = nextS
|
||||
|
@ -168,7 +169,7 @@ func (e *fastEncL4) Encode(dst *tokens, src []byte) {
|
|||
// Index first pair after match end.
|
||||
if int(s+8) < len(src) {
|
||||
cv := load6432(src, s)
|
||||
e.table[hash4x64(cv, tableBits)] = tableEntry{offset: s + e.cur}
|
||||
e.table[hashLen(cv, tableBits, hashShortBytes)] = tableEntry{offset: s + e.cur}
|
||||
e.bTable[hash7(cv, tableBits)] = tableEntry{offset: s + e.cur}
|
||||
}
|
||||
goto emitRemainder
|
||||
|
@ -183,7 +184,7 @@ func (e *fastEncL4) Encode(dst *tokens, src []byte) {
|
|||
t2 := tableEntry{offset: t.offset + 1}
|
||||
e.bTable[hash7(cv, tableBits)] = t
|
||||
e.bTable[hash7(cv>>8, tableBits)] = t2
|
||||
e.table[hash4u(uint32(cv>>8), tableBits)] = t2
|
||||
e.table[hashLen(cv>>8, tableBits, hashShortBytes)] = t2
|
||||
|
||||
i += 3
|
||||
for ; i < s-1; i += 3 {
|
||||
|
@ -192,7 +193,7 @@ func (e *fastEncL4) Encode(dst *tokens, src []byte) {
|
|||
t2 := tableEntry{offset: t.offset + 1}
|
||||
e.bTable[hash7(cv, tableBits)] = t
|
||||
e.bTable[hash7(cv>>8, tableBits)] = t2
|
||||
e.table[hash4u(uint32(cv>>8), tableBits)] = t2
|
||||
e.table[hashLen(cv>>8, tableBits, hashShortBytes)] = t2
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -201,7 +202,7 @@ func (e *fastEncL4) Encode(dst *tokens, src []byte) {
|
|||
// compression we first update the hash table at s-1 and at s.
|
||||
x := load6432(src, s-1)
|
||||
o := e.cur + s - 1
|
||||
prevHashS := hash4x64(x, tableBits)
|
||||
prevHashS := hashLen(x, tableBits, hashShortBytes)
|
||||
prevHashL := hash7(x, tableBits)
|
||||
e.table[prevHashS] = tableEntry{offset: o}
|
||||
e.bTable[prevHashL] = tableEntry{offset: o}
|
||||
|
|
vendor/github.com/klauspost/compress/flate/level5.go (generated, vendored): 13 changes
|
@ -12,6 +12,7 @@ func (e *fastEncL5) Encode(dst *tokens, src []byte) {
|
|||
const (
|
||||
inputMargin = 12 - 1
|
||||
minNonLiteralBlockSize = 1 + 1 + inputMargin
|
||||
hashShortBytes = 4
|
||||
)
|
||||
if debugDeflate && e.cur < 0 {
|
||||
panic(fmt.Sprint("e.cur < 0: ", e.cur))
|
||||
|
@ -88,7 +89,7 @@ func (e *fastEncL5) Encode(dst *tokens, src []byte) {
|
|||
var l int32
|
||||
var t int32
|
||||
for {
|
||||
nextHashS := hash4x64(cv, tableBits)
|
||||
nextHashS := hashLen(cv, tableBits, hashShortBytes)
|
||||
nextHashL := hash7(cv, tableBits)
|
||||
|
||||
s = nextS
|
||||
|
@ -105,7 +106,7 @@ func (e *fastEncL5) Encode(dst *tokens, src []byte) {
|
|||
eLong := &e.bTable[nextHashL]
|
||||
eLong.Cur, eLong.Prev = entry, eLong.Cur
|
||||
|
||||
nextHashS = hash4x64(next, tableBits)
|
||||
nextHashS = hashLen(next, tableBits, hashShortBytes)
|
||||
nextHashL = hash7(next, tableBits)
|
||||
|
||||
t = lCandidate.Cur.offset - e.cur
|
||||
|
@ -257,7 +258,7 @@ func (e *fastEncL5) Encode(dst *tokens, src []byte) {
|
|||
if i < s-1 {
|
||||
cv := load6432(src, i)
|
||||
t := tableEntry{offset: i + e.cur}
|
||||
e.table[hash4x64(cv, tableBits)] = t
|
||||
e.table[hashLen(cv, tableBits, hashShortBytes)] = t
|
||||
eLong := &e.bTable[hash7(cv, tableBits)]
|
||||
eLong.Cur, eLong.Prev = t, eLong.Cur
|
||||
|
||||
|
@ -270,7 +271,7 @@ func (e *fastEncL5) Encode(dst *tokens, src []byte) {
|
|||
// We only have enough bits for a short entry at i+2
|
||||
cv >>= 8
|
||||
t = tableEntry{offset: t.offset + 1}
|
||||
e.table[hash4x64(cv, tableBits)] = t
|
||||
e.table[hashLen(cv, tableBits, hashShortBytes)] = t
|
||||
|
||||
// Skip one - otherwise we risk hitting 's'
|
||||
i += 4
|
||||
|
@ -280,7 +281,7 @@ func (e *fastEncL5) Encode(dst *tokens, src []byte) {
|
|||
t2 := tableEntry{offset: t.offset + 1}
|
||||
eLong := &e.bTable[hash7(cv, tableBits)]
|
||||
eLong.Cur, eLong.Prev = t, eLong.Cur
|
||||
e.table[hash4u(uint32(cv>>8), tableBits)] = t2
|
||||
e.table[hashLen(cv>>8, tableBits, hashShortBytes)] = t2
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -289,7 +290,7 @@ func (e *fastEncL5) Encode(dst *tokens, src []byte) {
|
|||
// compression we first update the hash table at s-1 and at s.
|
||||
x := load6432(src, s-1)
|
||||
o := e.cur + s - 1
|
||||
prevHashS := hash4x64(x, tableBits)
|
||||
prevHashS := hashLen(x, tableBits, hashShortBytes)
|
||||
prevHashL := hash7(x, tableBits)
|
||||
e.table[prevHashS] = tableEntry{offset: o}
|
||||
eLong := &e.bTable[prevHashL]
|
||||
|
|
vendor/github.com/klauspost/compress/flate/level6.go (generated, vendored): 9 changes
|
@ -12,6 +12,7 @@ func (e *fastEncL6) Encode(dst *tokens, src []byte) {
|
|||
const (
|
||||
inputMargin = 12 - 1
|
||||
minNonLiteralBlockSize = 1 + 1 + inputMargin
|
||||
hashShortBytes = 4
|
||||
)
|
||||
if debugDeflate && e.cur < 0 {
|
||||
panic(fmt.Sprint("e.cur < 0: ", e.cur))
|
||||
|
@ -90,7 +91,7 @@ func (e *fastEncL6) Encode(dst *tokens, src []byte) {
|
|||
var l int32
|
||||
var t int32
|
||||
for {
|
||||
nextHashS := hash4x64(cv, tableBits)
|
||||
nextHashS := hashLen(cv, tableBits, hashShortBytes)
|
||||
nextHashL := hash7(cv, tableBits)
|
||||
s = nextS
|
||||
nextS = s + doEvery + (s-nextEmit)>>skipLog
|
||||
|
@ -107,7 +108,7 @@ func (e *fastEncL6) Encode(dst *tokens, src []byte) {
|
|||
eLong.Cur, eLong.Prev = entry, eLong.Cur
|
||||
|
||||
// Calculate hashes of 'next'
|
||||
nextHashS = hash4x64(next, tableBits)
|
||||
nextHashS = hashLen(next, tableBits, hashShortBytes)
|
||||
nextHashL = hash7(next, tableBits)
|
||||
|
||||
t = lCandidate.Cur.offset - e.cur
|
||||
|
@ -286,7 +287,7 @@ func (e *fastEncL6) Encode(dst *tokens, src []byte) {
|
|||
// Index after match end.
|
||||
for i := nextS + 1; i < int32(len(src))-8; i += 2 {
|
||||
cv := load6432(src, i)
|
||||
e.table[hash4x64(cv, tableBits)] = tableEntry{offset: i + e.cur}
|
||||
e.table[hashLen(cv, tableBits, hashShortBytes)] = tableEntry{offset: i + e.cur}
|
||||
eLong := &e.bTable[hash7(cv, tableBits)]
|
||||
eLong.Cur, eLong.Prev = tableEntry{offset: i + e.cur}, eLong.Cur
|
||||
}
|
||||
|
@ -301,7 +302,7 @@ func (e *fastEncL6) Encode(dst *tokens, src []byte) {
|
|||
t2 := tableEntry{offset: t.offset + 1}
|
||||
eLong := &e.bTable[hash7(cv, tableBits)]
|
||||
eLong2 := &e.bTable[hash7(cv>>8, tableBits)]
|
||||
e.table[hash4x64(cv, tableBits)] = t
|
||||
e.table[hashLen(cv, tableBits, hashShortBytes)] = t
|
||||
eLong.Cur, eLong.Prev = t, eLong.Cur
|
||||
eLong2.Cur, eLong2.Prev = t2, eLong2.Cur
|
||||
}
|
||||
|
|
vendor/github.com/klauspost/compress/s2/README.md (generated, vendored): 268 changes
|
@ -325,35 +325,35 @@ The content compressed in this mode is fully compatible with the standard decode
|
|||
|
||||
Snappy vs S2 **compression** speed on 16 core (32 thread) computer, using all threads and a single thread (1 CPU):
|
||||
|
||||
| File | S2 speed | S2 Throughput | S2 % smaller | S2 "better" | "better" throughput | "better" % smaller |
|
||||
|-----------------------------------------------------------------------------------------------------|----------|---------------|--------------|-------------|---------------------|--------------------|
|
||||
| [rawstudio-mint14.tar](https://files.klauspost.com/compress/rawstudio-mint14.7z) | 12.70x | 10556 MB/s | 7.35% | 4.15x | 3455 MB/s | 12.79% |
|
||||
| (1 CPU) | 1.14x | 948 MB/s | - | 0.42x | 349 MB/s | - |
|
||||
| [github-june-2days-2019.json](https://files.klauspost.com/compress/github-june-2days-2019.json.zst) | 17.13x | 14484 MB/s | 31.60% | 10.09x | 8533 MB/s | 37.71% |
|
||||
| (1 CPU) | 1.33x | 1127 MB/s | - | 0.70x | 589 MB/s | - |
|
||||
| [github-ranks-backup.bin](https://files.klauspost.com/compress/github-ranks-backup.bin.zst) | 15.14x | 12000 MB/s | -5.79% | 6.59x | 5223 MB/s | 5.80% |
|
||||
| (1 CPU) | 1.11x | 877 MB/s | - | 0.47x | 370 MB/s | - |
|
||||
| [consensus.db.10gb](https://files.klauspost.com/compress/consensus.db.10gb.zst) | 14.62x | 12116 MB/s | 15.90% | 5.35x | 4430 MB/s | 16.08% |
|
||||
| (1 CPU) | 1.38x | 1146 MB/s | - | 0.38x | 312 MB/s | - |
|
||||
| [adresser.json](https://files.klauspost.com/compress/adresser.json.zst) | 8.83x | 17579 MB/s | 43.86% | 6.54x | 13011 MB/s | 47.23% |
|
||||
| (1 CPU) | 1.14x | 2259 MB/s | - | 0.74x | 1475 MB/s | - |
|
||||
| [gob-stream](https://files.klauspost.com/compress/gob-stream.7z) | 16.72x | 14019 MB/s | 24.02% | 10.11x | 8477 MB/s | 30.48% |
|
||||
| (1 CPU) | 1.24x | 1043 MB/s | - | 0.70x | 586 MB/s | - |
|
||||
| [10gb.tar](http://mattmahoney.net/dc/10gb.html) | 13.33x | 9254 MB/s | 1.84% | 6.75x | 4686 MB/s | 6.72% |
|
||||
| (1 CPU) | 0.97x | 672 MB/s | - | 0.53x | 366 MB/s | - |
|
||||
| sharnd.out.2gb | 2.11x | 12639 MB/s | 0.01% | 1.98x | 11833 MB/s | 0.01% |
|
||||
| (1 CPU) | 0.93x | 5594 MB/s | - | 1.34x | 8030 MB/s | - |
|
||||
| [enwik9](http://mattmahoney.net/dc/textdata.html) | 19.34x | 8220 MB/s | 3.98% | 7.87x | 3345 MB/s | 15.82% |
|
||||
| (1 CPU) | 1.06x | 452 MB/s | - | 0.50x | 213 MB/s | - |
|
||||
| [silesia.tar](http://sun.aei.polsl.pl/~sdeor/corpus/silesia.zip) | 10.48x | 6124 MB/s | 5.67% | 3.76x | 2197 MB/s | 12.60% |
|
||||
| (1 CPU) | 0.97x | 568 MB/s | - | 0.46x | 271 MB/s | - |
|
||||
| [enwik10](https://encode.su/threads/3315-enwik10-benchmark-results) | 21.07x | 9020 MB/s | 6.36% | 6.91x | 2959 MB/s | 16.95% |
|
||||
| (1 CPU) | 1.07x | 460 MB/s | - | 0.51x | 220 MB/s | - |
|
||||
| File | S2 Speed | S2 Throughput | S2 % smaller | S2 "better" | "better" throughput | "better" % smaller |
|
||||
|---------------------------------------------------------------------------------------------------------|----------|---------------|--------------|-------------|---------------------|--------------------|
|
||||
| [rawstudio-mint14.tar](https://files.klauspost.com/compress/rawstudio-mint14.7z) | 16.33x | 10556 MB/s | 8.0% | 6.04x | 5252 MB/s | 14.7% |
|
||||
| (1 CPU) | 1.08x | 940 MB/s | - | 0.46x | 400 MB/s | - |
|
||||
| [github-june-2days-2019.json](https://files.klauspost.com/compress/github-june-2days-2019.json.zst) | 16.51x | 15224 MB/s | 31.70% | 9.47x | 8734 MB/s | 37.71% |
|
||||
| (1 CPU) | 1.26x | 1157 MB/s | - | 0.60x | 556 MB/s | - |
|
||||
| [github-ranks-backup.bin](https://files.klauspost.com/compress/github-ranks-backup.bin.zst) | 15.14x | 12598 MB/s | -5.76% | 6.23x | 5675 MB/s | 3.62% |
|
||||
| (1 CPU) | 1.02x | 932 MB/s | - | 0.47x | 432 MB/s | - |
|
||||
| [consensus.db.10gb](https://files.klauspost.com/compress/consensus.db.10gb.zst) | 11.21x | 12116 MB/s | 15.95% | 3.24x | 3500 MB/s | 18.00% |
|
||||
| (1 CPU) | 1.05x | 1135 MB/s | - | 0.27x | 292 MB/s | - |
|
||||
| [apache.log](https://files.klauspost.com/compress/apache.log.zst) | 8.55x | 16673 MB/s | 20.54% | 5.85x | 11420 MB/s | 24.97% |
|
||||
| (1 CPU) | 1.91x | 1771 MB/s | - | 0.53x | 1041 MB/s | - |
|
||||
| [gob-stream](https://files.klauspost.com/compress/gob-stream.7z) | 15.76x | 14357 MB/s | 24.01% | 8.67x | 7891 MB/s | 33.68% |
|
||||
| (1 CPU) | 1.17x | 1064 MB/s | - | 0.65x | 595 MB/s | - |
|
||||
| [10gb.tar](http://mattmahoney.net/dc/10gb.html) | 13.33x | 9835 MB/s | 2.34% | 6.85x | 4863 MB/s | 9.96% |
|
||||
| (1 CPU) | 0.97x | 689 MB/s | - | 0.55x | 387 MB/s | - |
|
||||
| sharnd.out.2gb | 9.11x | 13213 MB/s | 0.01% | 1.49x | 9184 MB/s | 0.01% |
|
||||
| (1 CPU) | 0.88x | 5418 MB/s | - | 0.77x | 5417 MB/s | - |
|
||||
| [sofia-air-quality-dataset csv](https://files.klauspost.com/compress/sofia-air-quality-dataset.tar.zst) | 22.00x | 11477 MB/s | 18.73% | 11.15x | 5817 MB/s | 27.88% |
|
||||
| (1 CPU) | 1.23x | 642 MB/s | - | 0.71x | 642 MB/s | - |
|
||||
| [silesia.tar](http://sun.aei.polsl.pl/~sdeor/corpus/silesia.zip) | 11.23x | 6520 MB/s | 5.9% | 5.35x | 3109 MB/s | 15.88% |
|
||||
| (1 CPU) | 1.05x | 607 MB/s | - | 0.52x | 304 MB/s | - |
|
||||
| [enwik9](https://files.klauspost.com/compress/enwik9.zst) | 19.28x | 8440 MB/s | 4.04% | 9.31x | 4076 MB/s | 18.04% |
|
||||
| (1 CPU) | 1.12x | 488 MB/s | - | 0.57x | 250 MB/s | - |
|
||||
|
||||
### Legend
|
||||
|
||||
* `S2 speed`: Speed of S2 compared to Snappy, using 16 cores and 1 core.
|
||||
* `S2 throughput`: Throughput of S2 in MB/s.
|
||||
* `S2 Speed`: Speed of S2 compared to Snappy, using 16 cores and 1 core.
|
||||
* `S2 Throughput`: Throughput of S2 in MB/s.
|
||||
* `S2 % smaller`: How many percent of the Snappy output size is S2 better.
|
||||
* `S2 "better"`: Speed when enabling "better" compression mode in S2 compared to Snappy.
|
||||
* `"better" throughput`: Speed when enabling "better" compression mode in S2 compared to Snappy.
|
||||
|
@ -361,7 +361,7 @@ Snappy vs S2 **compression** speed on 16 core (32 thread) computer, using all th
|
|||
|
||||
There is a good speedup across the board when using a single thread and a significant speedup when using multiple threads.
|
||||
|
||||
Machine generated data gets by far the biggest compression boost, with size being being reduced by up to 45% of Snappy size.
|
||||
Machine generated data gets by far the biggest compression boost, with size being reduced by up to 35% of Snappy size.
|
||||
|
||||
The "better" compression mode sees a good improvement in all cases, but usually at a performance cost.
|
||||
|
||||
|
@ -404,15 +404,15 @@ The "better" compression mode will actively look for shorter matches, which is w
|
|||
Without assembly decompression is also very fast; single goroutine decompression speed. No assembly:
|
||||
|
||||
| File | S2 Throughput | S2 throughput |
|
||||
|--------------------------------|--------------|---------------|
|
||||
| consensus.db.10gb.s2 | 1.84x | 2289.8 MB/s |
|
||||
| 10gb.tar.s2 | 1.30x | 867.07 MB/s |
|
||||
| rawstudio-mint14.tar.s2 | 1.66x | 1329.65 MB/s |
|
||||
| github-june-2days-2019.json.s2 | 2.36x | 1831.59 MB/s |
|
||||
| github-ranks-backup.bin.s2 | 1.73x | 1390.7 MB/s |
|
||||
| enwik9.s2 | 1.67x | 681.53 MB/s |
|
||||
| adresser.json.s2 | 3.41x | 4230.53 MB/s |
|
||||
| silesia.tar.s2 | 1.52x | 811.58 |
|
||||
|--------------------------------|---------------|---------------|
|
||||
| consensus.db.10gb.s2 | 1.84x | 2289.8 MB/s |
|
||||
| 10gb.tar.s2 | 1.30x | 867.07 MB/s |
|
||||
| rawstudio-mint14.tar.s2 | 1.66x | 1329.65 MB/s |
|
||||
| github-june-2days-2019.json.s2 | 2.36x | 1831.59 MB/s |
|
||||
| github-ranks-backup.bin.s2 | 1.73x | 1390.7 MB/s |
|
||||
| enwik9.s2 | 1.67x | 681.53 MB/s |
|
||||
| adresser.json.s2 | 3.41x | 4230.53 MB/s |
|
||||
| silesia.tar.s2 | 1.52x | 811.58 |
|
||||
|
||||
Even though S2 typically compresses better than Snappy, decompression speed is always better.
|
||||
|
||||
|
@ -450,14 +450,14 @@ The most reliable is a wide dataset.
|
|||
For this we use [`webdevdata.org-2015-01-07-subset`](https://files.klauspost.com/compress/webdevdata.org-2015-01-07-4GB-subset.7z),
|
||||
53927 files, total input size: 4,014,735,833 bytes. Single goroutine used.
|
||||
|
||||
| * | Input | Output | Reduction | MB/s |
|
||||
|-------------------|------------|------------|-----------|--------|
|
||||
| S2 | 4014735833 | 1059723369 | 73.60% | **934.34** |
|
||||
| S2 Better | 4014735833 | 969670507 | 75.85% | 532.70 |
|
||||
| S2 Best | 4014735833 | 906625668 | **77.85%** | 46.84 |
|
||||
| Snappy | 4014735833 | 1128706759 | 71.89% | 762.59 |
|
||||
| S2, Snappy Output | 4014735833 | 1093821420 | 72.75% | 908.60 |
|
||||
| LZ4 | 4014735833 | 1079259294 | 73.12% | 526.94 |
|
||||
| * | Input | Output | Reduction | MB/s |
|
||||
|-------------------|------------|------------|------------|------------|
|
||||
| S2 | 4014735833 | 1059723369 | 73.60% | **936.73** |
|
||||
| S2 Better | 4014735833 | 961580539 | 76.05% | 451.10 |
|
||||
| S2 Best | 4014735833 | 899182886 | **77.60%** | 46.84 |
|
||||
| Snappy | 4014735833 | 1128706759 | 71.89% | 790.15 |
|
||||
| S2, Snappy Output | 4014735833 | 1093823291 | 72.75% | 936.60 |
|
||||
| LZ4 | 4014735833 | 1063768713 | 73.50% | 452.02 |
|
||||
|
||||
S2 delivers both the best single threaded throughput with regular mode and the best compression rate with "best".
|
||||
"Better" mode provides the same compression speed as LZ4 with better compression ratio.
|
||||
|
@ -489,43 +489,24 @@ AMD64 assembly is use for both S2 and Snappy.
|
|||
|
||||
| Absolute Perf | Snappy size | S2 Size | Snappy Speed | S2 Speed | Snappy dec | S2 dec |
|
||||
|-----------------------|-------------|---------|--------------|-------------|-------------|-------------|
|
||||
| html | 22843 | 21111 | 16246 MB/s | 17438 MB/s | 40972 MB/s | 49263 MB/s |
|
||||
| urls.10K | 335492 | 287326 | 7943 MB/s | 9693 MB/s | 22523 MB/s | 26484 MB/s |
|
||||
| fireworks.jpeg | 123034 | 123100 | 349544 MB/s | 273889 MB/s | 718321 MB/s | 827552 MB/s |
|
||||
| fireworks.jpeg (200B) | 146 | 155 | 8869 MB/s | 17773 MB/s | 33691 MB/s | 52421 MB/s |
|
||||
| paper-100k.pdf | 85304 | 84459 | 167546 MB/s | 101263 MB/s | 326905 MB/s | 291944 MB/s |
|
||||
| html_x_4 | 92234 | 21113 | 15194 MB/s | 50670 MB/s | 30843 MB/s | 32217 MB/s |
|
||||
| alice29.txt | 88034 | 85975 | 5936 MB/s | 6139 MB/s | 12882 MB/s | 20044 MB/s |
|
||||
| asyoulik.txt | 77503 | 79650 | 5517 MB/s | 6366 MB/s | 12735 MB/s | 22806 MB/s |
|
||||
| lcet10.txt | 234661 | 220670 | 6235 MB/s | 6067 MB/s | 14519 MB/s | 18697 MB/s |
|
||||
| plrabn12.txt | 319267 | 317985 | 5159 MB/s | 5726 MB/s | 11923 MB/s | 19901 MB/s |
|
||||
| geo.protodata | 23335 | 18690 | 21220 MB/s | 26529 MB/s | 56271 MB/s | 62540 MB/s |
|
||||
| kppkn.gtb | 69526 | 65312 | 9732 MB/s | 8559 MB/s | 18491 MB/s | 18969 MB/s |
|
||||
| alice29.txt (128B) | 80 | 82 | 6691 MB/s | 15489 MB/s | 31883 MB/s | 38874 MB/s |
|
||||
| alice29.txt (1000B) | 774 | 774 | 12204 MB/s | 13000 MB/s | 48056 MB/s | 52341 MB/s |
|
||||
| alice29.txt (10000B) | 6648 | 6933 | 10044 MB/s | 12806 MB/s | 32378 MB/s | 46322 MB/s |
|
||||
| alice29.txt (20000B) | 12686 | 13574 | 7733 MB/s | 11210 MB/s | 30566 MB/s | 58969 MB/s |
|
||||
| html | 22843 | 20868 | 16246 MB/s | 18617 MB/s | 40972 MB/s | 49263 MB/s |
|
||||
| urls.10K | 335492 | 286541 | 7943 MB/s | 10201 MB/s | 22523 MB/s | 26484 MB/s |
|
||||
| fireworks.jpeg | 123034 | 123100 | 349544 MB/s | 303228 MB/s | 718321 MB/s | 827552 MB/s |
|
||||
| fireworks.jpeg (200B) | 146 | 155 | 8869 MB/s | 20180 MB/s | 33691 MB/s | 52421 MB/s |
|
||||
| paper-100k.pdf | 85304 | 84202 | 167546 MB/s | 112988 MB/s | 326905 MB/s | 291944 MB/s |
|
||||
| html_x_4 | 92234 | 20870 | 15194 MB/s | 54457 MB/s | 30843 MB/s | 32217 MB/s |
|
||||
| alice29.txt | 88034 | 85934 | 5936 MB/s | 6540 MB/s | 12882 MB/s | 20044 MB/s |
|
||||
| asyoulik.txt | 77503 | 79575 | 5517 MB/s | 6657 MB/s | 12735 MB/s | 22806 MB/s |
|
||||
| lcet10.txt | 234661 | 220383 | 6235 MB/s | 6303 MB/s | 14519 MB/s | 18697 MB/s |
|
||||
| plrabn12.txt | 319267 | 318196 | 5159 MB/s | 6074 MB/s | 11923 MB/s | 19901 MB/s |
|
||||
| geo.protodata | 23335 | 18606 | 21220 MB/s | 25432 MB/s | 56271 MB/s | 62540 MB/s |
|
||||
| kppkn.gtb | 69526 | 65019 | 9732 MB/s | 8905 MB/s | 18491 MB/s | 18969 MB/s |
|
||||
| alice29.txt (128B) | 80 | 82 | 6691 MB/s | 17179 MB/s | 31883 MB/s | 38874 MB/s |
|
||||
| alice29.txt (1000B) | 774 | 774 | 12204 MB/s | 13273 MB/s | 48056 MB/s | 52341 MB/s |
|
||||
| alice29.txt (10000B) | 6648 | 6933 | 10044 MB/s | 12824 MB/s | 32378 MB/s | 46322 MB/s |
|
||||
| alice29.txt (20000B) | 12686 | 13516 | 7733 MB/s | 12160 MB/s | 30566 MB/s | 58969 MB/s |
|
||||
|
||||
|
||||
| Relative Perf | Snappy size | S2 size improved | S2 Speed | S2 Dec Speed |
|
||||
|-----------------------|-------------|------------------|----------|--------------|
|
||||
| html | 22.31% | 7.58% | 1.07x | 1.20x |
|
||||
| urls.10K | 47.78% | 14.36% | 1.22x | 1.18x |
|
||||
| fireworks.jpeg | 99.95% | -0.05% | 0.78x | 1.15x |
|
||||
| fireworks.jpeg (200B) | 73.00% | -6.16% | 2.00x | 1.56x |
|
||||
| paper-100k.pdf | 83.30% | 0.99% | 0.60x | 0.89x |
|
||||
| html_x_4 | 22.52% | 77.11% | 3.33x | 1.04x |
|
||||
| alice29.txt | 57.88% | 2.34% | 1.03x | 1.56x |
|
||||
| asyoulik.txt | 61.91% | -2.77% | 1.15x | 1.79x |
|
||||
| lcet10.txt | 54.99% | 5.96% | 0.97x | 1.29x |
|
||||
| plrabn12.txt | 66.26% | 0.40% | 1.11x | 1.67x |
|
||||
| geo.protodata | 19.68% | 19.91% | 1.25x | 1.11x |
|
||||
| kppkn.gtb | 37.72% | 6.06% | 0.88x | 1.03x |
|
||||
| alice29.txt (128B) | 62.50% | -2.50% | 2.31x | 1.22x |
|
||||
| alice29.txt (1000B) | 77.40% | 0.00% | 1.07x | 1.09x |
|
||||
| alice29.txt (10000B) | 66.48% | -4.29% | 1.27x | 1.43x |
|
||||
| alice29.txt (20000B) | 63.43% | -7.00% | 1.45x | 1.93x |
|
||||
|
||||
Speed is generally at or above Snappy. Small blocks get a significant speedup, although at the expense of size.
|
||||
|
||||
Decompression speed is better than Snappy, except in one case.
|
||||
|
@ -543,43 +524,24 @@ So individual benchmarks should only be seen as a guideline and the overall pict
|
|||
|
||||
| Absolute Perf | Snappy size | Better Size | Snappy Speed | Better Speed | Snappy dec | Better dec |
|
||||
|-----------------------|-------------|-------------|--------------|--------------|-------------|-------------|
|
||||
| html | 22843 | 19833 | 16246 MB/s | 7731 MB/s | 40972 MB/s | 40292 MB/s |
|
||||
| urls.10K | 335492 | 253529 | 7943 MB/s | 3980 MB/s | 22523 MB/s | 20981 MB/s |
|
||||
| fireworks.jpeg | 123034 | 123100 | 349544 MB/s | 9760 MB/s | 718321 MB/s | 823698 MB/s |
|
||||
| fireworks.jpeg (200B) | 146 | 142 | 8869 MB/s | 594 MB/s | 33691 MB/s | 30101 MB/s |
|
||||
| paper-100k.pdf | 85304 | 82915 | 167546 MB/s | 7470 MB/s | 326905 MB/s | 198869 MB/s |
|
||||
| html_x_4 | 92234 | 19841 | 15194 MB/s | 23403 MB/s | 30843 MB/s | 30937 MB/s |
|
||||
| alice29.txt | 88034 | 73218 | 5936 MB/s | 2945 MB/s | 12882 MB/s | 16611 MB/s |
|
||||
| asyoulik.txt | 77503 | 66844 | 5517 MB/s | 2739 MB/s | 12735 MB/s | 14975 MB/s |
|
||||
| lcet10.txt | 234661 | 190589 | 6235 MB/s | 3099 MB/s | 14519 MB/s | 16634 MB/s |
|
||||
| plrabn12.txt | 319267 | 270828 | 5159 MB/s | 2600 MB/s | 11923 MB/s | 13382 MB/s |
|
||||
| geo.protodata | 23335 | 18278 | 21220 MB/s | 11208 MB/s | 56271 MB/s | 57961 MB/s |
|
||||
| kppkn.gtb | 69526 | 61851 | 9732 MB/s | 4556 MB/s | 18491 MB/s | 16524 MB/s |
|
||||
| alice29.txt (128B) | 80 | 81 | 6691 MB/s | 529 MB/s | 31883 MB/s | 34225 MB/s |
|
||||
| alice29.txt (1000B) | 774 | 748 | 12204 MB/s | 1943 MB/s | 48056 MB/s | 42068 MB/s |
|
||||
| alice29.txt (10000B) | 6648 | 6234 | 10044 MB/s | 2949 MB/s | 32378 MB/s | 28813 MB/s |
|
||||
| alice29.txt (20000B) | 12686 | 11584 | 7733 MB/s | 2822 MB/s | 30566 MB/s | 27315 MB/s |
|
||||
| html | 22843 | 18972 | 16246 MB/s | 8621 MB/s | 40972 MB/s | 40292 MB/s |
|
||||
| urls.10K | 335492 | 248079 | 7943 MB/s | 5104 MB/s | 22523 MB/s | 20981 MB/s |
|
||||
| fireworks.jpeg | 123034 | 123100 | 349544 MB/s | 84429 MB/s | 718321 MB/s | 823698 MB/s |
|
||||
| fireworks.jpeg (200B) | 146 | 149 | 8869 MB/s | 7125 MB/s | 33691 MB/s | 30101 MB/s |
|
||||
| paper-100k.pdf | 85304 | 82887 | 167546 MB/s | 11087 MB/s | 326905 MB/s | 198869 MB/s |
|
||||
| html_x_4 | 92234 | 18982 | 15194 MB/s | 29316 MB/s | 30843 MB/s | 30937 MB/s |
|
||||
| alice29.txt | 88034 | 71611 | 5936 MB/s | 3709 MB/s | 12882 MB/s | 16611 MB/s |
|
||||
| asyoulik.txt | 77503 | 65941 | 5517 MB/s | 3380 MB/s | 12735 MB/s | 14975 MB/s |
|
||||
| lcet10.txt | 234661 | 184939 | 6235 MB/s | 3537 MB/s | 14519 MB/s | 16634 MB/s |
|
||||
| plrabn12.txt | 319267 | 264990 | 5159 MB/s | 2960 MB/s | 11923 MB/s | 13382 MB/s |
|
||||
| geo.protodata | 23335 | 17689 | 21220 MB/s | 10859 MB/s | 56271 MB/s | 57961 MB/s |
|
||||
| kppkn.gtb | 69526 | 55398 | 9732 MB/s | 5206 MB/s | 18491 MB/s | 16524 MB/s |
|
||||
| alice29.txt (128B) | 80 | 78 | 6691 MB/s | 7422 MB/s | 31883 MB/s | 34225 MB/s |
|
||||
| alice29.txt (1000B) | 774 | 746 | 12204 MB/s | 5734 MB/s | 48056 MB/s | 42068 MB/s |
|
||||
| alice29.txt (10000B) | 6648 | 6218 | 10044 MB/s | 6055 MB/s | 32378 MB/s | 28813 MB/s |
|
||||
| alice29.txt (20000B) | 12686 | 11492 | 7733 MB/s | 3143 MB/s | 30566 MB/s | 27315 MB/s |
|
||||
|
||||
|
||||
| Relative Perf | Snappy size | Better size | Better Speed | Better dec |
|
||||
|-----------------------|-------------|-------------|--------------|------------|
|
||||
| html | 22.31% | 13.18% | 0.48x | 0.98x |
|
||||
| urls.10K | 47.78% | 24.43% | 0.50x | 0.93x |
|
||||
| fireworks.jpeg | 99.95% | -0.05% | 0.03x | 1.15x |
|
||||
| fireworks.jpeg (200B) | 73.00% | 2.74% | 0.07x | 0.89x |
|
||||
| paper-100k.pdf | 83.30% | 2.80% | 0.07x | 0.61x |
|
||||
| html_x_4 | 22.52% | 78.49% | 0.04x | 1.00x |
|
||||
| alice29.txt | 57.88% | 16.83% | 1.54x | 1.29x |
|
||||
| asyoulik.txt | 61.91% | 13.75% | 0.50x | 1.18x |
|
||||
| lcet10.txt | 54.99% | 18.78% | 0.50x | 1.15x |
|
||||
| plrabn12.txt | 66.26% | 15.17% | 0.50x | 1.12x |
|
||||
| geo.protodata | 19.68% | 21.67% | 0.50x | 1.03x |
|
||||
| kppkn.gtb | 37.72% | 11.04% | 0.53x | 0.89x |
|
||||
| alice29.txt (128B) | 62.50% | -1.25% | 0.47x | 1.07x |
|
||||
| alice29.txt (1000B) | 77.40% | 3.36% | 0.08x | 0.88x |
|
||||
| alice29.txt (10000B) | 66.48% | 6.23% | 0.16x | 0.89x |
|
||||
| alice29.txt (20000B) | 63.43% | 8.69% | 0.29x | 0.89x |
|
||||
|
||||
Except for the mostly incompressible JPEG image, compression is better and usually in the
|
||||
double digits in terms of percentage reduction over Snappy.
|
||||
|
||||
|
@ -605,29 +567,29 @@ Some examples compared on 16 core CPU, amd64 assembly used:
|
|||
|
||||
```
|
||||
* enwik10
|
||||
Default... 10000000000 -> 4761467548 [47.61%]; 1.098s, 8685.6MB/s
|
||||
Better... 10000000000 -> 4219438251 [42.19%]; 1.925s, 4954.2MB/s
|
||||
Best... 10000000000 -> 3627364337 [36.27%]; 43.051s, 221.5MB/s
|
||||
Default... 10000000000 -> 4759950115 [47.60%]; 1.03s, 9263.0MB/s
|
||||
Better... 10000000000 -> 4084706676 [40.85%]; 2.16s, 4415.4MB/s
|
||||
Best... 10000000000 -> 3615520079 [36.16%]; 42.259s, 225.7MB/s
|
||||
|
||||
* github-june-2days-2019.json
|
||||
Default... 6273951764 -> 1043196283 [16.63%]; 431ms, 13882.3MB/s
|
||||
Better... 6273951764 -> 949146808 [15.13%]; 547ms, 10938.4MB/s
|
||||
Best... 6273951764 -> 832855506 [13.27%]; 9.455s, 632.8MB/s
|
||||
Default... 6273951764 -> 1041700255 [16.60%]; 431ms, 13882.3MB/s
|
||||
Better... 6273951764 -> 945841238 [15.08%]; 547ms, 10938.4MB/s
|
||||
Best... 6273951764 -> 826392576 [13.17%]; 9.455s, 632.8MB/s
|
||||
|
||||
* nyc-taxi-data-10M.csv
|
||||
Default... 3325605752 -> 1095998837 [32.96%]; 324ms, 9788.7MB/s
|
||||
Better... 3325605752 -> 954776589 [28.71%]; 491ms, 6459.4MB/s
|
||||
Best... 3325605752 -> 779098746 [23.43%]; 8.29s, 382.6MB/s
|
||||
Default... 3325605752 -> 1093516949 [32.88%]; 324ms, 9788.7MB/s
|
||||
Better... 3325605752 -> 885394158 [26.62%]; 491ms, 6459.4MB/s
|
||||
Best... 3325605752 -> 773681257 [23.26%]; 8.29s, 412.0MB/s
|
||||
|
||||
* 10gb.tar
|
||||
Default... 10065157632 -> 5916578242 [58.78%]; 1.028s, 9337.4MB/s
|
||||
Better... 10065157632 -> 5649207485 [56.13%]; 1.597s, 6010.6MB/s
|
||||
Best... 10065157632 -> 5208719802 [51.75%]; 32.78s, 292.8MB/
|
||||
Default... 10065157632 -> 5915541066 [58.77%]; 1.028s, 9337.4MB/s
|
||||
Better... 10065157632 -> 5453844650 [54.19%]; 1.597s, 4862.7MB/s
|
||||
Best... 10065157632 -> 5192495021 [51.59%]; 32.78s, 308.2MB/
|
||||
|
||||
* consensus.db.10gb
|
||||
Default... 10737418240 -> 4562648848 [42.49%]; 882ms, 11610.0MB/s
|
||||
Better... 10737418240 -> 4542428129 [42.30%]; 1.533s, 6679.7MB/s
|
||||
Best... 10737418240 -> 4244773384 [39.53%]; 42.96s, 238.4MB/s
|
||||
Default... 10737418240 -> 4549762344 [42.37%]; 882ms, 12118.4MB/s
|
||||
Better... 10737418240 -> 4438535064 [41.34%]; 1.533s, 3500.9MB/s
|
||||
Best... 10737418240 -> 4210602774 [39.21%]; 42.96s, 254.4MB/s
|
||||
```
|
||||
|
||||
Decompression speed should be around the same as using the 'better' compression mode.
|
||||
|
@ -648,10 +610,10 @@ If you would like more control, you can use the s2 package as described below:
|
|||
Snappy compatible blocks can be generated with the S2 encoder.
|
||||
Compression and speed are typically a bit better, and `MaxEncodedLen` is also smaller, reducing memory usage. Replace
|
||||
|
||||
| Snappy | S2 replacement |
|
||||
|----------------------------|-------------------------|
|
||||
| snappy.Encode(...) | s2.EncodeSnappy(...) |
|
||||
| snappy.MaxEncodedLen(...) | s2.MaxEncodedLen(...) |
|
||||
| Snappy | S2 replacement |
|
||||
|---------------------------|-----------------------|
|
||||
| snappy.Encode(...) | s2.EncodeSnappy(...) |
|
||||
| snappy.MaxEncodedLen(...) | s2.MaxEncodedLen(...) |
|
||||
|
||||
`s2.EncodeSnappy` can be replaced with `s2.EncodeSnappyBetter` or `s2.EncodeSnappyBest` to get more efficiently compressed snappy compatible output.
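To make the swap concrete, here is a minimal sketch (not part of the vendored README) that uses only the functions named in the replacement table above, plus `s2.Decode` for a round trip:

```go
package main

import (
	"fmt"

	"github.com/klauspost/compress/s2"
)

func main() {
	src := []byte("hello hello hello hello hello hello")

	// Size the destination with s2.MaxEncodedLen, exactly as with
	// snappy.MaxEncodedLen in the table above.
	dst := make([]byte, 0, s2.MaxEncodedLen(len(src)))

	// EncodeSnappy emits a block any Snappy decoder can read; swap in
	// EncodeSnappyBetter or EncodeSnappyBest for denser output.
	encoded := s2.EncodeSnappy(dst, src)

	// The block is also readable by the S2 decoder.
	decoded, err := s2.Decode(nil, encoded)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(encoded), string(decoded) == string(src))
}
```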
|
||||
|
||||
|
@ -660,12 +622,12 @@ Compression and speed is typically a bit better `MaxEncodedLen` is also smaller
|
|||
Comparison of [`webdevdata.org-2015-01-07-subset`](https://files.klauspost.com/compress/webdevdata.org-2015-01-07-4GB-subset.7z),
|
||||
53927 files, total input size: 4,014,735,833 bytes. amd64, single goroutine used:
|
||||
|
||||
| Encoder | Size | MB/s | Reduction |
|
||||
|-----------------------|------------|------------|------------
|
||||
| snappy.Encode | 1128706759 | 725.59 | 71.89% |
|
||||
| s2.EncodeSnappy | 1093823291 | **899.16** | 72.75% |
|
||||
| s2.EncodeSnappyBetter | 1001158548 | 578.49 | 75.06% |
|
||||
| s2.EncodeSnappyBest | 944507998 | 66.00 | **76.47%**|
|
||||
| Encoder | Size | MB/s | Reduction |
|
||||
|-----------------------|------------|------------|------------|
|
||||
| snappy.Encode | 1128706759 | 725.59 | 71.89% |
|
||||
| s2.EncodeSnappy | 1093823291 | **899.16** | 72.75% |
|
||||
| s2.EncodeSnappyBetter | 1001158548 | 578.49 | 75.06% |
|
||||
| s2.EncodeSnappyBest | 944507998 | 66.00 | **76.47%** |
|
||||
|
||||
## Streams
|
||||
|
||||
|
@ -851,20 +813,20 @@ The block can be read from the front, but contains information so it can be read
|
|||
Numbers are stored as fixed size little endian values or [zigzag encoded](https://developers.google.com/protocol-buffers/docs/encoding#signed_integers) [base 128 varints](https://developers.google.com/protocol-buffers/docs/encoding),
|
||||
with un-encoded value length of 64 bits, unless other limits are specified.
|
||||
|
||||
| Content | Format |
|
||||
|---------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
|
||||
| ID, `[1]byte` | Always 0x99. |
|
||||
| Data Length, `[3]byte` | 3 byte little-endian length of the chunk in bytes, following this. |
|
||||
| Header `[6]byte` | Header, must be `[115, 50, 105, 100, 120, 0]` or in text: "s2idx\x00". |
|
||||
| UncompressedSize, Varint | Total Uncompressed size. |
|
||||
| CompressedSize, Varint | Total Compressed size if known. Should be -1 if unknown. |
|
||||
| EstBlockSize, Varint | Block Size, used for guessing uncompressed offsets. Must be >= 0. |
|
||||
| Entries, Varint | Number of Entries in index, must be < 65536 and >=0. |
|
||||
| HasUncompressedOffsets `byte` | 0 if no uncompressed offsets are present, 1 if present. Other values are invalid. |
|
||||
| UncompressedOffsets, [Entries]VarInt | Uncompressed offsets. See below how to decode. |
|
||||
| CompressedOffsets, [Entries]VarInt | Compressed offsets. See below how to decode. |
|
||||
| Block Size, `[4]byte` | Little Endian total encoded size (including header and trailer). Can be used for searching backwards to start of block. |
|
||||
| Trailer `[6]byte` | Trailer, must be `[0, 120, 100, 105, 50, 115]` or in text: "\x00xdi2s". Can be used for identifying block from end of stream. |
|
||||
| Content | Format |
|
||||
|--------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
|
||||
| ID, `[1]byte` | Always 0x99. |
|
||||
| Data Length, `[3]byte` | 3 byte little-endian length of the chunk in bytes, following this. |
|
||||
| Header `[6]byte` | Header, must be `[115, 50, 105, 100, 120, 0]` or in text: "s2idx\x00". |
|
||||
| UncompressedSize, Varint | Total Uncompressed size. |
|
||||
| CompressedSize, Varint | Total Compressed size if known. Should be -1 if unknown. |
|
||||
| EstBlockSize, Varint | Block Size, used for guessing uncompressed offsets. Must be >= 0. |
|
||||
| Entries, Varint | Number of Entries in index, must be < 65536 and >=0. |
|
||||
| HasUncompressedOffsets `byte` | 0 if no uncompressed offsets are present, 1 if present. Other values are invalid. |
|
||||
| UncompressedOffsets, [Entries]VarInt | Uncompressed offsets. See below how to decode. |
|
||||
| CompressedOffsets, [Entries]VarInt | Compressed offsets. See below how to decode. |
|
||||
| Block Size, `[4]byte` | Little Endian total encoded size (including header and trailer). Can be used for searching backwards to start of block. |
|
||||
| Trailer `[6]byte` | Trailer, must be `[0, 120, 100, 105, 50, 115]` or in text: "\x00xdi2s". Can be used for identifying block from end of stream. |
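A small sketch (not the library's index parser) of reading one of the zigzag-encoded base-128 varints mentioned above; `encoding/binary.Varint` performs the same decode in one call:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readZigzagVarint decodes an unsigned base-128 varint from b and undoes the
// zigzag mapping, returning the signed value and the number of bytes read.
func readZigzagVarint(b []byte) (int64, int, error) {
	u, n := binary.Uvarint(b)
	if n <= 0 {
		return 0, 0, fmt.Errorf("invalid varint")
	}
	v := int64(u>>1) ^ -int64(u&1) // zigzag: 0,1,2,3,... -> 0,-1,1,-2,...
	return v, n, nil
}

func main() {
	// -1 zigzag-encodes to 1, which is the single varint byte 0x01.
	v, n, err := readZigzagVarint([]byte{0x01})
	fmt.Println(v, n, err) // -1 1 <nil>
}
```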
|
||||
|
||||
For regular streams the uncompressed offsets are fully predictable,
|
||||
so `HasUncompressedOffsets` allows specifying that compressed blocks all have
|
||||
|
|
vendor/github.com/klauspost/compress/zstd/decoder.go (generated, vendored): 21 changes
|
@ -35,6 +35,7 @@ type Decoder struct {
|
|||
br readerWrapper
|
||||
enabled bool
|
||||
inFrame bool
|
||||
dstBuf []byte
|
||||
}
|
||||
|
||||
frame *frameDec
|
||||
|
@ -187,21 +188,23 @@ func (d *Decoder) Reset(r io.Reader) error {
|
|||
}
|
||||
|
||||
// If bytes buffer and < 5MB, do sync decoding anyway.
|
||||
if bb, ok := r.(byter); ok && bb.Len() < 5<<20 {
|
||||
if bb, ok := r.(byter); ok && bb.Len() < d.o.decodeBufsBelow && !d.o.limitToCap {
|
||||
bb2 := bb
|
||||
if debugDecoder {
|
||||
println("*bytes.Buffer detected, doing sync decode, len:", bb.Len())
|
||||
}
|
||||
b := bb2.Bytes()
|
||||
var dst []byte
|
||||
if cap(d.current.b) > 0 {
|
||||
dst = d.current.b
|
||||
if cap(d.syncStream.dstBuf) > 0 {
|
||||
dst = d.syncStream.dstBuf[:0]
|
||||
}
|
||||
|
||||
dst, err := d.DecodeAll(b, dst[:0])
|
||||
dst, err := d.DecodeAll(b, dst)
|
||||
if err == nil {
|
||||
err = io.EOF
|
||||
}
|
||||
// Save output buffer
|
||||
d.syncStream.dstBuf = dst
|
||||
d.current.b = dst
|
||||
d.current.err = err
|
||||
d.current.flushed = true
|
||||
|
@ -216,6 +219,7 @@ func (d *Decoder) Reset(r io.Reader) error {
|
|||
d.current.err = nil
|
||||
d.current.flushed = false
|
||||
d.current.d = nil
|
||||
d.syncStream.dstBuf = nil
|
||||
|
||||
// Ensure no-one else is still running...
|
||||
d.streamWg.Wait()
|
||||
|
@ -680,6 +684,7 @@ func (d *Decoder) startStreamDecoder(ctx context.Context, r io.Reader, output ch
|
|||
if debugDecoder {
|
||||
println("Async 1: new history, recent:", block.async.newHist.recentOffsets)
|
||||
}
|
||||
hist.reset()
|
||||
hist.decoders = block.async.newHist.decoders
|
||||
hist.recentOffsets = block.async.newHist.recentOffsets
|
||||
hist.windowSize = block.async.newHist.windowSize
|
||||
|
@ -711,6 +716,7 @@ func (d *Decoder) startStreamDecoder(ctx context.Context, r io.Reader, output ch
|
|||
seqExecute <- block
|
||||
}
|
||||
close(seqExecute)
|
||||
hist.reset()
|
||||
}()
|
||||
|
||||
var wg sync.WaitGroup
|
||||
|
@ -734,6 +740,7 @@ func (d *Decoder) startStreamDecoder(ctx context.Context, r io.Reader, output ch
|
|||
if debugDecoder {
|
||||
println("Async 2: new history")
|
||||
}
|
||||
hist.reset()
|
||||
hist.windowSize = block.async.newHist.windowSize
|
||||
hist.allocFrameBuffer = block.async.newHist.allocFrameBuffer
|
||||
if block.async.newHist.dict != nil {
|
||||
|
@ -815,13 +822,14 @@ func (d *Decoder) startStreamDecoder(ctx context.Context, r io.Reader, output ch
|
|||
if debugDecoder {
|
||||
println("decoder goroutines finished")
|
||||
}
|
||||
hist.reset()
|
||||
}()
|
||||
|
||||
var hist history
|
||||
decodeStream:
|
||||
for {
|
||||
var hist history
|
||||
var hasErr bool
|
||||
|
||||
hist.reset()
|
||||
decodeBlock := func(block *blockDec) {
|
||||
if hasErr {
|
||||
if block != nil {
|
||||
|
@ -937,5 +945,6 @@ decodeStream:
|
|||
}
|
||||
close(seqDecode)
|
||||
wg.Wait()
|
||||
hist.reset()
|
||||
d.frame.history.b = frameHistCache
|
||||
}
|
||||
|
|
vendor/github.com/klauspost/compress/zstd/decoder_options.go (generated, vendored): 34 changes
|
@ -14,21 +14,23 @@ type DOption func(*decoderOptions) error
|
|||
|
||||
// options retains accumulated state of multiple options.
|
||||
type decoderOptions struct {
|
||||
lowMem bool
|
||||
concurrent int
|
||||
maxDecodedSize uint64
|
||||
maxWindowSize uint64
|
||||
dicts []dict
|
||||
ignoreChecksum bool
|
||||
limitToCap bool
|
||||
lowMem bool
|
||||
concurrent int
|
||||
maxDecodedSize uint64
|
||||
maxWindowSize uint64
|
||||
dicts []dict
|
||||
ignoreChecksum bool
|
||||
limitToCap bool
|
||||
decodeBufsBelow int
|
||||
}
|
||||
|
||||
func (o *decoderOptions) setDefault() {
|
||||
*o = decoderOptions{
|
||||
// use less ram: true for now, but may change.
|
||||
lowMem: true,
|
||||
concurrent: runtime.GOMAXPROCS(0),
|
||||
maxWindowSize: MaxWindowSize,
|
||||
lowMem: true,
|
||||
concurrent: runtime.GOMAXPROCS(0),
|
||||
maxWindowSize: MaxWindowSize,
|
||||
decodeBufsBelow: 128 << 10,
|
||||
}
|
||||
if o.concurrent > 4 {
|
||||
o.concurrent = 4
|
||||
|
@ -126,6 +128,18 @@ func WithDecodeAllCapLimit(b bool) DOption {
|
|||
}
|
||||
}
|
||||
|
||||
// WithDecodeBuffersBelow will fully decode readers that have a
|
||||
// `Bytes() []byte` and `Len() int` interface similar to bytes.Buffer.
|
||||
// This typically uses less allocations but will have the full decompressed object in memory.
|
||||
// Note that DecodeAllCapLimit will disable this, as well as giving a size of 0 or less.
|
||||
// Default is 128KiB.
|
||||
func WithDecodeBuffersBelow(size int) DOption {
|
||||
return func(o *decoderOptions) error {
|
||||
o.decodeBufsBelow = size
|
||||
return nil
|
||||
}
|
||||
}
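A minimal usage sketch, assuming the public `zstd.NewReader`, `Decoder.Reset`, and `Decoder.Close` API; it raises the new sync-decode threshold to 1 MiB for bytes.Buffer-like inputs:

```go
package example

import (
	"bytes"
	"io"

	"github.com/klauspost/compress/zstd"
)

func decompress(compressed []byte) ([]byte, error) {
	// A nil reader is allowed; Reset attaches the real input below.
	dec, err := zstd.NewReader(nil, zstd.WithDecodeBuffersBelow(1<<20))
	if err != nil {
		return nil, err
	}
	defer dec.Close()

	// Inputs below the threshold that expose Bytes()/Len() (such as
	// bytes.Buffer) are decoded synchronously into a reused buffer.
	if err := dec.Reset(bytes.NewBuffer(compressed)); err != nil {
		return nil, err
	}
	return io.ReadAll(dec)
}
```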
|
||||
|
||||
// IgnoreChecksum allows to forcibly ignore checksum checking.
|
||||
func IgnoreChecksum(b bool) DOption {
|
||||
return func(o *decoderOptions) error {
|
||||
|
|
vendor/github.com/klauspost/compress/zstd/enc_best.go (generated, vendored): 1 change
|
@ -32,6 +32,7 @@ type match struct {
|
|||
length int32
|
||||
rep int32
|
||||
est int32
|
||||
_ [12]byte // Aligned size to cache line: 4+4+4+4+4 bytes + 12 bytes padding = 32 bytes
|
||||
}
|
||||
|
||||
const highScore = 25000
|
||||
|
|
vendor/github.com/klauspost/compress/zstd/framedec.go (generated, vendored): 4 changes
|
@ -343,7 +343,7 @@ func (d *frameDec) consumeCRC() error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// runDecoder will create a sync decoder that will decode a block of data.
|
||||
// runDecoder will run the decoder for the remainder of the frame.
|
||||
func (d *frameDec) runDecoder(dst []byte, dec *blockDec) ([]byte, error) {
|
||||
saved := d.history.b
|
||||
|
||||
|
@ -369,7 +369,7 @@ func (d *frameDec) runDecoder(dst []byte, dec *blockDec) ([]byte, error) {
|
|||
if debugDecoder {
|
||||
println("maxSyncLen:", d.history.decoders.maxSyncLen)
|
||||
}
|
||||
if !d.o.limitToCap && uint64(cap(dst)-len(dst)) < d.history.decoders.maxSyncLen {
|
||||
if !d.o.limitToCap && uint64(cap(dst)) < d.history.decoders.maxSyncLen {
|
||||
// Alloc for output
|
||||
dst2 := make([]byte, len(dst), d.history.decoders.maxSyncLen+compressedBlockOverAlloc)
|
||||
copy(dst2, dst)
|
||||
|
|
vendor/github.com/klauspost/compress/zstd/fse_decoder_amd64.go (generated, vendored): 3 changes
|
@ -21,7 +21,8 @@ type buildDtableAsmContext struct {
|
|||
|
||||
// buildDtable_asm is an x86 assembly implementation of fseDecoder.buildDtable.
|
||||
// Function returns non-zero exit code on error.
|
||||
// go:noescape
|
||||
//
|
||||
//go:noescape
|
||||
func buildDtable_asm(s *fseDecoder, ctx *buildDtableAsmContext) int
|
||||
|
||||
// please keep in sync with _generate/gen_fse.go
|
||||
|
|
vendor/github.com/klauspost/compress/zstd/history.go (generated, vendored): 25 changes
|
@ -37,26 +37,23 @@ func (h *history) reset() {
|
|||
h.ignoreBuffer = 0
|
||||
h.error = false
|
||||
h.recentOffsets = [3]int{1, 4, 8}
|
||||
if f := h.decoders.litLengths.fse; f != nil && !f.preDefined {
|
||||
fseDecoderPool.Put(f)
|
||||
}
|
||||
if f := h.decoders.offsets.fse; f != nil && !f.preDefined {
|
||||
fseDecoderPool.Put(f)
|
||||
}
|
||||
if f := h.decoders.matchLengths.fse; f != nil && !f.preDefined {
|
||||
fseDecoderPool.Put(f)
|
||||
}
|
||||
h.decoders.freeDecoders()
|
||||
h.decoders = sequenceDecs{br: h.decoders.br}
|
||||
if h.huffTree != nil {
|
||||
if h.dict == nil || h.dict.litEnc != h.huffTree {
|
||||
huffDecoderPool.Put(h.huffTree)
|
||||
}
|
||||
}
|
||||
h.freeHuffDecoder()
|
||||
h.huffTree = nil
|
||||
h.dict = nil
|
||||
//printf("history created: %+v (l: %d, c: %d)", *h, len(h.b), cap(h.b))
|
||||
}
|
||||
|
||||
func (h *history) freeHuffDecoder() {
|
||||
if h.huffTree != nil {
|
||||
if h.dict == nil || h.dict.litEnc != h.huffTree {
|
||||
huffDecoderPool.Put(h.huffTree)
|
||||
h.huffTree = nil
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (h *history) setDict(dict *dict) {
|
||||
if dict == nil {
|
||||
return
|
||||
|
|
vendor/github.com/klauspost/compress/zstd/seqdec.go (generated, vendored): 22 changes
|
@ -99,6 +99,21 @@ func (s *sequenceDecs) initialize(br *bitReader, hist *history, out []byte) erro
|
|||
return nil
|
||||
}
|
||||
|
||||
func (s *sequenceDecs) freeDecoders() {
|
||||
if f := s.litLengths.fse; f != nil && !f.preDefined {
|
||||
fseDecoderPool.Put(f)
|
||||
s.litLengths.fse = nil
|
||||
}
|
||||
if f := s.offsets.fse; f != nil && !f.preDefined {
|
||||
fseDecoderPool.Put(f)
|
||||
s.offsets.fse = nil
|
||||
}
|
||||
if f := s.matchLengths.fse; f != nil && !f.preDefined {
|
||||
fseDecoderPool.Put(f)
|
||||
s.matchLengths.fse = nil
|
||||
}
|
||||
}
|
||||
|
||||
// execute will execute the decoded sequence with the provided history.
|
||||
// The sequence must be evaluated before being sent.
|
||||
func (s *sequenceDecs) execute(seqs []seqVals, hist []byte) error {
|
||||
|
@ -299,7 +314,10 @@ func (s *sequenceDecs) decodeSync(hist []byte) error {
|
|||
}
|
||||
size := ll + ml + len(out)
|
||||
if size-startSize > maxBlockSize {
|
||||
return fmt.Errorf("output (%d) bigger than max block size (%d)", size-startSize, maxBlockSize)
|
||||
if size-startSize == 424242 {
|
||||
panic("here")
|
||||
}
|
||||
return fmt.Errorf("output bigger than max block size (%d)", maxBlockSize)
|
||||
}
|
||||
if size > cap(out) {
|
||||
// Not enough size, which can happen under high volume block streaming conditions
|
||||
|
@ -411,7 +429,7 @@ func (s *sequenceDecs) decodeSync(hist []byte) error {
|
|||
|
||||
// Check if space for literals
|
||||
if size := len(s.literals) + len(s.out) - startSize; size > maxBlockSize {
|
||||
return fmt.Errorf("output (%d) bigger than max block size (%d)", size, maxBlockSize)
|
||||
return fmt.Errorf("output bigger than max block size (%d)", maxBlockSize)
|
||||
}
|
||||
|
||||
// Add final literals
|
||||
|
|
vendor/github.com/klauspost/compress/zstd/seqdec_amd64.go (generated, vendored): 7 changes
|
@ -139,7 +139,7 @@ func (s *sequenceDecs) decodeSyncSimple(hist []byte) (bool, error) {
|
|||
if debugDecoder {
|
||||
println("msl:", s.maxSyncLen, "cap", cap(s.out), "bef:", startSize, "sz:", size-startSize, "mbs:", maxBlockSize, "outsz:", cap(s.out)-startSize)
|
||||
}
|
||||
return true, fmt.Errorf("output (%d) bigger than max block size (%d)", size-startSize, maxBlockSize)
|
||||
return true, fmt.Errorf("output bigger than max block size (%d)", maxBlockSize)
|
||||
|
||||
default:
|
||||
return true, fmt.Errorf("sequenceDecs_decode returned erronous code %d", errCode)
|
||||
|
@ -147,7 +147,8 @@ func (s *sequenceDecs) decodeSyncSimple(hist []byte) (bool, error) {
|
|||
|
||||
s.seqSize += ctx.litRemain
|
||||
if s.seqSize > maxBlockSize {
|
||||
return true, fmt.Errorf("output (%d) bigger than max block size (%d)", s.seqSize, maxBlockSize)
|
||||
return true, fmt.Errorf("output bigger than max block size (%d)", maxBlockSize)
|
||||
|
||||
}
|
||||
err := br.close()
|
||||
if err != nil {
|
||||
|
@ -289,7 +290,7 @@ func (s *sequenceDecs) decode(seqs []seqVals) error {
|
|||
|
||||
s.seqSize += ctx.litRemain
|
||||
if s.seqSize > maxBlockSize {
|
||||
return fmt.Errorf("output (%d) bigger than max block size (%d)", s.seqSize, maxBlockSize)
|
||||
return fmt.Errorf("output bigger than max block size (%d)", maxBlockSize)
|
||||
}
|
||||
err := br.close()
|
||||
if err != nil {
|
||||
|
|
vendor/github.com/klauspost/compress/zstd/seqdec_generic.go (generated, vendored): 4 changes
|
@ -111,7 +111,7 @@ func (s *sequenceDecs) decode(seqs []seqVals) error {
|
|||
}
|
||||
s.seqSize += ll + ml
|
||||
if s.seqSize > maxBlockSize {
|
||||
return fmt.Errorf("output (%d) bigger than max block size (%d)", s.seqSize, maxBlockSize)
|
||||
return fmt.Errorf("output bigger than max block size (%d)", maxBlockSize)
|
||||
}
|
||||
litRemain -= ll
|
||||
if litRemain < 0 {
|
||||
|
@ -149,7 +149,7 @@ func (s *sequenceDecs) decode(seqs []seqVals) error {
|
|||
}
|
||||
s.seqSize += litRemain
|
||||
if s.seqSize > maxBlockSize {
|
||||
return fmt.Errorf("output (%d) bigger than max block size (%d)", s.seqSize, maxBlockSize)
|
||||
return fmt.Errorf("output bigger than max block size (%d)", maxBlockSize)
|
||||
}
|
||||
err := br.close()
|
||||
if err != nil {
|
||||
|
|
vendor/golang.org/x/net/http2/server.go (generated, vendored): 8 changes
|
@ -2513,6 +2513,10 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
|
|||
rws.writeHeader(200)
|
||||
}
|
||||
|
||||
if rws.handlerDone {
|
||||
rws.promoteUndeclaredTrailers()
|
||||
}
|
||||
|
||||
isHeadResp := rws.req.Method == "HEAD"
|
||||
if !rws.sentHeader {
|
||||
rws.sentHeader = true
|
||||
|
@ -2584,10 +2588,6 @@ func (rws *responseWriterState) writeChunk(p []byte) (n int, err error) {
|
|||
return 0, nil
|
||||
}
|
||||
|
||||
if rws.handlerDone {
|
||||
rws.promoteUndeclaredTrailers()
|
||||
}
|
||||
|
||||
// only send trailers if they have actually been defined by the
|
||||
// server handler.
|
||||
hasNonemptyTrailers := rws.hasNonemptyTrailers()
|
||||
|
|
vendor/golang.org/x/net/http2/transport.go (generated, vendored): 24 changes
|
@ -258,7 +258,8 @@ func (t *Transport) initConnPool() {
|
|||
// HTTP/2 server.
|
||||
type ClientConn struct {
|
||||
t *Transport
|
||||
tconn net.Conn // usually *tls.Conn, except specialized impls
|
||||
tconn net.Conn // usually *tls.Conn, except specialized impls
|
||||
tconnClosed bool
|
||||
tlsState *tls.ConnectionState // nil only for specialized impls
|
||||
reused uint32 // whether conn is being reused; atomic
|
||||
singleUse bool // whether being used for a single http.Request
|
||||
|
@ -921,10 +922,10 @@ func (cc *ClientConn) onIdleTimeout() {
|
|||
cc.closeIfIdle()
|
||||
}
|
||||
|
||||
func (cc *ClientConn) closeConn() error {
|
||||
func (cc *ClientConn) closeConn() {
|
||||
t := time.AfterFunc(250*time.Millisecond, cc.forceCloseConn)
|
||||
defer t.Stop()
|
||||
return cc.tconn.Close()
|
||||
cc.tconn.Close()
|
||||
}
|
||||
|
||||
// A tls.Conn.Close can hang for a long time if the peer is unresponsive.
|
||||
|
@ -990,7 +991,8 @@ func (cc *ClientConn) Shutdown(ctx context.Context) error {
|
|||
shutdownEnterWaitStateHook()
|
||||
select {
|
||||
case <-done:
|
||||
return cc.closeConn()
|
||||
cc.closeConn()
|
||||
return nil
|
||||
case <-ctx.Done():
|
||||
cc.mu.Lock()
|
||||
// Free the goroutine above
|
||||
|
@ -1027,7 +1029,7 @@ func (cc *ClientConn) sendGoAway() error {
|
|||
|
||||
// closes the client connection immediately. In-flight requests are interrupted.
|
||||
// err is sent to streams.
|
||||
func (cc *ClientConn) closeForError(err error) error {
|
||||
func (cc *ClientConn) closeForError(err error) {
|
||||
cc.mu.Lock()
|
||||
cc.closed = true
|
||||
for _, cs := range cc.streams {
|
||||
|
@ -1035,7 +1037,7 @@ func (cc *ClientConn) closeForError(err error) error {
|
|||
}
|
||||
cc.cond.Broadcast()
|
||||
cc.mu.Unlock()
|
||||
return cc.closeConn()
|
||||
cc.closeConn()
|
||||
}
|
||||
|
||||
// Close closes the client connection immediately.
|
||||
|
@ -1043,16 +1045,17 @@ func (cc *ClientConn) closeForError(err error) error {
|
|||
// In-flight requests are interrupted. For a graceful shutdown, use Shutdown instead.
|
||||
func (cc *ClientConn) Close() error {
|
||||
err := errors.New("http2: client connection force closed via ClientConn.Close")
|
||||
return cc.closeForError(err)
|
||||
cc.closeForError(err)
|
||||
return nil
|
||||
}
|
||||
|
||||
// closes the client connection immediately. In-flight requests are interrupted.
|
||||
func (cc *ClientConn) closeForLostPing() error {
|
||||
func (cc *ClientConn) closeForLostPing() {
|
||||
err := errors.New("http2: client connection lost")
|
||||
if f := cc.t.CountError; f != nil {
|
||||
f("conn_close_lost_ping")
|
||||
}
|
||||
return cc.closeForError(err)
|
||||
cc.closeForError(err)
|
||||
}
|
||||
|
||||
// errRequestCanceled is a copy of net/http's errRequestCanceled because it's not
|
||||
|
@ -2005,7 +2008,7 @@ func (cc *ClientConn) forgetStreamID(id uint32) {
|
|||
// wake up RoundTrip if there is a pending request.
|
||||
cc.cond.Broadcast()
|
||||
|
||||
closeOnIdle := cc.singleUse || cc.doNotReuse || cc.t.disableKeepAlives()
|
||||
closeOnIdle := cc.singleUse || cc.doNotReuse || cc.t.disableKeepAlives() || cc.goAway != nil
|
||||
if closeOnIdle && cc.streamsReserved == 0 && len(cc.streams) == 0 {
|
||||
if VerboseLogs {
|
||||
cc.vlogf("http2: Transport closing idle conn %p (forSingleUse=%v, maxStream=%v)", cc, cc.singleUse, cc.nextStreamID-2)
|
||||
|
@ -2674,7 +2677,6 @@ func (rl *clientConnReadLoop) processGoAway(f *GoAwayFrame) error {
|
|||
if fn := cc.t.CountError; fn != nil {
|
||||
fn("recv_goaway_" + f.ErrCode.stringToken())
|
||||
}
|
||||
|
||||
}
|
||||
cc.setGoAway(f)
|
||||
return nil
|
||||
|
|
vendor/google.golang.org/api/internal/gensupport/media.go (generated, vendored): 29 changes
|
@ -289,13 +289,12 @@ func (mi *MediaInfo) UploadRequest(reqHeaders http.Header, body io.Reader) (newB
|
|||
// be retried because the data is stored in the MediaBuffer.
|
||||
media, _, _, _ = mi.buffer.Chunk()
|
||||
}
|
||||
toCleanup := []io.Closer{}
|
||||
if media != nil {
|
||||
fb := readerFunc(body)
|
||||
fm := readerFunc(media)
|
||||
combined, ctype := CombineBodyMedia(body, "application/json", media, mi.mType)
|
||||
toCleanup := []io.Closer{
|
||||
combined,
|
||||
}
|
||||
toCleanup = append(toCleanup, combined)
|
||||
if fb != nil && fm != nil {
|
||||
getBody = func() (io.ReadCloser, error) {
|
||||
rb := ioutil.NopCloser(fb())
|
||||
|
@ -309,18 +308,30 @@ func (mi *MediaInfo) UploadRequest(reqHeaders http.Header, body io.Reader) (newB
|
|||
return r, nil
|
||||
}
|
||||
}
|
||||
cleanup = func() {
|
||||
for _, closer := range toCleanup {
|
||||
_ = closer.Close()
|
||||
}
|
||||
|
||||
}
|
||||
reqHeaders.Set("Content-Type", ctype)
|
||||
body = combined
|
||||
}
|
||||
if mi.buffer != nil && mi.mType != "" && !mi.singleChunk {
|
||||
// This happens when initiating a resumable upload session.
|
||||
// The initial request contains a JSON body rather than media.
|
||||
// It can be retried with a getBody function that re-creates the request body.
|
||||
fb := readerFunc(body)
|
||||
if fb != nil {
|
||||
getBody = func() (io.ReadCloser, error) {
|
||||
rb := ioutil.NopCloser(fb())
|
||||
toCleanup = append(toCleanup, rb)
|
||||
return rb, nil
|
||||
}
|
||||
}
|
||||
reqHeaders.Set("X-Upload-Content-Type", mi.mType)
|
||||
}
|
||||
// Ensure that any bodies created in getBody are cleaned up.
|
||||
cleanup = func() {
|
||||
for _, closer := range toCleanup {
|
||||
_ = closer.Close()
|
||||
}
|
||||
|
||||
}
|
||||
return body, getBody, cleanup
|
||||
}
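A generic sketch of the retry-friendly pattern the comment above describes (standard library only, hypothetical helper name): pair the request body with a function that can re-create it so the request may be safely re-sent:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// newRetryableJSONRequest builds a POST request whose body can be re-created
// on retry, mirroring the getBody closures built in the code above.
func newRetryableJSONRequest(url string, payload []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")

	// GetBody hands out a fresh copy of the body for each retry or redirect.
	req.GetBody = func() (io.ReadCloser, error) {
		return io.NopCloser(bytes.NewReader(payload)), nil
	}
	return req, nil
}

func main() {
	req, err := newRetryableJSONRequest("https://example.com/upload", []byte(`{"name":"x"}`))
	fmt.Println(req.Method, err)
}
```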
|
||||
|
||||
|
|
vendor/google.golang.org/api/internal/gensupport/send.go (generated, vendored): 37 changes
|
@ -17,6 +17,27 @@ import (
"github.com/googleapis/gax-go/v2"
)

// Use this error type to return an error which allows introspection of both
// the context error and the error from the service.
type wrappedCallErr struct {
ctxErr error
wrappedErr error
}

func (e wrappedCallErr) Error() string {
return fmt.Sprintf("retry failed with %v; last error: %v", e.ctxErr, e.wrappedErr)
}

func (e wrappedCallErr) Unwrap() error {
return e.wrappedErr
}

// Is allows errors.Is to match the error from the call as well as context
// sentinel errors.
func (e wrappedCallErr) Is(target error) bool {
return errors.Is(e.ctxErr, target) || errors.Is(e.wrappedErr, target)
}

// SendRequest sends a single HTTP request using the given client.
// If ctx is non-nil, it calls all hooks, then sends the request with
// req.WithContext, then calls any functions returned by the hooks in
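Since `wrappedCallErr` is unexported, callers only interact with it through `errors.Is`/`errors.As`. The sketch below re-creates the same method set under a hypothetical local name to show how both the context sentinel and the underlying service error stay matchable:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// callErr is a local stand-in for the unexported wrappedCallErr type shown in
// the diff above; the Error/Unwrap/Is methods follow the same pattern.
type callErr struct {
	ctxErr     error
	wrappedErr error
}

func (e callErr) Error() string {
	return fmt.Sprintf("retry failed with %v; last error: %v", e.ctxErr, e.wrappedErr)
}

func (e callErr) Unwrap() error { return e.wrappedErr }

func (e callErr) Is(target error) bool {
	return errors.Is(e.ctxErr, target) || errors.Is(e.wrappedErr, target)
}

var errServiceUnavailable = errors.New("service unavailable")

func main() {
	err := callErr{ctxErr: context.DeadlineExceeded, wrappedErr: errServiceUnavailable}

	// Both the context sentinel and the service error are matchable via errors.Is.
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
	fmt.Println(errors.Is(err, errServiceUnavailable))    // true
}
```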
@ -96,12 +117,12 @@ func sendAndRetry(ctx context.Context, client *http.Client, req *http.Request, r
for {
select {
case <-ctx.Done():
// If we got an error, and the context has been canceled,
// the context's error is probably more useful.
if err == nil {
err = ctx.Err()
// If we got an error and the context has been canceled, return an error acknowledging
// both the context cancelation and the service error.
if err != nil {
return resp, wrappedCallErr{ctx.Err(), err}
}
return resp, err
return resp, ctx.Err()
case <-time.After(pause):
}
@ -110,10 +131,10 @@ func sendAndRetry(ctx context.Context, client *http.Client, req *http.Request, r
// select is satisfied at the same time, Go will choose one arbitrarily.
// That can cause an operation to go through even if the context was
// canceled before.
if err == nil {
err = ctx.Err()
if err != nil {
return resp, wrappedCallErr{ctx.Err(), err}
}
return resp, err
return resp, ctx.Err()
}
invocationHeader := fmt.Sprintf("gccl-invocation-id/%s gccl-attempt-count/%d", invocationID, attempts)
xGoogHeader := strings.Join([]string{invocationHeader, baseXGoogHeader}, " ")
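Both hunks change what `sendAndRetry` returns when the context is canceled mid-retry: instead of discarding the last call error, it now returns a `wrappedCallErr` carrying both the context error and the service error. A simplified sketch of that retry/backoff control flow, with invented helper names used purely for illustration, might be:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// doCall stands in for the HTTP call that the real retry loop performs.
func doCall(attempt int) error {
	return errors.New("temporary failure")
}

// retryWithContext sketches the updated control flow: between attempts it waits
// on either the context or the backoff timer, and on cancelation it reports
// both the context error and the last call error.
func retryWithContext(ctx context.Context, pause time.Duration) error {
	var lastErr error
	for attempt := 1; ; attempt++ {
		if attempt > 1 {
			select {
			case <-ctx.Done():
				if lastErr != nil {
					// Mirrors wrappedCallErr: surface both errors together.
					return fmt.Errorf("retry failed with %v; last error: %w", ctx.Err(), lastErr)
				}
				return ctx.Err()
			case <-time.After(pause):
			}
		}
		if lastErr = doCall(attempt); lastErr == nil {
			return nil
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Millisecond)
	defer cancel()
	fmt.Println(retryWithContext(ctx, 10*time.Millisecond))
}
```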
2 vendor/google.golang.org/api/internal/version.go (generated, vendored)
@ -5,4 +5,4 @@
package internal

// Version is the current tagged release of the library.
const Version = "0.96.0"
const Version = "0.97.0"
18 vendor/modules.txt (vendored)
@ -10,7 +10,7 @@ cloud.google.com/go/compute/metadata
# cloud.google.com/go/iam v0.4.0
## explicit; go 1.17
cloud.google.com/go/iam
# cloud.google.com/go/storage v1.26.0
# cloud.google.com/go/storage v1.27.0
## explicit; go 1.17
cloud.google.com/go/storage
cloud.google.com/go/storage/internal
@ -34,7 +34,7 @@ github.com/VictoriaMetrics/metricsql/binaryop
# github.com/VividCortex/ewma v1.2.0
## explicit; go 1.12
github.com/VividCortex/ewma
# github.com/aws/aws-sdk-go v1.44.101
# github.com/aws/aws-sdk-go v1.44.105
## explicit; go 1.11
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/arn
@ -157,7 +157,7 @@ github.com/influxdata/influxdb/pkg/escape
# github.com/jmespath/go-jmespath v0.4.0
## explicit; go 1.14
github.com/jmespath/go-jmespath
# github.com/klauspost/compress v1.15.10
# github.com/klauspost/compress v1.15.11
## explicit; go 1.17
github.com/klauspost/compress
github.com/klauspost/compress/flate
@ -179,8 +179,8 @@ github.com/mattn/go-isatty
# github.com/mattn/go-runewidth v0.0.13
## explicit; go 1.9
github.com/mattn/go-runewidth
# github.com/matttproud/golang_protobuf_extensions v1.0.1
## explicit
# github.com/matttproud/golang_protobuf_extensions v1.0.2
## explicit; go 1.9
github.com/matttproud/golang_protobuf_extensions/pbutil
# github.com/oklog/ulid v1.3.1
## explicit
@ -280,7 +280,7 @@ go.opencensus.io/trace/tracestate
go.uber.org/atomic
# go.uber.org/goleak v1.1.11-0.20210813005559-691160354723
## explicit; go 1.13
# golang.org/x/net v0.0.0-20220919232410-f2f64ebce3c1
# golang.org/x/net v0.0.0-20220923203811-8be639271d50
## explicit; go 1.17
golang.org/x/net/context
golang.org/x/net/context/ctxhttp
@ -302,7 +302,7 @@ golang.org/x/oauth2/google/internal/externalaccount
golang.org/x/oauth2/internal
golang.org/x/oauth2/jws
golang.org/x/oauth2/jwt
# golang.org/x/sync v0.0.0-20220907140024-f12130a52804
# golang.org/x/sync v0.0.0-20220923202941-7f9b1623fab7
## explicit
golang.org/x/sync/errgroup
# golang.org/x/sys v0.0.0-20220919091848-fb04ddd9f9c8
@ -320,7 +320,7 @@ golang.org/x/text/unicode/norm
## explicit; go 1.17
golang.org/x/xerrors
golang.org/x/xerrors/internal
# google.golang.org/api v0.96.0
# google.golang.org/api v0.97.0
## explicit; go 1.15
google.golang.org/api/googleapi
google.golang.org/api/googleapi/transport
@ -353,7 +353,7 @@ google.golang.org/appengine/internal/socket
google.golang.org/appengine/internal/urlfetch
google.golang.org/appengine/socket
google.golang.org/appengine/urlfetch
# google.golang.org/genproto v0.0.0-20220919141832-68c03719ef51
# google.golang.org/genproto v0.0.0-20220923205249-dd2d53f1fffc
## explicit; go 1.19
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/genproto/googleapis/iam/v1