The standard Snappy encoder from github.com/golang/snappy shows quite good performance numbers
for compressing the Prometheus remote_write proto messages according to the added benchmarks,
so there is no need to switch to github.com/klauspost/compress/s2 yet.
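A minimal benchmark sketch of this kind of measurement (the payload below is synthetic; the repo's actual benchmarks compress marshaled remote_write WriteRequest messages):

```go
package remotewrite

import (
	"bytes"
	"testing"

	"github.com/golang/snappy"
)

// BenchmarkSnappyEncode measures snappy compression throughput and
// allocations on a synthetic, repetitive payload.
func BenchmarkSnappyEncode(b *testing.B) {
	src := bytes.Repeat([]byte(`cpu_usage_seconds_total{instance="host-1",job="node"} 123.456`), 1000)
	dst := make([]byte, 0, snappy.MaxEncodedLen(len(src)))
	b.SetBytes(int64(len(src)))
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		dst = snappy.Encode(dst[:0], src)
	}
	_ = dst
}
```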
* vmselect/promql: add alphanumeric sort by label (sort_by_label_numeric)
* vmselect/promql: fix tests, add documentation
* vmselect/promql: update test
* vmselect/promql: update for alphanumeric sorting, fix tests
* vmselect/promql: remove comments
* vmselect/promql: cleanup
* vmselect/promql: avoid memory allocations, update function descriptions
* vmselect/promql: make linter happy (remove ineffectual assignment)
* vmselect/promql: add test case, fix behavior when strings are equal
* vendor: update github.com/VictoriaMetrics/metricsql from v0.44.1 to v0.45.0
this adds support for sort_by_label_numeric and sort_by_label_numeric_desc functions
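The alphanumeric (natural) ordering these functions provide can be illustrated with a standalone Go sketch; this is not the metricsql implementation, only the comparison idea where runs of digits are compared as numbers:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"unicode"
)

// naturalLess reports whether a sorts before b when runs of digits are
// compared numerically rather than lexicographically ("node2" < "node12").
// Digit runs that overflow uint64 are not handled in this sketch.
func naturalLess(a, b string) bool {
	for len(a) > 0 && len(b) > 0 {
		if unicode.IsDigit(rune(a[0])) && unicode.IsDigit(rune(b[0])) {
			// Extract the leading digit runs and compare them as numbers.
			ia, ib := 1, 1
			for ia < len(a) && unicode.IsDigit(rune(a[ia])) {
				ia++
			}
			for ib < len(b) && unicode.IsDigit(rune(b[ib])) {
				ib++
			}
			na, _ := strconv.ParseUint(a[:ia], 10, 64)
			nb, _ := strconv.ParseUint(b[:ib], 10, 64)
			if na != nb {
				return na < nb
			}
			a, b = a[ia:], b[ib:]
			continue
		}
		if a[0] != b[0] {
			return a[0] < b[0]
		}
		a, b = a[1:], b[1:]
	}
	return len(a) < len(b)
}

func main() {
	labels := []string{"node12", "node2", "node1", "node2a"}
	sort.Slice(labels, func(i, j int) bool { return naturalLess(labels[i], labels[j]) })
	fmt.Println(labels) // [node1 node2 node2a node12]
}
```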
* wip
* lib/promscrape: read response body into memory in stream parsing mode before parsing it
This reduces scrape duration for targets returning big responses.
The response body was already read into memory in stream parsing mode before this change,
so this commit shouldn't increase memory usage.
* wip
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
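A rough sketch of the approach, using only the standard library and a hypothetical parseRow callback (the actual promscrape code reuses buffers and parses the Prometheus text exposition format):

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// scrapeIntoMemory reads the whole response body into memory first and
// only then parses the buffered data row by row, so parsing doesn't
// prolong the time the target connection stays busy.
func scrapeIntoMemory(url string, parseRow func(line string) error) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return fmt.Errorf("cannot read response from %s: %w", url, err)
	}

	sc := bufio.NewScanner(bytes.NewReader(body))
	for sc.Scan() {
		if err := parseRow(sc.Text()); err != nil {
			return err
		}
	}
	return sc.Err()
}
```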
The reason is to cover vulnerability GO-2022-0969
Found in: net/http@go1.18.5
Fixed in: net/http@go1.19.1
More info: https://pkg.go.dev/vuln/GO-2022-0969
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This adds the following push-related metrics when -pushmetrics.url is set:
- metrics_push_interval_seconds
- metrics_push_total
- metrics_push_errors_total
- metrics_push_bytes_pushed_total
- metrics_push_duration_seconds
- metrics_push_block_size_bytes
Updates https://github.com/VictoriaMetrics/metrics/issues/35
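For context, a minimal sketch of enabling push from a Go program via the VictoriaMetrics/metrics library, which is where the push machinery lives (the URL, interval and extra labels below are illustrative assumptions):

```go
package main

import (
	"time"

	"github.com/VictoriaMetrics/metrics"
)

func main() {
	// Periodically push all registered metrics to the given URL.
	// The library's push code is what exposes the metrics_push_* series
	// listed above, describing the push process itself.
	if err := metrics.InitPush(
		"http://victoria-metrics:8428/api/v1/import/prometheus", // illustrative push URL
		10*time.Second,       // illustrative push interval
		`instance="example"`, // illustrative extra labels
		true,                 // also push process metrics
	); err != nil {
		panic(err)
	}

	requests := metrics.NewCounter(`requests_total`)
	for {
		requests.Inc()
		time.Sleep(time.Second)
	}
}
```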
Previously the cache could store up to 10K unique regexps. When every regexp is huge (e.g. hundreds of kilobytes),
the total cache size could grow to multiple gigabytes. Now the cache size is limited by the total length
of all cached regexps, so huge regexps won't result in high memory usage for the cache.
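The idea can be sketched as a cache that bounds memory by the summed length of the cached regexp strings rather than by the number of entries (a simplified illustration, not the actual cache code):

```go
package main

import (
	"regexp"
	"sync"
)

// regexpCache caches compiled regexps and limits memory by the total
// length of the cached regexp strings instead of the entry count.
type regexpCache struct {
	mu           sync.Mutex
	m            map[string]*regexp.Regexp
	charsCurrent int
	charsLimit   int
}

func newRegexpCache(charsLimit int) *regexpCache {
	return &regexpCache{
		m:          map[string]*regexp.Regexp{},
		charsLimit: charsLimit,
	}
}

func (rc *regexpCache) get(expr string) (*regexp.Regexp, error) {
	rc.mu.Lock()
	defer rc.mu.Unlock()
	if re := rc.m[expr]; re != nil {
		return re, nil
	}
	re, err := regexp.Compile(expr)
	if err != nil {
		return nil, err
	}
	// Reset the cache once the accumulated length of cached regexps
	// exceeds the limit, so a few huge regexps cannot blow up memory.
	if rc.charsCurrent+len(expr) > rc.charsLimit {
		rc.m = map[string]*regexp.Regexp{}
		rc.charsCurrent = 0
	}
	rc.m[expr] = re
	rc.charsCurrent += len(expr)
	return re, nil
}
```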
add progress bars to the VM importer
The new progress bars are supposed to display the processing speed of each
VM importer worker. This info should help identify whether there is a bottleneck
on the VM side during the import process, without waiting for it to finish.
The new progress bars can be disabled by passing the `vm-disable-progress-bar` flag.
Plotting multiple progress bars requires using the experimental progress bar pool
from github.com/cheggaaa/pb/v3. Switching to the progress bar pool required changes
in all import modes.
The openTSDB mode wasn't changed due to its implementation, which implies an individual
progress bar per series; because of this, using the pool wasn't possible.
Signed-off-by: dmitryk-dk <kozlovdmitriyy@gmail.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
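A rough sketch of driving several bars through the pb/v3 pool (worker counts, totals and sleep intervals are illustrative; the vmctl code wires the bars into its import workers):

```go
package main

import (
	"sync"
	"time"

	"github.com/cheggaaa/pb/v3"
)

func main() {
	const workers = 4
	const itemsPerWorker = 100

	// One bar per importer worker, rendered together via the pool.
	bars := make([]*pb.ProgressBar, workers)
	for i := range bars {
		bars[i] = pb.New(itemsPerWorker)
	}
	pool := pb.NewPool(bars...)
	if err := pool.Start(); err != nil {
		panic(err)
	}

	var wg sync.WaitGroup
	for _, bar := range bars {
		wg.Add(1)
		go func(bar *pb.ProgressBar) {
			defer wg.Done()
			for i := 0; i < itemsPerWorker; i++ {
				bar.Increment()
				time.Sleep(10 * time.Millisecond)
			}
			bar.Finish()
		}(bar)
	}
	wg.Wait()
	_ = pool.Stop()
}
```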