Previously the netstorage.MustStop() call didn't free up all the resources,
so a subsequent call to netstorage.Init() would panic.
This change allows writing tests that call netstorage.Init() + netstorage.MustStop() in a loop.
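A minimal sketch of such a test (treat this as an illustration, not the shipped test: the import path matches the repo layout, but Init() may take configuration arguments in the actual package):
```go
package netstorage_test

import (
	"testing"

	// Import path is an assumption; adjust to the actual netstorage package location.
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
)

// TestInitMustStopLoop verifies that MustStop() releases all the resources
// acquired by Init(), so Init() can be called again without panicking.
// The exact Init() signature may differ in the real package.
func TestInitMustStopLoop(t *testing.T) {
	for i := 0; i < 5; i++ {
		netstorage.Init()
		netstorage.MustStop()
	}
}
```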
* feat: add a maximum number of displayed series per tab
* feat: add warning on PredefinedPanels.tsx
* docs/CHANGELOG.md: vmui limit number of plotted series
* wip
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
* Optimize fast path for /api/v1/import when importing numeric values
* Move the docs about the change from features to bugfixes at docs/CHANGELOG.md
* Update tests at lib/protoparser/vmimport
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3161
* app/vmselect: properly handle JSON export and import via the `api/v1/{export, import}` API
* app/vmselect: update convert function
* app/vmselect: export null if `math.IsNaN(v)`
* app/vmselect: get float from json
* lib/protoparser: add test
* docs: add change log
* lib/protoparser: make the export and import APIs compatible
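A minimal sketch of the NaN handling mentioned in the list above; the helper names here are illustrative, not the actual lib/protoparser functions:
```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// appendJSONValue writes a sample value in JSON form: NaN has no JSON
// representation, so it is exported as null.
func appendJSONValue(dst []byte, v float64) []byte {
	if math.IsNaN(v) {
		return append(dst, "null"...)
	}
	return strconv.AppendFloat(dst, v, 'g', -1, 64)
}

// parseJSONValue converts the JSON token back to a float64 on import,
// mapping null back to NaN.
func parseJSONValue(s string) (float64, error) {
	if s == "null" {
		return math.NaN(), nil
	}
	return strconv.ParseFloat(s, 64)
}

func main() {
	fmt.Println(string(appendJSONValue(nil, math.NaN()))) // null
	fmt.Println(string(appendJSONValue(nil, 1.5)))        // 1.5
}
```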
Incorrect 301 redirects can be cached by user agents such as web browsers.
This can complicate the recovery procedure after the incorrect redirect is fixed,
e.g. the web browser cache must be reset.
Related issue: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1752
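The usual fix pattern is a temporary 302 redirect instead of a cacheable permanent 301. A hedged Go sketch (the path and target below are hypothetical and not necessarily what this change did):
```go
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/graph", func(w http.ResponseWriter, r *http.Request) {
		// Use a temporary redirect (302) instead of http.StatusMovedPermanently (301),
		// so user agents don't cache a possibly incorrect redirect target.
		http.Redirect(w, r, "/vmui/", http.StatusFound)
	})
	log.Fatal(http.ListenAndServe(":8428", nil))
}
```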
Previously empty series (e.g. series with all NaN samples) were passed to aggregate functions.
Such series must be ignored by all the aggregate functions.
So, for consistency, it is better to filter out empty series before applying aggregate functions.
* app/vmselect: ignore empty series for `limit_offset`
VictoriaMetrics doesn't return empty series (with all NaN values) to
the user. But such series are filtered out only after transform functions are applied.
This means `limit_offset` accounts for empty series as well.
For example, let's consider the following data set:
```
time series:
foo{label="1"} NaN, NaN, NaN, NaN // empty series
foo{label="2"} 1, 2, 3, 4
foo{label="3"} 4, 3, 2, 1
```
When the user requests all series for metric `foo`, the empty series
is filtered out:
```
/query=foo:
foo{label="v2"} 1, 2, 3, 4
foo{label="v3"} 4, 3, 2, 1
```
But `limit_offset(1, 1, foo)` is applied to the original series list, which hasn't been filtered yet.
So it returns `foo{label="v2"}` (it skips the first series in the list):
```
/query=limit_offset(1, 1, foo):
foo{label="v2"} 1, 2, 3, 4
```
The expected behavior is to apply `limit_offset` to the already filtered list,
so the result is `foo{label="v3"}`:
```
/query=limit_offset(1, 1, foo):
foo{label="v3"} 4, 3, 2, 1
```
The change does exactly that: it filters out empty series before applying `limit_offset`.
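A minimal sketch of that filtering step; the series type and function names are illustrative rather than the actual vmselect code:
```go
package main

import (
	"fmt"
	"math"
)

// series is a simplified stand-in for the vmselect timeseries type.
type series struct {
	labels map[string]string
	values []float64
}

// removeEmptySeries drops series consisting only of NaN values,
// so limit_offset operates on the same list the user would see.
func removeEmptySeries(tss []series) []series {
	result := tss[:0]
	for _, ts := range tss {
		allNaN := true
		for _, v := range ts.values {
			if !math.IsNaN(v) {
				allNaN = false
				break
			}
		}
		if !allNaN {
			result = append(result, ts)
		}
	}
	return result
}

// limitOffset returns at most limit series starting from offset,
// applied after empty series have been removed.
func limitOffset(limit, offset int, tss []series) []series {
	tss = removeEmptySeries(tss)
	if offset >= len(tss) {
		return nil
	}
	tss = tss[offset:]
	if limit < len(tss) {
		tss = tss[:limit]
	}
	return tss
}

func main() {
	tss := []series{
		{labels: map[string]string{"label": "v1"}, values: []float64{math.NaN(), math.NaN()}},
		{labels: map[string]string{"label": "v2"}, values: []float64{1, 2, 3, 4}},
		{labels: map[string]string{"label": "v3"}, values: []float64{4, 3, 2, 1}},
	}
	out := limitOffset(1, 1, tss)
	fmt.Println(out[0].labels["label"]) // v3
}
```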
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* app/vmselect: ignore empty series for `limit_offset`
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The workaround was introduced to fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/962.
However, it didn't prove useful. Instead, it is recommended to use the `increase_pure` function.
Removing the workaround makes VM produce accurate results when calculating
the `delta` or `increase` functions over slow-changing counters with varying intervals
between data points.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmselect/promql: add alphanumeric sort by label (sort_by_label_numeric)
* vmselect/promql: fix tests, add documentation
* vmselect/promql: update test
* vmselect/promql: update for alphanumeric sorting, fix tests
* vmselect/promql: remove comments
* vmselect/promql: cleanup
* vmselect/promql: avoid memory allocations, update functions descriptions
* vmselect/promql: make linter happy (remove ineffectual assignment)
* vmselect/promql: add test case, fix behavior when strings are equal
* vendor: update github.com/VictoriaMetrics/metricsql from v0.44.1 to v0.45.0
this adds support for the sort_by_label_numeric and sort_by_label_numeric_desc functions (a sketch of the numeric-aware comparison follows this list)
* wip
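A hedged sketch of the numeric-aware comparison idea behind sort_by_label_numeric (simplified; the real metricsql implementation uses a more elaborate natural ordering):
```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// lessNumeric compares two label values: values that parse as numbers are
// compared numerically and sort before non-numeric values; everything else
// falls back to plain string order.
func lessNumeric(a, b string) bool {
	fa, errA := strconv.ParseFloat(a, 64)
	fb, errB := strconv.ParseFloat(b, 64)
	switch {
	case errA == nil && errB == nil:
		return fa < fb // both numeric: compare as numbers
	case errA == nil:
		return true // numeric values sort before non-numeric ones
	case errB == nil:
		return false
	default:
		return a < b // neither is numeric: plain string order
	}
}

func main() {
	values := []string{"10", "2", "1", "foo"}
	sort.Slice(values, func(i, j int) bool { return lessNumeric(values[i], values[j]) })
	fmt.Println(values) // [1 2 10 foo]
}
```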
* lib/promscrape: read response body into memory in stream parsing mode before parsing it
This reduces scrape duration for targets returning big responses.
The response body was already read into memory in stream parsing mode before this change,
so this commit shouldn't increase memory usage (a sketch of the buffer-then-parse approach follows this entry).
* wip
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
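A minimal sketch of the buffer-then-parse approach mentioned above; the line-oriented callback is a placeholder, not the actual promscrape API:
```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"strings"
)

// parseStream reads the whole response body into memory first and then
// feeds it to a line-oriented parser. Reading the body upfront keeps the
// connection busy for less time, which reduces scrape duration for targets
// returning big responses.
func parseStream(body io.Reader, processLine func(line string)) error {
	data, err := io.ReadAll(body)
	if err != nil {
		return fmt.Errorf("cannot read response body: %w", err)
	}
	scanner := bufio.NewScanner(bytes.NewReader(data))
	for scanner.Scan() {
		processLine(scanner.Text())
	}
	return scanner.Err()
}

func main() {
	body := strings.NewReader("metric_a 1\nmetric_b 2\n")
	_ = parseStream(body, func(line string) { fmt.Println(line) })
}
```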
- Add getCommonParamsWithDefaultDuration function and use it at /api/v1/series, /api/v1/labels and /api/v1/label/.../values
- Document the default behaviour of setting a 5-minute time range if the start arg isn't passed to /api/v1/series, /api/v1/labels and /api/v1/label/.../values (a sketch of this defaulting follows below)
- Document the change at docs/CHANGELOG.md
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3052
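A hedged sketch of that defaulting behavior; getCommonParamsWithDefaultDuration internals aren't shown in the source, so the names below are illustrative:
```go
package main

import (
	"fmt"
	"time"
)

// defaultTimeRange returns the [start, end] time range to use for
// /api/v1/series, /api/v1/labels and /api/v1/label/.../values:
// if the start arg isn't passed, it defaults to end minus 5 minutes.
func defaultTimeRange(start, end time.Time) (time.Time, time.Time) {
	if end.IsZero() {
		end = time.Now()
	}
	if start.IsZero() {
		start = end.Add(-5 * time.Minute)
	}
	return start, end
}

func main() {
	start, end := defaultTimeRange(time.Time{}, time.Time{})
	fmt.Println(start.Format(time.RFC3339), end.Format(time.RFC3339))
}
```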
The panic could be triggered during the processing of data blocks received
from vmstorage nodes when some of the vmstorage nodes return an error
or when `-replicationFactor` is set to values higher than 2 at `vmselect`.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3058
Note that the parallel execution of `union()` args may take more memory and CPU time
than the sequential execution if the args contain heavy queries, which may load all the available CPU,
disk and memory resources at the vmselect and vmstorage levels.
- Use getScalar() function for obtaining the expected scalar from phi arg
- Shorten the error message returned to the user when an incorrect phi arg is passed to histogram_quantiles
- Improve the description of this bugfix in the docs/CHANGELOG.md
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3026
The io/ioutil package has been deprecated since Go1.16 - see https://tip.golang.org/doc/go1.16#ioutil
VictoriaMetrics requires at least Go1.18, so it is time to remove io/ioutil usage from the source code.
This is a follow-up for 02ca2342ab
The ioutil.{Read|Write}File functions have been deprecated since Go1.16 -
see https://tip.golang.org/doc/go1.16#ioutil
VictoriaMetrics requires at least Go1.18, so it is safe to remove ioutil usage
from the source code.
This is a follow-up for 02ca2342ab
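The typical replacements look like this; a general illustration of the migration, not a specific diff from the repo:
```go
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	// Before Go1.16: ioutil.WriteFile / ioutil.ReadFile / ioutil.ReadAll.
	// Since Go1.16 the same functionality lives in the os and io packages.
	if err := os.WriteFile("example.txt", []byte("hello"), 0o600); err != nil {
		panic(err)
	}
	data, err := os.ReadFile("example.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))

	f, err := os.Open("example.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	all, err := io.ReadAll(f) // replaces ioutil.ReadAll
	if err != nil {
		panic(err)
	}
	fmt.Println(len(all))
}
```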
The bug results in a `duplicate output time series` error,
because the same time series is added twice to the orderedMetricNames list
inside tmpBlocksFileWrapper.Finalize().
While at it, properly release all the tmpBlocksFile structs on tbf.Finalize() error.
Previously only the remaining tbf entries were released, which could result in a resource leak.
* Explicitly store a pointer to UserReadableError in the error interface.
Previously Go automatically converted the value to a pointer before storing in the error interface.
* Add an Unwrap() method to UserReadableError, so it can be used transparently with other code
that calls errors.Is() and errors.As().
* Document the change in docs/CHANGELOG.md
When a read query fails, VM returns a rich error message with
all the details. While these details might be useful
for debugging specific cases, they're usually too verbose
for users.
The new error type `UserReadableError` is supposed
to allow returning only the most important parts
of the error trace to the user. This should improve error readability
in web interfaces such as VMUI or Grafana.
The full error trace is still logged with the full context
and can be found in vmselect logs.
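A minimal sketch of such an error type; the field and package layout here are illustrative, while Unwrap() behaves as described above:
```go
package main

import (
	"errors"
	"fmt"
)

// UserReadableError wraps an error whose message is short enough to be
// shown to the user in VMUI or Grafana, while the full error trace is
// still logged by vmselect.
type UserReadableError struct {
	Err error
}

// Error returns the user-facing part of the error.
func (ure *UserReadableError) Error() string {
	return ure.Err.Error()
}

// Unwrap lets errors.Is() and errors.As() see the underlying error,
// so UserReadableError works transparently with the rest of the code.
func (ure *UserReadableError) Unwrap() error {
	return ure.Err
}

func main() {
	base := errors.New("cannot select time series")
	// Store a pointer explicitly, so type assertions on *UserReadableError work.
	var err error = &UserReadableError{Err: fmt.Errorf("query failed: %w", base)}

	var ure *UserReadableError
	fmt.Println(errors.As(err, &ure)) // true
	fmt.Println(errors.Is(err, base)) // true
}
```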
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This should improve vmselect performance scalability on systems with many CPU cores.
The following tasks were done:
- Use separate temporary files for storing the data read from each vmstorage node.
This may result in the following potential issues:
- Up to N times higher memory usage for performing each query where N is the number
of vmstorage nodes known to vmselect.
This issue shouldn't increase the chance of out-of-memory errors in most cases,
since the per-query memory overhead is quite low compared to the overall vmselect memory usage.
- Up to N times higher number of open temporary files where N is the number
of vmstorage nodes known to vmselect.
This issue should be fixed by increasing the limit on the number of open files.
- Use separate counters for each vmstorage node for various stats calculations
when reading the data from vmstorage nodes.
Previously a single syncwg.WaitGroup was used for tracking the lifetime of processBlock callbacks
across all the per-vmstorage goroutines. This could be slow on systems with many CPU cores
because of inter-CPU synchronization overhead.
Use a separate per-vmstorage sync.WaitGroup instead in order to reduce inter-CPU synchronization overhead.
This should improve performance for heavy queries over a big number of blocks on multi-CPU systems.
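A hedged sketch of the per-vmstorage WaitGroup idea (heavily simplified: the real code uses the internal syncwg package and a different callback signature):
```go
package main

import (
	"fmt"
	"sync"
)

// storageNode is a simplified stand-in for a vmstorage connection.
type storageNode struct{ addr string }

// processBlocks runs the per-vmstorage goroutines. Each node gets its own
// WaitGroup for tracking in-flight processBlock callbacks, so goroutines
// running on different CPU cores don't contend on a single shared counter.
func processBlocks(nodes []storageNode, processBlock func(node storageNode, block int)) {
	var outer sync.WaitGroup
	for _, node := range nodes {
		outer.Add(1)
		go func(node storageNode) {
			defer outer.Done()
			// Per-node WaitGroup instead of a single WaitGroup shared
			// across all per-vmstorage goroutines.
			var wg sync.WaitGroup
			for block := 0; block < 3; block++ {
				wg.Add(1)
				go func(block int) {
					defer wg.Done()
					processBlock(node, block)
				}(block)
			}
			wg.Wait()
		}(node)
	}
	outer.Wait()
}

func main() {
	nodes := []storageNode{{addr: "vmstorage-1"}, {addr: "vmstorage-2"}}
	processBlocks(nodes, func(n storageNode, b int) {
		fmt.Printf("%s: block %d\n", n.addr, b)
	})
}
```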
* vmselect: cover special cases for vmalert's routing in single-node version
* remove trailing `/` from requests
* redirect to vmalert's home page when `/vmalert` is requested.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: fix review comments
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* Update app/vmselect/main.go
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Reduce inter-CPU communications when processing a query over a big number of time series.
This should improve performance for queries over a big number of time series
on systems with many CPU cores.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2896
Based on b596ac3745
Thanks to @zqyzyq for the idea.
The change allows specifying a default value for the `getScrapeInterval`
function when the actual interval can't be calculated.
Before the change, the function returned `maxSilenceInterval` (5m)
in such cases, which may be incorrect for instant query processing (see the sketch after the scenario below).
The specific scenario where using `maxSilenceInterval` caused issues
is the following:
1. Series becomes stale;
2. Client (in this case vmalert) continues to request series every 15s;
3. Database returns empty results as expected;
4. But at some specific moment the database returns data points from `now()-5m`,
because the lookback window was extended to `maxSilenceInterval`.
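A hedged sketch of the idea; the real getScrapeInterval works on raw millisecond timestamps inside vmselect, so the signature below is illustrative:
```go
package main

import (
	"fmt"
	"time"
)

// getScrapeInterval estimates the interval between samples from their timestamps.
// When the interval can't be calculated (fewer than two samples), it returns
// the caller-provided default instead of a hardcoded maxSilenceInterval.
func getScrapeInterval(timestamps []time.Time, defaultInterval time.Duration) time.Duration {
	if len(timestamps) < 2 {
		return defaultInterval
	}
	total := timestamps[len(timestamps)-1].Sub(timestamps[0])
	return total / time.Duration(len(timestamps)-1)
}

func main() {
	fmt.Println(getScrapeInterval(nil, 15*time.Second)) // 15s: falls back to the default
}
```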
Signed-off-by: hagen1778 <roman@victoriametrics.com>
- Use binary search instead of linear scan when locating the run of smallest timestamps
in blocks with intersected time ranges. This should improve performance
when merging blocks with a big number of samples
- Skip samples with duplicate timestamps. This should increase query performance
in the cluster version of VictoriaMetrics with replication enabled.
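A minimal sketch of the binary search plus duplicate-timestamp skipping, reduced to plain int64 slices rather than storage blocks:
```go
package main

import (
	"fmt"
	"sort"
)

// mergeTimestamps merges two sorted timestamp slices, skipping duplicates.
// sort.Search performs the binary search for the run of smallest timestamps
// in a, i.e. the prefix of a that precedes the first timestamp of b.
func mergeTimestamps(a, b []int64) []int64 {
	var result []int64
	for len(a) > 0 && len(b) > 0 {
		if a[0] > b[0] {
			a, b = b, a
		}
		// Binary search: how many leading timestamps of a are below b[0]?
		n := sort.Search(len(a), func(i int) bool { return a[i] >= b[0] })
		result = append(result, a[:n]...)
		a = a[n:]
		// Skip samples with duplicate timestamps.
		for len(a) > 0 && a[0] == b[0] {
			a = a[1:]
		}
	}
	result = append(result, a...)
	result = append(result, b...)
	return result
}

func main() {
	fmt.Println(mergeTimestamps([]int64{1, 3, 5}, []int64{3, 4, 6})) // [1 3 4 5 6]
}
```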
- show dates in human-readable format, e.g. 2022-05-07, instead of a numeric value
- limit the maximum length of queries and filters shown in trace messages
Production experience shows that 100k is too big for /api/v1/series.
It leads to increased CPU usage when Grafana queries /api/v1/series over VictoriaMetrics
with a big number of time series during auto-completion and when modifying template variables.
Previously SearchMetricNames was returning unmarshaled metric names.
This wasn't great for vmstorage, which had to spend additional CPU time
on marshaling the metric names before sending them to vmselect.
While at it, remove possible duplicate metric names, which could occur when
multiple samples for new time series are ingested via concurrent requests.
Also sort the metric names before returning them to the client.
This simplifies debugging of the returned metric names across repeated requests to /api/v1/series
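A minimal sketch of the deduplicate-and-sort step, operating on already marshaled metric names as plain strings (helper name is illustrative):
```go
package main

import (
	"fmt"
	"sort"
)

// dedupAndSortMetricNames removes duplicate metric names, which may appear
// when samples for new series are ingested via concurrent requests, and
// sorts the result so repeated /api/v1/series requests are easy to compare.
func dedupAndSortMetricNames(metricNames []string) []string {
	seen := make(map[string]struct{}, len(metricNames))
	result := metricNames[:0]
	for _, name := range metricNames {
		if _, ok := seen[name]; ok {
			continue
		}
		seen[name] = struct{}{}
		result = append(result, name)
	}
	sort.Strings(result)
	return result
}

func main() {
	names := []string{`foo{job="a"}`, `bar`, `foo{job="a"}`}
	fmt.Println(dedupAndSortMetricNames(names)) // [bar foo{job="a"}]
}
```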
The metrics `vm_partial_results_total` and `vm_requests_total` serve
a similar purpose, but contain inconsistent sets of labels.
This change updates `vm_partial_results_total` labels to be consistent
with `vm_requests_total`.
The change breaks backward compatibility under the assumption that
`vm_partial_results_total` wasn't widely used, since it is
not documented and absent from the alerts and dashboards.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmselect: limit `end` param max value by 2d in future
The change is applied only to service handlers like `/labels` or `/series`
and limits the `end` param to a max value of now() + 2 days. The same limit
is applied to the ingested data, so there is no reason to allow requesting data
further in the future than that.
The change is also needed for corner cases like https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2669
where a too high `end` value triggers an inefficient global index search.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
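A minimal sketch of the clamping logic; the constant and function names below are illustrative, not the actual vmselect handlers:
```go
package main

import (
	"fmt"
	"time"
)

// maxFutureOffset limits how far in the future the `end` query arg may point.
const maxFutureOffset = 2 * 24 * time.Hour

// clampEnd caps the user-supplied `end` timestamp at now() + 2 days for
// handlers like /api/v1/labels and /api/v1/series, since no data can be
// ingested further in the future than that anyway.
func clampEnd(end time.Time) time.Time {
	maxEnd := time.Now().Add(maxFutureOffset)
	if end.After(maxEnd) {
		return maxEnd
	}
	return end
}

func main() {
	farFuture := time.Now().Add(365 * 24 * time.Hour)
	fmt.Println(clampEnd(farFuture).Before(time.Now().Add(maxFutureOffset + time.Minute))) // true
}
```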
* docs/CHANGELOG.md: document the bugfix
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
* feat: make the datepicker default to the last 30 min
* fix: correct spinner while loading data
* feat: change legend style
* app/vmselect: `make vmui-update`
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
- Remove unused js bloatware from /targets page. This strips down binary size by more than 100Kb
- Add /service-discovery page for API compatibility with Prometheus
- Properly load bootstrap.min.css from /prometheus/targets
- Serve static contents for the /targets page from app/vminsert instead of app/vmselect, because the /targets page is served from app/vminsert