Commit graph

1161 commits

Author SHA1 Message Date
Zakhar Bessarab
7195862a51
app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-21 16:37:34 +04:00
Guillem Jover
76d205feae
spelling and grammar fixes via codespell ()
### Describe Your Changes

Fix many spelling errors and some grammar, including misspellings in
filenames. 

The change also fixes a typo in metric `vm_mmaped_files` to `vm_mmapped_files`.
While this is a breaking change, this metric isn't used in alerts or dashboards. 
So it seems to have low impact on users.

The change also deprecates `cspell` as it is much heavier and less usable. 
---------

Co-authored-by: Andrii Chubatiuk <achubatiuk@victoriametrics.com>
Co-authored-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
2025-03-17 16:32:10 +01:00
Roman Khavronenko
dc1f7ef0d0
app/vmselect/promql: optimize binary operator or for common cases ()
The optimization touches 2 things:
1. Reduces the number of allocations when comparing canonical metric names
between the left and right parts of the expression.
2. Adds a fast path for cases when the right part of the expression returns
a scalar: `series_selector or on() vector(1)`, which is a typical
expression.

```
benchcmp old.txt new.txt
benchcmp is deprecated in favor of benchstat: https://pkg.go.dev/golang.org/x/perf/cmd/benchstat
benchmark                                             old ns/op     new ns/op     delta
BenchmarkBinaryOpOr/tss:1_or_tss:1-14                 291           272           -6.56%
BenchmarkBinaryOpOr/tss:1_or_tss:1000-14              44590         28592         -35.88%
BenchmarkBinaryOpOr/tss:1000_or_tss:1-14              103124        39563         -61.64%
BenchmarkBinaryOpOr/tss:1000_or_tss:1000-14           20386150      1859335       -90.88%
BenchmarkBinaryOpOr/tss:1000_or_on()_vector(0)-14     91382         36805         -59.72%
```
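
As an illustration of point 2, here is a minimal Go sketch of such a fast path, assuming simplified types and a hypothetical `binaryOpOrScalarFastPath` helper; it is not the actual VictoriaMetrics implementation:

```go
package main

import (
	"fmt"
	"math"
)

// timeseries is a simplified stand-in for the promql timeseries type (assumption).
type timeseries struct {
	Timestamps []int64
	Values     []float64
}

// binaryOpOrScalarFastPath sketches the fast path for `left or on() vector(x)`:
// when the right side is a single series without labels, there is no need to
// build per-label-set maps for matching - the scalar only fills timestamps
// where every left series has a NaN (missing) value.
func binaryOpOrScalarFastPath(left []*timeseries, scalar *timeseries) []*timeseries {
	filler := &timeseries{
		Timestamps: scalar.Timestamps,
		Values:     make([]float64, len(scalar.Values)),
	}
	for i, v := range scalar.Values {
		filler.Values[i] = v
		for _, ts := range left {
			if !math.IsNaN(ts.Values[i]) {
				// This timestamp is already covered by the left side.
				filler.Values[i] = math.NaN()
				break
			}
		}
	}
	return append(left, filler)
}

func main() {
	left := []*timeseries{{Timestamps: []int64{0, 10, 20}, Values: []float64{1, math.NaN(), 3}}}
	scalar := &timeseries{Timestamps: []int64{0, 10, 20}, Values: []float64{1, 1, 1}}
	for _, ts := range binaryOpOrScalarFastPath(left, scalar) {
		fmt.Println(ts.Values)
	}
}
```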

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8382

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 12:07:22 +01:00
alicja-karasiewicz
d47d329ce7
feat: make topN limit configurable from CLI
Implement changes mentioned in 

Allow the administrator to specify the limit of returned TSDB series in
`/api/v1/status/tsdb` by making a TopN limit configurable from CLI.

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: alicja-karasiewicz <alicja.karasiewicz@allegro.com>
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-12 11:33:30 +04:00
Artem Fetishev
c4fd62188a
app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-07 14:15:38 +01:00
Nikolay
b85b28d30a
lib/storage: add tracker for time series metric names statistics
This feature allows tracking query requests by metric name. The tracker
state is stored in memory, capped at 1/100 of the memory allocated to the
storage. If the cap is exceeded, the tracker rejects adding new items and
only registers query requests for already observed metric names.

This feature is disabled by default; the new flag
`-storage.trackMetricNamesStats` enables it.

New API endpoints added to the select component:

* /api/v1/status/metric_names_stats - returns a JSON object
  with usage statistics.
* /admin/api/v1/status/metric_names_stats/reset - resets the internal
  state of the tracker and the tsid cache.

New metrics were added for this feature:

* vm_cache_size_bytes{type="storage/metricNamesUsageTracker"}
* vm_cache_size{type="storage/metricNamesUsageTracker"}
* vm_cache_size_max_bytes{type="storage/metricNamesUsageTracker"}

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4458
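
A minimal Go sketch of the capping behavior described above; the type and method names are hypothetical and the entry-size accounting is illustrative only:

```go
package main

import "sync"

// metricNamesTracker sketches the capped in-memory tracker described above.
// The real cap is 1/100 of the memory allocated to the storage.
type metricNamesTracker struct {
	mu           sync.Mutex
	maxSizeBytes int
	curSizeBytes int
	queryCounts  map[string]uint64
}

func newMetricNamesTracker(maxSizeBytes int) *metricNamesTracker {
	return &metricNamesTracker{
		maxSizeBytes: maxSizeBytes,
		queryCounts:  make(map[string]uint64),
	}
}

// RegisterQueryRequest counts a query request for the given metric name.
// Once the cap is reached, new metric names are rejected, while requests
// for already observed names are still counted.
func (t *metricNamesTracker) RegisterQueryRequest(metricName string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.queryCounts[metricName]; ok {
		t.queryCounts[metricName]++
		return
	}
	entrySize := len(metricName) + 8
	if t.curSizeBytes+entrySize > t.maxSizeBytes {
		return // cap exceeded - do not track new names
	}
	t.curSizeBytes += entrySize
	t.queryCounts[metricName] = 1
}

func main() {
	t := newMetricNamesTracker(1 << 20)
	t.RegisterQueryRequest(`http_requests_total`)
}
```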

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-06 22:06:50 +01:00
Zakhar Bessarab
7dfaef9088
app/vmselect/promql: fix panic when using @ with a series which is not present at the start of the query ()
### Describe Your Changes

Previously, "selector @ another_selector" assumed that
"another_selector" metric is supposed to exist since "start" used in the
query.

If the query was evaluated in the following case (timestamps):
- start - 2, end - 10
- "another_selector" 5,6,7,8,9,10
- "selector" The resulting "at" timestamp would be taken from NaN (as
`int64(NaN * 1000)`), causing a panic or invalid behavior later.

Note that type cast of `NaN` to int64 is also platform-dependent, so
value of `int64(math.NaN() * 1000)` can produce `0` or max int64 on
different platforms and versions of Go.

This commit changes this and uses the first non-NaN value instead. This
makes it easier for users, since series are not always aligned, and
returning an error in this case would disallow using this for some time
ranges.
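
A minimal sketch of the fix idea, assuming simplified types (not the actual evaluation code):

```go
package main

import (
	"fmt"
	"math"
)

// firstNonNaNTimestamp returns the timestamp of the first non-NaN value,
// instead of blindly taking values[0], which may be NaN when the series
// starts after the query `start`. The second return value reports whether
// any non-NaN value exists.
func firstNonNaNTimestamp(timestamps []int64, values []float64) (int64, bool) {
	for i, v := range values {
		if !math.IsNaN(v) {
			return timestamps[i], true
		}
	}
	return 0, false
}

func main() {
	timestamps := []int64{2000, 3000, 4000, 5000}
	values := []float64{math.NaN(), math.NaN(), 7, 8}
	ts, ok := firstNonNaNTimestamp(timestamps, values)
	fmt.Println(ts, ok) // 4000 true
}
```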

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8444

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-06 16:42:19 +01:00
Zhu Jiekun
117b7ce6b7
bugfix: negative rate result when lookbehind window longer than search.maxLookback ()
### Describe Your Changes

 

Fix the negative rate result when the lookbehind window is longer than
`-search.maxLookback` or `-search.maxStalenessInterval` and the data
contains gaps.

This issue was introduced in
[v1.110.0](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8072).

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-02-27 22:52:39 +01:00
f41gh7
93001ac931
make vmui-update 2025-02-21 14:08:00 +01:00
Artem Fetishev
ba0d7dc2fc
Allow disabling per-day index ()
### Describe Your Changes

Allow disabling the per-day index using the `-disablePerDayIndex` flag.
This should significantly improve the ingestion rate and decrease the
disk space usage for the use cases that assume small or no churn rate.
See the docs added to `docs/README.md` for details.

Both improvements are due to no data written to the per-day index.
Benchmark results:

```shell
rm -Rf ./lib/storage/Benchmark*; go test ./lib/storage -run=NONE -bench=BenchmarkStorageInsertWithAndWithoutPerDayIndex --loggerLevel=ERROR
goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/storage
cpu: 13th Gen Intel(R) Core(TM) i7-1355U
BenchmarkStorageInsertWithAndWithoutPerDayIndex/HighChurnRate/perDayIndexes-12                 1        3850268120 ns/op                39.56 data-MiB          28.20 indexdb-MiB           259722 rows/s
BenchmarkStorageInsertWithAndWithoutPerDayIndex/HighChurnRate/noPerDayIndexes-12               1        2916865725 ns/op                39.57 data-MiB          25.73 indexdb-MiB           342834 rows/s
BenchmarkStorageInsertWithAndWithoutPerDayIndex/NoChurnRate/perDayIndexes-12                   1        2218073474 ns/op                 9.772 data-MiB         13.73 indexdb-MiB           450842 rows/s
BenchmarkStorageInsertWithAndWithoutPerDayIndex/NoChurnRate/noPerDayIndexes-12                 1        1295140898 ns/op                 9.771 data-MiB          0.3566 indexdb-MiB         772119 rows/s
PASS
ok      github.com/VictoriaMetrics/VictoriaMetrics/lib/storage  11.421s
```

Signed-off-by: Artem Fetishev <wwctrsrx@gmail.com>
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-02-14 12:35:51 +01:00
Phuong Le
23147c8339
docs: search.lookback-delta -> query.lookback-delta () 2025-02-11 22:15:39 +01:00
Zakhar Bessarab
8e576854b3
make vmui-update
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-02-07 18:41:53 +04:00
Roman Khavronenko
0a6f6d0bba
app/vmselect/netstorage: stop exposing the `vm_index_search_duration_seconds` metric
This metric records time spent on search operations in the index.
It was introduced in
[v1.56.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.56.0).
However, this metric was used neither in dashboards nor in alerting
rules.
It also has high cardinality because index search operations latency can
differ by 3 orders of magnitude.

See
[example](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/#/cardinality?date=2025-02-05&match=vm_index_search_duration_seconds_bucket&topN=10&focusLabel=).

 Hence, dropping it as unused.

---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-02-06 13:40:52 +01:00
Roman Khavronenko
fa2107bbec
metricsql: bump to v0.83.0 ()
metricsql: bump to v0.83.0

See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7703

The update also returns an error if the metric name is specified twice in
a metrics selector.
For example, `foo{__name__="bar"}` is not allowed anymore. It would
successfully parse before
this change, but it could never satisfy the search filter anyway, so there
was no sense in supporting it. This is why some test cases were removed.

Signed-off-by: hagen1778 <roman@victoriametrics.com>

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-02-01 22:22:29 +01:00
Roman Khavronenko
72837919ae
app/vmselect/promql: fix discrepancies when using or binary operator
The change covers various corner cases when using the `or` binary operator.
See the corresponding issues and pull request for the cases:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7770

Related issues:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7759
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7640


---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-01-31 14:14:17 +01:00
Dmitry Konishchev
17fd53bd37
app/vmselect: properly cancel long running requests on client connection close
At this time `bufferedwriter` [silently ignores connection close
errors](78eaa056c0/lib/bufferedwriter/bufferedwriter.go (L67)).
It may be convenient in some situations (to avoid logging such
unimportant errors), but it's too implicit and unsafe for others.
For example, if you close an [export
API](https://docs.victoriametrics.com/#how-to-export-time-series) client
connection in the middle of communication, VictoriaMetrics won't notice
it and will keep hogging CPU by exporting all the data into nowhere
until it processes all of it. A few retries effectively turn this into
a DoS on the server.

This commit replaces the implicit error suppression with explicit error
handling, which fixes the issue with the export API.
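
A minimal sketch of the explicit handling, assuming a hypothetical writer wrapper rather than the real bufferedwriter API:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
)

// writeBlockExplicit propagates write errors to the caller so that a closed
// client connection stops the export loop instead of being silently ignored.
func writeBlockExplicit(bw *bufio.Writer, block []byte) error {
	if _, err := bw.Write(block); err != nil {
		return fmt.Errorf("cannot send block to client: %w", err)
	}
	return nil
}

// exportBlocks stops as soon as the client connection is closed.
func exportBlocks(w io.Writer, blocks [][]byte) error {
	bw := bufio.NewWriter(w)
	for _, block := range blocks {
		if err := writeBlockExplicit(bw, block); err != nil {
			return err // the caller cancels the remaining export work
		}
	}
	return bw.Flush()
}

func main() {
	_ = exportBlocks(io.Discard, [][]byte{[]byte("block1"), []byte("block2")})
}
```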

The issue was introduced in e78f3ac8ac
2025-01-29 16:09:32 +01:00
f41gh7
f8a0f2fe44
make vmui-update 2025-01-24 14:11:28 +01:00
Roman Khavronenko
dcb6dd5dcb
app/vmselect/promql: respect staleness in removeCounterResets ()
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8072


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-01-23 14:31:32 +01:00
Zhu Jiekun
1f0b03aebe
docs: update docs for *authKey, add authKey to HTTP 401 resp body ()
### Describe Your Changes

Optimize for
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6226

Users who set a `*AuthKey` flag will now receive a new response in the
body:
```go
// query arg not set
The provided authKey '' doesn't match -search.resetCacheAuthKey

// incorrect query arg
The provided authKey '5dxd71hsz==' doesn't match -search.resetCacheAuthKey
```

Previously, they received:
```
The provided authKey doesn't match -search.resetCacheAuthKey
```

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-01-20 12:42:53 +01:00
Hui Wang
4574958e2e
vmselect: add -search.maxDeleteDuration to limit the duration of the `/api/v1/admin/tsdb/delete_series` call ()

Previously, it was limited by `-search.maxQueryDuration`, which can be too
small for delete calls.

part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7857.
2025-01-17 15:09:42 +01:00
Roman Khavronenko
7d2a6764e7
app/vmselect/promql: limit staleness detection for increase/increase_pure/delta ()
`doInternal` has an adaptive staleness detection mechanism. It is
calculated from the timestamp distance between samples in the selected list
of samples. It is dynamic because VM can store signals from many sources
with different sample resolutions. While it works for most cases, there are
edge cases for rollup functions that compare values between windows:
increase, increase_pure, delta.

Edge case 1.
There is a gap in the series because of a missed scrape or two. In this
case staleness detection triggers and increase-like functions assume the
value they need to compare with is 0. As a result, this could produce
spikes for flappy data - see
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/894 This
problem was solved by introducing a `realPrevValue` field -
1f19c167a4.
It stores the closest real sample value on the selected interval and is
used for comparison when samples start after the gap.

Edge case 2.
`realPrevValue` doesn't respect the staleness interval. As a result, even
if the gap between samples is huge (hours), increase-like functions will
not treat it as a new series that started from 0. See
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8002.

Covering both edge cases is tricky, because `realPrevValue` has to
respect and not respect the staleness interval at the same time. In
other words, it should be able to ignore occasional missing scrapes, but
reset if the gap is too big. Since a "too big" gap can't be determined
empirically, I suggest using `-search.maxStalenessInterval` for this
purpose. If `-search.maxStalenessInterval` is set to 0 (default), then
`realPrevValue` ignores the staleness interval. If it is > 0, then
`realPrevValue` respects it as the staleness interval.
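
A minimal sketch of the suggested gating, with hypothetical names (not the actual rollup code):

```go
package main

import (
	"fmt"
	"time"
)

// canUseRealPrevValue reports whether the closest previous raw sample may be
// used as realPrevValue. With -search.maxStalenessInterval=0 (default) the
// gap is ignored; otherwise a sample older than the interval is discarded,
// so the series is treated as a new one starting from 0.
func canUseRealPrevValue(prevTimestamp, windowStart int64, maxStalenessInterval time.Duration) bool {
	if maxStalenessInterval <= 0 {
		return true
	}
	return windowStart-prevTimestamp <= maxStalenessInterval.Milliseconds()
}

func main() {
	// A sample 2 hours before the window is rejected with a 5m staleness interval.
	fmt.Println(canUseRealPrevValue(0, (2 * time.Hour).Milliseconds(), 5*time.Minute))
}
```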

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-01-16 15:01:17 +01:00
Roman Khavronenko
ee3c0c6a87
make: bump golangci-lint to v1.63.4 ()
The new version has additional checks and reduced resource consumption, so
it doesn't time out for our internal repos.

To make the linter happy, I addressed the "redefinition of the built-in
function" lint error.

----
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-01-13 07:18:04 +01:00
Roman Khavronenko
7cab4fd30d
app/vmselect/promql: account for staleness when populating realPrevValue ()
When vmselect processes a rollup function, it fetches all the raw samples
on the requested `start-end` interval of the query. It then loops through
the raw samples, picks the range of samples based on the provided `step`
interval and invokes a rollup function for each of the picked ranges of
samples.

During this processing, vmselect always populates the `realPrevValue`
field with the closest previous raw sample value before the picked range
of samples. This `realPrevValue` is used by rollup functions like
increase_pure or delta to decide whether the counter change happened or
not. For example, we get the counter value == 1. If we've seen this
counter before and its value was also 1 - then no change happened. If we
didn't see it before, then this counter should have started with value=0
and we need to account for `1-0=1` change. All this is required to deal
with situations when scrapes are missing or `step` is too small.
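
That decision can be sketched roughly like this (simplified, with hypothetical names; counter resets inside the window are ignored here):

```go
package main

import (
	"fmt"
	"math"
)

// increasePure computes the counter increase for a window, using the closest
// previous raw value (realPrevValue) when it is available. Without a previous
// value the counter is assumed to have started from 0.
func increasePure(values []float64, realPrevValue float64) float64 {
	if len(values) == 0 {
		return math.NaN()
	}
	prev := 0.0
	if !math.IsNaN(realPrevValue) {
		prev = realPrevValue
	}
	// Counter resets inside the window would be handled separately.
	return values[len(values)-1] - prev
}

func main() {
	// The counter was already at 1 before the window: no increase.
	fmt.Println(increasePure([]float64{1, 1, 1}, 1)) // 0
	// No previous value: the counter is assumed to start from 0.
	fmt.Println(increasePure([]float64{1, 1, 1}, math.NaN())) // 1
}
```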

However, vmselect doesn't check how "old" the `realPrevValue` is. In
other words, it doesn't respect the staleness interval when picking it.
As a result, depending on the `start` and `end` params, vmselect can use
a `realPrevValue` which is a couple of hours old, where the gap is unlikely
to be a temporary scrape failure. Consequently, some increases can be
incorrectly ignored by vmselect.

This change makes sure that vmselect doesn't populate `realPrevValue`
with samples that are older than staleness interval.


### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).


-------------------

To reproduce, create a dataset with one metric `foo` which has samples
with value=1 over an interval of a couple of hours at 15s resolution, and
a gap of an hour in the middle:
<img width="769" alt="image"
src="https://github.com/user-attachments/assets/a39b2740-b741-45f8-ad18-093b7c57c3b3"
/>

Then run `increase(foo[1m])` expression on this time range (disable
cache):
<img width="1472" alt="image"
src="https://github.com/user-attachments/assets/463cece1-f359-4c75-a96c-60092a31cab2"
/>

As a result, there will be one increase at the beginning of the series
and no increase after the gap. Then change the time range so it starts
in the middle of the gap:
<img width="1505" alt="image"
src="https://github.com/user-attachments/assets/f4a460c3-9fd1-4ec7-ab47-15e716ec1019"
/>

Now, there is an increase>0 because the `realPrevValue` wasn't
populated. This is wrong, because it hides the increase of the series.

With the fix, the original increase query on full time range should show
2 increases:
<img width="1492" alt="image"
src="https://github.com/user-attachments/assets/aa9d8a6b-7b22-41f6-9eb9-83b3113a6982"
/>

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-01-10 16:45:44 +01:00
Zakhar Bessarab
1db1841b20
app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-01-10 18:53:33 +04:00
YuDong Tang
5a41bdf329
app/select: add command-line flag -search.maxBinaryOpPushdownLabelValues
### Describe Your Changes

Binary operations like `exprFirst op exprSecond` in VictoriaMetrics are
performed in the following way:
1. Execute exprFirst.
2. Extract **common label filters** from the result of step 1.
3. Apply these common label filters to `exprSecond` and execute it, in
order to retrieve fewer time series from vmstorage nodes.

In step 2, only labels with fewer than `100` (hard-coded) values could be
used as a **common label filter** (e.g. `{common_lb=~"v1|v2|...|v100"}`).

In our scenarios, a label (take the `instance` label as an example) could
have thousands of candidate values. Even though this brings more pressure
to the vmstorage nodes, it's still beneficial if labels with more than 100
values can be used as a filter in `exprSecond`, given enough vmstorage
resources. After adjusting the value from `100` to `10000`, our query
round-trip time dropped significantly from 5s to 2s.

This pull request changes the hard-coded value into a configurable flag.
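
A minimal sketch of the limit check; the flag name comes from the commit title, while the default value and helper code are illustrative assumptions:

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// maxBinaryOpPushdownLabelValues mirrors the new command-line flag that
// replaces the previously hard-coded limit of 100 label values.
var maxBinaryOpPushdownLabelValues = flag.Int("search.maxBinaryOpPushdownLabelValues", 100,
	"The maximum number of values for a common label to push down to the second expression of a binary operation")

// commonLabelFilter builds a label filter such as {instance=~"v1|v2|..."} from
// the values seen in the first expression's result, or returns "" when the
// number of values exceeds the configured limit.
func commonLabelFilter(label string, values []string) string {
	if len(values) == 0 || len(values) > *maxBinaryOpPushdownLabelValues {
		return ""
	}
	return fmt.Sprintf(`{%s=~%q}`, label, strings.Join(values, "|"))
}

func main() {
	flag.Parse()
	fmt.Println(commonLabelFilter("instance", []string{"host-1:9100", "host-2:9100"}))
}
```
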
2025-01-03 13:20:50 +01:00
kiriklo
82badc3dd5
app/vmselect/promql: improve performance of parseCache on systems with many CPU cores
### Describe Your Changes

The parse cache is a pretty simple cache implementation: just a standard
map with a mutex.
A map with a mutex has poor performance overall, and when a cache
overflow occurs, the whole cache stays locked until 1k elements have been
deleted (currently 10% of the 10000 max elements in the cache). To avoid
this bottleneck and improve cache performance on systems with many CPU
cores while keeping it rather simple, we can implement the cache with
per-bucket locks like it's done in fastcache. The logic and API remain the
same. Each bucket now has a map with approximately 78 elements (with 128
buckets), overflow occurs per bucket, and only about 7 elements need to be
deleted.
Because exec_test.go has about 10k lines of code, it's better to move
the cache into a separate file and add tests and benchmarks for it,
since it doesn't have them now.
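
A minimal sketch of the per-bucket locking idea (simplified; the real cache stores parsed expressions and performs per-bucket overflow cleanup):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const bucketsCount = 128

// parseBucket is a single shard of the cache protected by its own mutex.
type parseBucket struct {
	mu sync.Mutex
	m  map[string]string
}

// parseCache shards entries across buckets, so concurrent goroutines mostly
// lock different buckets instead of contending on a single global mutex.
type parseCache struct {
	buckets [bucketsCount]parseBucket
}

func newParseCache() *parseCache {
	var c parseCache
	for i := range c.buckets {
		c.buckets[i].m = make(map[string]string)
	}
	return &c
}

func (c *parseCache) bucket(key string) *parseBucket {
	h := fnv.New64a()
	h.Write([]byte(key))
	return &c.buckets[h.Sum64()%bucketsCount]
}

func (c *parseCache) Get(key string) (string, bool) {
	b := c.bucket(key)
	b.mu.Lock()
	defer b.mu.Unlock()
	v, ok := b.m[key]
	return v, ok
}

func (c *parseCache) Put(key, value string) {
	b := c.bucket(key)
	b.mu.Lock()
	defer b.mu.Unlock()
	// Overflow cleanup (dropping a fraction of the bucket) would happen here.
	b.m[key] = value
}

func main() {
	c := newParseCache()
	c.Put(`rate(http_requests_total[5m])`, "parsed expression placeholder")
	fmt.Println(c.Get(`rate(http_requests_total[5m])`))
}
```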

```
goos: windows
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql
cpu: 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz

Current cache implementation performance on 8 cores:
BenchmarkCachePutNoOverFlow-8               1932            618372 ns/op             253 B/op          0 allocs/op
BenchmarkCacheGetNoOverflow-8               6547            211527 ns/op               0 B/op          0 allocs/op
BenchmarkCachePutGetNoOverflow-8            1873            621718 ns/op             261 B/op          0 allocs/op
BenchmarkCachePutOverflow-8                 2262            464328 ns/op              32 B/op          0 allocs/op
BenchmarkCachePutGetOverflow-8              1764            655866 ns/op              38 B/op          0 allocs/op

New cache implementation performance on 8 cores:
BenchmarkCachePutNoOverFlow-8              10408            111412 ns/op               0 B/op          0 allocs/op
BenchmarkCacheGetNoOverflow-8              22407             52809 ns/op               0 B/op          0 allocs/op
BenchmarkCachePutGetNoOverflow-8            6583            168088 ns/op               0 B/op          0 allocs/op
BenchmarkCachePutOverflow-8                 9822            117212 ns/op               2 B/op          0 allocs/op
BenchmarkCachePutGetOverflow-8              6481            175952 ns/op               3 B/op          0 allocs/op

Current cache implementation performance on 16 cores:
BenchmarkCachePutNoOverFlow-16              2331            475307 ns/op             218 B/op          0 allocs/op
BenchmarkCacheGetNoOverflow-16              6069            196905 ns/op               0 B/op          0 allocs/op
BenchmarkCachePutGetNoOverflow-16           1870            644236 ns/op             262 B/op          0 allocs/op
BenchmarkCachePutOverflow-16                2296            509279 ns/op              34 B/op          0 allocs/op
BenchmarkCachePutGetOverflow-16             1726            671510 ns/op              45 B/op          0 allocs/op

New cache implementation performance on 16 cores:
BenchmarkCachePutNoOverFlow-16             13549             82413 ns/op               0 B/op          0 allocs/op
BenchmarkCacheGetNoOverflow-16             30274             38997 ns/op               0 B/op          0 allocs/op
BenchmarkCachePutGetNoOverflow-16           8512            126239 ns/op               0 B/op          0 allocs/op
BenchmarkCachePutOverflow-16               13884             88124 ns/op               1 B/op          0 allocs/op
BenchmarkCachePutGetOverflow-16             7903            131299 ns/op               3 B/op          0 allocs/op
```
From the benchmarks above, we can see that the new implementation is ~5
times faster than the old one.


---------
Co-authored-by: f41gh7 <nik@victoriametrics.com>
2025-01-02 17:43:23 +01:00
f41gh7
3237c64ef3
make vmui-update 2024-12-18 23:08:22 +01:00
Artem Fetishev
c6f6302ca4
Fix inconsistent treatment of millisecond-precision time for instant queries ()
### Describe Your Changes

This PR fixes . See points 6 and 7 in `Steps to reproduce`:

> Now let's set time to only 5ms past the timestamp of the first point,
since even 199ms worked for the second point. Surprise, the point isn't
returned 💥:
>
> ```curl -s $VMQURL -d 'query=series1' -d 'time=1707123456705' -d
'step=1ms' | grep 10 # nothing!```
>
> But, 4ms works: 🤨🤔
>
> ```curl -s $VMQURL -d 'query=series1' -d 'time=1707123456704' -d
'step=1ms' | grep 10 # found```

This happens because the actual step becomes 5ms due to jitter being
applied. The fix is to not apply jitter if the scrape interval was not
detected (the case when vmstorage returns only one result). In this case
the scrape interval is set to `5m+step`.
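
A minimal sketch of the fix, assuming simplified names (not the actual netstorage/promql code):

```go
package main

import (
	"fmt"
	"time"
)

// adjustScrapeInterval returns the scrape interval to use for an instant
// query. When the interval could not be detected (vmstorage returned a single
// sample), it falls back to 5m+step and, per the fix, no jitter is applied.
func adjustScrapeInterval(detected, step time.Duration) (scrapeInterval time.Duration, applyJitter bool) {
	if detected <= 0 {
		return 5*time.Minute + step, false
	}
	return detected, true
}

func main() {
	fmt.Println(adjustScrapeInterval(0, time.Millisecond))              // 5m0.001s false
	fmt.Println(adjustScrapeInterval(15*time.Second, time.Millisecond)) // 15s true
}
```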

An integration test has been added to check the steps to reproduce and
then to confirm that the fix works. Note that the cluster tests are
currently disabled because the fix is not in the cluster branch yet.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2024-12-16 13:24:52 +01:00
f41gh7
04d19a2200
make vmui-update 2024-12-13 12:01:03 +01:00
f41gh7
5bf2d6a689
make vmui-update 2024-11-29 17:43:55 +01:00
Yury Molodov
dec9a2f023
vmui: fix predefined panels
### Describe Your Changes

- Fixes the handling of the `showLegend` flag.  
- Fixes the handling of `alias`.  
- Adds support for alias templates, allowing dynamic substitutions like
`{{label_name}}`.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7565
2024-11-28 13:47:37 +01:00
Andrii Chubatiuk
9cfdbc582f
refactoring: changed prompb to prompbmarshal everywhere where internal series transformations are happening ()
### Describe Your Changes

Making similar changes to both vmagent and vminsert (like the one in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7399) ends up
with almost the same implementation in each of the packages instead of
having this shared code in one place. One of the reasons is the identical
Timeseries and Labels structures in the different prompb and prompbmarshal
packages. My proposal is to use structures from the prompb package only to
marshal/unmarshal sent/received data, and for internal transformations to
use only structures from the prompbmarshal package.

Another example where it can already help to simplify code is the streaming
aggregation pipeline for vmsingle (now it first marshals
prompb.Timeseries to storage.MetricRow, and then, if streaming aggregation
or deduplication is enabled, it unmarshals all the series back, but to
prompbmarshal.Timeseries).

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2024-11-26 12:45:17 +01:00
Nikolay
bb399518db
app/vmselect: properly return binary pow function result ()
Previously, for `^` (pow) function calls, VictoriaMetrics returned `1`
if the left arg was NaN. For example, the query `(hour()==2)^1` returned 1
for the NaN values produced by the `hour() == 2` expression. It added
additional non-existing data points to the timeseries.

This commit ports the bugfix from the `metricsql` package and adds a test
for it. Now VictoriaMetrics correctly returns `NaN` for such cases.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7359
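
A minimal sketch of the corrected behavior (the actual fix lives in the metricsql package):

```go
package main

import (
	"fmt"
	"math"
)

// binaryOpPow keeps NaN left arguments as NaN for any exponent (note that
// math.Pow(NaN, 0) would return 1), so filtered-out data points do not turn
// into extra data points.
func binaryOpPow(left, right float64) float64 {
	if math.IsNaN(left) {
		return math.NaN()
	}
	return math.Pow(left, right)
}

func main() {
	fmt.Println(binaryOpPow(math.NaN(), 1)) // NaN
	fmt.Println(binaryOpPow(2, 3))          // 8
}
```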

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2024-11-21 15:16:28 +01:00
f41gh7
d5f52adf3d
make vmui-update 2024-11-15 19:21:51 +01:00
Evgeniy Negriy
d27dfac5c6
app/vmselect: fixes graphite function transformRemoveEmptySeries
Previously it incorrectly applied xFilesFactor if its value was equal to 0.

This commit properly handles this case and returns the result according to
the graphite documentation:

`xFilesFactor follows the same semantics as in Whisper storage schemas. Setting it to 0 (the default) means that only a single value in the series needs to be non-null for it to be considered non-empty, setting it to 1 means that all values in the series must be non-null. A setting of 0.5 means that at least half the values in the series must be non-null.`
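
A minimal sketch of that rule, with hypothetical names (not the actual graphite module code):

```go
package main

import (
	"fmt"
	"math"
)

// isEmptySeries reports whether a series should be removed by
// removeEmptySeries for a given xFilesFactor. With xFilesFactor == 0 a single
// non-null value is enough to keep the series.
func isEmptySeries(values []float64, xFilesFactor float64) bool {
	if len(values) == 0 {
		return true
	}
	nonNull := 0
	for _, v := range values {
		if !math.IsNaN(v) {
			nonNull++
		}
	}
	if xFilesFactor <= 0 {
		return nonNull == 0
	}
	return float64(nonNull)/float64(len(values)) < xFilesFactor
}

func main() {
	series := []float64{math.NaN(), math.NaN(), 42}
	fmt.Println(isEmptySeries(series, 0))   // false: one non-null value is enough
	fmt.Println(isEmptySeries(series, 0.5)) // true: less than half of the values are non-null
}
```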

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Evgeniy Negriy <einegriy@avito.ru>
2024-11-06 17:35:59 +01:00
Andrii Chubatiuk
a88f896b43
promql: exclude limit_offset from default by-metric-name sorting ()
### Describe Your Changes

I don't like this solution, but it works. Other possible solutions are
described in the issue.

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7068

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2024-11-06 15:10:23 +01:00
Zakhar Bessarab
e8adbc9f09
app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2024-11-04 10:59:42 -03:00
Hui Wang
0172e65b8d
docs: clarify flags -search.maxxxDuration can only be overridden to a smaller value with `timeout` arg ()
2024-10-25 11:11:09 +02:00
hagen1778
a1882a84fb
app/vmui: add missing assets after a710d43a20
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-10-18 19:52:25 +02:00
hagen1778
a710d43a20
app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-10-18 14:26:47 +02:00
Zhu Jiekun
8c50c38a80
vmstorage: auto calculate maxUniqueTimeseries based on resources ()
### Describe Your Changes

Add support for
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6930

Calculate `-search.maxUniqueTimeseries` from
`-search.maxConcurrentRequests` and the remaining memory if it's **not set**
or **less than or equal to 0**.

The remaining memory is affected by `-memory.allowedPercent`,
`-memory.allowedBytes` and the cgroup memory limit.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>

(cherry picked from commit 85f60237e2)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-10-18 14:00:14 +02:00
Roman Khavronenko
05ac508fbf
lib/flagutil: rename Duration to RetentionDuration ()
The purpose of this change is to reduce confusion between using
`flag.Duration` and `flagutils.Duration`. The reason is that
`flagutils.Duration` was mistakenly used for cases that required `m`
support. See
ab0d31a7b0

The change in name should clearly indicate the purpose of this data
type.


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-10-17 13:47:48 +02:00
Roman Khavronenko
ebd393d8b3
app/vmselect/promql: fix seriesFetched update logic ()
### Describe Your Changes

evalInstantRollup could over-report the number of fetched series if the
`offset` checks resulted in a retry. This change updates the fetched
series counter only if these checks were successful.

It also adds a comment to another potential place of over-reporting
fetched series. It doesn't fix it, because that would require spending
extra resources on such a check, while a discrepancy in seriesFetched
doesn't affect calculations in any way.

Probably related to
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7170

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-10-07 14:27:50 +02:00
Artem Fetishev
85ea0f80fc
Change the default value of the maxDeleteSeries flag to 1 million ()
Change the default value of the maxDeleteSeries flag to 1 million. This
is a follow up for ed5da38ede
---

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2024-09-30 12:40:49 +02:00
Artem Fetishev
ed5da38ede
Introduce a flag for limiting the number of time series to delete ()
### Describe Your Changes

Introduce the `-search.maxDeleteSeries` flag that limits the number of
time series that can be deleted with a single
`/api/v1/admin/tsdb/delete_series` call.

Currently, any number of series can be deleted, and if the number is big
(millions), the operation may result in unaccounted CPU and memory usage
spikes, which in some cases may result in an OOM kill (see ). The flag
limits the number to 30k by default, and users may override it if needed
at vmstorage start time.


---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
2024-09-30 10:02:21 +02:00
Alexander Frolov
80a3c410d4
vmselect: ensure default -search.maxConcurrentRequests is non-decreasing ()
### Describe Your Changes

vmselect determines the default value of `-search.maxConcurrentRequests`
by multiplying the number of available CPUs by 2 if and only if that number
is small (to be precise, <= 4). That makes the default
`-search.maxConcurrentRequests` decrease at the boundary between these two
cases, as shown below:
| CPUs | MaxConcurrentRequests | MaxConcurrentRequests (original proposal) | MaxConcurrentRequests (updated proposal) |
|--------|--------|--------|--------|
| 1 | 2 | 2 | 2 |
| 2 | 4 (prev+2) | 4 (prev+2) | 4 (prev+2) |
| 3 | 6 (prev+2) | 6 (prev+2) | 6 (prev+2) |
| 4 | 8 (prev+2) | 8 (prev+2) | 8 (prev+2) |
| 5 | 5 __(prev-3)__ | 9 __(prev+1)__ | 10 __(prev+2)__ |
| 6 | 6 (prev+1) | 10 (prev+1) | 12 (prev+2) |
| 7 | 7 (prev+1) | 11 (prev+1) | 14 (prev+2) |
| 8 | 8 (prev+1) | 12 (prev+1) | 16 (prev+2) |

I propose to make the default value non-decreasing.
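
A minimal sketch of the non-decreasing default from the "updated proposal" column above (illustrative only):

```go
package main

import (
	"fmt"
	"runtime"
)

// defaultMaxConcurrentRequests returns a default that grows monotonically
// with the number of available CPUs (2*CPUs), instead of dropping from 8 to 5
// when going from 4 to 5 CPUs.
func defaultMaxConcurrentRequests(cpus int) int {
	return cpus * 2
}

func main() {
	fmt.Println(defaultMaxConcurrentRequests(runtime.NumCPU()))
}
```
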
2024-09-30 09:51:54 +02:00
Aliaksandr Valialkin
c8e23eefba
app/{vmselect,vlselect}: run make vmui-update vmui-logs-update after 25a9802ca4 and 8657d03433
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7088
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5924

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7025
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6545#issuecomment-2336805237
2024-09-27 13:50:47 +02:00
Aliaksandr Valialkin
3964889705
app/vmselect/promql: consistently replace NaN data points with non-NaN values for range_first and range_last functions
It is expected that the range_first and range_last functions return a
non-NaN const value across all the points if the original series contains
at least a single non-NaN value. Previously this rule was violated for NaN
data points in the original series, which could confuse users.

While at it, add tests for series with NaN values across all the range_*
and running_* functions, in order to maintain consistent handling of NaN
values across these functions.
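
A minimal sketch of the expected range_first behavior described above, assuming simplified types:

```go
package main

import (
	"fmt"
	"math"
)

// rangeFirst replaces every data point with the first non-NaN value on the
// selected range. If the series contains at least one non-NaN value, the
// result is a non-NaN constant across all points, including those that were
// NaN originally.
func rangeFirst(values []float64) []float64 {
	first := math.NaN()
	for _, v := range values {
		if !math.IsNaN(v) {
			first = v
			break
		}
	}
	result := make([]float64, len(values))
	for i := range result {
		result[i] = first
	}
	return result
}

func main() {
	fmt.Println(rangeFirst([]float64{math.NaN(), 3, 4, math.NaN()})) // [3 3 3 3]
}
```
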
2024-09-23 14:59:29 +02:00
hagen1778
c00b64726c
app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Executed after https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6972
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6900

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-09-19 15:39:40 +02:00
Aliaksandr Valialkin
b82e2cabc5
app/vmselect/promql: properly calculate c1 and c2 and c1 or c2 by upgrading github.com/VictoriaMetrics/metricsql to v0.79.0
The fix is in the https://github.com/VictoriaMetrics/metricsql/pull/34
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6637
2024-09-18 17:38:19 +02:00