metricsql: bump to v0.83.0
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7703
The update also returns an error if the metric name is specified twice in
the metrics selector.
For example, `foo{__name__="bar"}` is not allowed anymore. It would
successfully parse before this change, but it could never satisfy the search
filter anyway, so there was no sense in supporting it. This is why some test
cases were removed.
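For illustration, such selectors now fail at parse time (runnable sketch using the public metricsql API; the exact error text is not shown):

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/metricsql"
)

func main() {
	// Duplicate metric name: `foo` and __name__="bar" both set the name.
	_, err := metricsql.Parse(`foo{__name__="bar"}`)
	fmt.Println(err) // non-nil since v0.83.0; parsed successfully before
}
```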
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit fa2107bbec)
Mention explicitly that `remoteWrite.concurrency` depends on the number
of available CPU cores. Updated docs to remove the auto-printed default value.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8151
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 5561970db0)
### Describe Your Changes
Fixed an issue where dropdown menus were not visible in the Group View
settings.
Ref issue: #8153
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
At this time `bufferedwriter` [silently ignores connection close
errors](78eaa056c0/lib/bufferedwriter/bufferedwriter.go (L67)).
It may be very convenient in some situations (to avoid logging such
unimportant errors), but it's too implicit and unsafe for the others.
For example, if you close an [export
API](https://docs.victoriametrics.com/#how-to-export-time-series) client
connection in the middle of communication, VictoriaMetrics won't notice
it and will hog CPU by exporting all the data into nowhere until it has
processed all of it. A few client retries effectively become a DoS on the
server.
This commit replaces the implicit error suppression with explicit error
handling, which fixes the issue with the export API.
The issue was introduced at e78f3ac8ac
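A minimal sketch of the explicit handling (hypothetical helper; assumes the fmt and io imports):

```go
// exportBlocks streams blocks to w and stops on the first write error,
// instead of letting the writer swallow it.
func exportBlocks(w io.Writer, blocks [][]byte) error {
	for _, block := range blocks {
		if _, err := w.Write(block); err != nil {
			// The client is gone (e.g. closed the connection mid-export):
			// abort instead of burning CPU on writes into nowhere.
			return fmt.Errorf("cannot send exported block to the client: %w", err)
		}
	}
	return nil
}
```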
It is better to use the AppendFieldsToJSON function directly
instead of hiding it under the RowsFormatter abstraction.
(cherry picked from commit 95f182053b)
### Describe Your Changes
- Added the `fields_limit` parameter for the `hits` query to limit the
number of returned fields. This reduces the response size and decreases
memory usage.
#### Legend Menu:
A context menu has been added for legend items, which includes:
1. Metric name
2. **Copy _stream name** - copies the full stream name in the format
`{field1="value1", ..., fieldN="valueN"}`.
3. **Add _stream to filter** - adds the full stream value to the current
filter:
`_stream: {field1="value1", ..., fieldN="valueN"} AND (old_expr)`.
4. **Exclude _stream from filter** - excludes the stream from the
current filter:
`(NOT _stream: {field1="value1", ..., fieldN="valueN"}) AND (old_expr)`.
5. List of fields with options:
- Copy as `field: "value"`.
- Add to filter: `field: "value" AND (old_expr)`.
- Exclude from filter: `-field: "value" AND (old_expr)`.
6. Total number of hits for the stream.
Related issue: #7552
<details>
<summary>UI Demo - Legend Menu</summary>
<img width="400"
src="https://github.com/user-attachments/assets/ee1954b2-fdce-44b4-a2dc-aa73096a5414"/>
<img width="400"
src="https://github.com/user-attachments/assets/19d71f04-c207-4143-a176-c5f221592e3d"/>
</details>
---
#### Legend:
1. Displays the total number of hits for the stream.
2. Added hints below the legend for total hits and graph interactions.
3. Click behavior is now the same as in other vmui charts:
- `click` - shows only the selected series.
- `click + ctrl/cmd` - hides the selected series.
<details>
<summary>UI Demo - Legend</summary>
before:
<img
src="https://github.com/user-attachments/assets/18270842-0c39-4f63-bcda-da62e15c3c73"/>
after:
<img
src="https://github.com/user-attachments/assets/351cad3a-f763-4b1d-b3be-b569b5472a7c"/>
</details>
---
#### Tooltip:
1. The `other` label is moved to the end, and others are sorted by
value.
2. Values are aligned to the right.
3. Labels are truncated and always shown in a single line for better
readability; the full name is available in the legend.
<details>
<summary>UI Demo - Tooltip</summary>
| before | after |
|----------|----------|
| <img
src="https://github.com/user-attachments/assets/adccff38-e2e6-46e4-a69e-21381982489c"/>
| <img
src="https://github.com/user-attachments/assets/81008897-d816-4aed-92cb-749ea7e0ff1e"/>
|
</details>
---
#### Group View (tab Group):
Groups are now sorted by the number of records in descending order.
<details>
<summary>UI Demo - Group View</summary>
before:
<img width="800"
src="https://github.com/user-attachments/assets/15b4ca72-7e5d-421f-913b-c5ff22c340cb"/>
after:
<img width="800"
src="https://github.com/user-attachments/assets/32ff627b-6f30-4195-bfe7-8c9b4aa11f6b"/>
</details>
logger.Fatalf("BUG: ...") complicates investigating the bug, since it doesn't show the call stack
which led to the bug. So it is better to consistently use logger.Panicf("BUG: ...") for logging programming bugs.
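For illustration (the format arguments are made up):

```go
// Before: the process exits, but the call stack leading to the bug is lost.
logger.Fatalf("BUG: unexpected nil block at offset %d", offset)

// After: the panic carries the full call stack, which simplifies debugging.
logger.Panicf("BUG: unexpected nil block at offset %d", offset)
```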
Previously, NewChild elements of querytracer could be referenced by concurrent
storageNode goroutines. After an early return (if search.skipSlowReplicas is set),
tracer objects could still be in use by concurrent workers.
This may cause panics and data races. The most probable case is when the parent tracer
is finished, but children could still write data to it via the Donef() method. This
triggers a read-write data race during trace formatting.
This commit adds new methods to the querytracer package that allow creating
children not referenced by the parent and adding them to the parent later.
An orphaned child must be registered at the parent when the goroutine returns. This is done synchronously by the single caller via the finishQueryTracer call.
If the child hasn't finished its work and a reference to it is used by a concurrent goroutine, a new child must be created instead, with
a context message.
This prevents panics and possible data races.
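A sketch of the intended flow; `NewOrphanedChild` is an illustrative name, not the exact API added to lib/querytracer:

```go
// queryStorageNode runs in its own goroutine. The child tracer is not
// referenced by the parent, so the parent can be finished safely even if
// this goroutine outlives it (e.g. when -search.skipSlowReplicas is set).
func queryStorageNode(qt *querytracer.Tracer, addr string) *querytracer.Tracer {
	child := qt.NewOrphanedChild("query storage node %s", addr) // hypothetical method
	// ... perform the search, writing trace events via child ...
	child.Donef("done")
	return child // the single caller (finishQueryTracer) registers it at the parent
}
```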
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8114
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
### Describe Your Changes
Added saving column settings in the URL for the table view. See #7662
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit f0d55a1c25)
This commit fixes incorrect behaviour where pressing `Enter` did not execute the query and
`Shift+Enter` did not insert a new line.
- The issue occurred when autocomplete was disabled.
- This problem affected the query editor in both the VictoriaMetrics UI
and VictoriaLogs UI.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8058
(cherry picked from commit f31dece58d)
Initially, the delete_series API wasn't implemented for multitenant auth tokens.
This commit fixes it and properly handles delete series requests for multitenant auth tokens.
It also adds integration tests for this case.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8126
Introduced at v1.104.0 release:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1434
---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: f41gh7 <nik@victoriametrics.com>
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8072
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Also always initialize Query.timestamp with the timestamp from the lexer.
This should avoid potential problems with relative timestamps inside inner queries.
For example, the `_time:1h` filter in the following query is correctly executed
relative to the current timestamp:
foo:in(_time:1h | keep foo)
A sync.Pool for readTrackingBody was added in order to reduce potential
load on the garbage collector. But the Go net/http standard library does not
allow reusing the request body, since it closes the body asynchronously after
the handler returns. Related issue: https://github.com/golang/go/issues/51907
This commit removes the sync.Pool in order to fix a potential panic and data
race during request processing.
Affected releases:
- all releases after v1.97.7
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8051
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
(cherry picked from commit 80ead7cfa4)
### Describe Your Changes
optimize for
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6226
Users who set a `*AuthKey` flag will now receive a new response in the
body:
```
// query arg not set
The provided authKey '' doesn't match -search.resetCacheAuthKey
// incorrect query arg
The provided authKey '5dxd71hsz==' doesn't match -search.resetCacheAuthKey
```
previously, they received:
```
The provided authKey doesn't match -search.resetCacheAuthKey
```
(cherry picked from commit 1f0b03aebe)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
…e `/api/v1/admin/tsdb/delete_series` call
Previously, it was limited by `-search.maxQueryDuration`, which can be too
small for delete calls.
part of https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7857.
(cherry picked from commit 4574958e2e)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
`doInternal` has an adaptive staleness detection mechanism. It is
calculated using the timestamp distance between samples in the selected list of
samples. It is dynamic because VM can store signals from many sources
with different sample resolutions. And while it works for most cases,
there are edge cases for rollup functions that compare values
between windows: increase, increase_pure, delta.
Edge case 1.
There was a gap between series because of a missed scrape or two. In
this case staleness detection triggers, and increase-like functions assume
the value they need to compare with is 0. As a result, this could produce
spikes for flappy data - see
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/894. This
problem was solved by introducing a `realPrevValue` field -
1f19c167a4.
It stores the closest real sample value on selected interval and is used
for comparison when samples start after the gap.
Edge case 2.
`realPrevValue` doesn't respect the staleness interval. As a result, even if
the gap between samples is huge (hours), the increase-like functions will
not consider it a new series that started from 0. See
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8002.
Covering both edge cases is tricky, because `realPrevValue` has to
respect and not respect the staleness interval at the same time. In
other words, it should be able to ignore periodic missing gaps, but
reset if the gap is too big. While a "too big gap" can't be figured out
empirically, I suggest using `-search.maxStalenessInterval` for this
purpose. If `-search.maxStalenessInterval` is set to 0 (default), then
`realPrevValue` ignores the staleness interval. If
`-search.maxStalenessInterval` is > 0, then `realPrevValue` respects it
as a staleness interval.
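The resulting check boils down to something like the following sketch (names are illustrative, not the exact rollup code; assumes the time import):

```go
// pickRealPrevValue discards a realPrevValue candidate that is older than
// -search.maxStalenessInterval, so increase-like functions treat the series
// as freshly started (i.e. compare against 0) after a big enough gap.
func pickRealPrevValue(prevValue float64, prevTimestamp, currTimestamp int64, maxStalenessInterval time.Duration) (float64, bool) {
	if maxStalenessInterval > 0 && currTimestamp-prevTimestamp > maxStalenessInterval.Milliseconds() {
		return 0, false // the gap is too big - do not use the stale sample
	}
	return prevValue, true
}
```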
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 7d2a6764e7)
The new version has additional checks and reduced resource consumption, so
it doesn't time out on our internal repos.
To make the linter happy, I addressed the "redefinition of the built-in
function" lint error.
----
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Previously, columns with negative int64 values were stored either as float64 or string,
depending on whether the negative int64 values are bigger or smaller than -2^53.
If the integer values are smaller than -2^53, they were stored as strings, since float64 cannot
hold such values without precision loss. Now such values are stored as int64.
This should improve compression ratio and query performance over columns with negative int64 values.
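The precision loss is easy to demonstrate (runnable example):

```go
package main

import "fmt"

func main() {
	v := int64(-9007199254740993) // -2^53 - 1
	// float64 has a 53-bit significand, so integers beyond ±2^53
	// cannot all be represented exactly:
	fmt.Println(int64(float64(v))) // prints -9007199254740992
}
```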
1. Do not copy every line from LineReader.buf to LineReader.Line - just refer to the line inside LineReader.buf.
2. Do not copy the next found line to the beginning of LineReader.buf - just track the next line's start index via LineReader.bufOffset.
This reduces memory copying when many lines are read into LineReader.buf by a single read() syscall.
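A condensed sketch of the technique (illustrative; assumes the bytes import, and the real LineReader also handles buffer refill and line-length limits):

```go
// Line refers directly into buf instead of being copied out of it;
// bufOffset tracks where the next unread line starts, so already consumed
// bytes are not moved to the beginning of buf after every line.
type LineReader struct {
	Line      []byte // points into buf - valid until the next NextLine call
	buf       []byte
	bufOffset int
}

func (lr *LineReader) NextLine() bool {
	n := bytes.IndexByte(lr.buf[lr.bufOffset:], '\n')
	if n < 0 {
		return false // caller refills buf with a single read() syscall
	}
	lr.Line = lr.buf[lr.bufOffset : lr.bufOffset+n]
	lr.bufOffset += n + 1
	return true
}
```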
When vmselect processes a rollup function, it fetches all the raw samples
on the requested `start-end` interval of the query. It then loops through
the raw samples, picks ranges of samples based on the provided `step`
interval and invokes the rollup function for each of the picked ranges of
samples.
During this processing, vmselect always populates the `realPrevValue`
field with the closest previous raw sample value before the picked range
of samples. This `realPrevValue` is used by rollup functions like
increase_pure or delta to decide whether the counter change happened or
not. For example, we get the counter value == 1. If we've seen this
counter before and its value was also 1 - then no change happened. If we
didn't see it before, then this counter should have started with value=0
and we need to account for `1-0=1` change. All this is required to deal
with situations when scrapes are missing or `step` is too small.
However, vmselect doesn't check how "old" the `realPrevValue` is. In
other words, it doesn't respect the staleness interval when picking it.
As a result, depending on the `start` and `end` params, vmselect can use a
`realPrevValue` which is a couple of hours old, where the gap is unlikely
to be a temporary scrape failure. Consequently, some increases can be
incorrectly ignored by vmselect.
This change makes sure that vmselect doesn't populate `realPrevValue`
with samples that are older than the staleness interval.
-------------------
To reproduce, create a dataset with one metric `foo` which has samples
with value=1 over an interval of a couple of hours at 15s resolution, and a
gap of an hour in the middle:
<img width="769" alt="image"
src="https://github.com/user-attachments/assets/a39b2740-b741-45f8-ad18-093b7c57c3b3"
/>
Then run the `increase(foo[1m])` expression on this time range (with the
cache disabled):
<img width="1472" alt="image"
src="https://github.com/user-attachments/assets/463cece1-f359-4c75-a96c-60092a31cab2"
/>
As a result, there is one increase at the beginning of the series
and no increase after the gap. Then change the time range so it starts
in the middle of the gap:
<img width="1505" alt="image"
src="https://github.com/user-attachments/assets/f4a460c3-9fd1-4ec7-ab47-15e716ec1019"
/>
Now there is an increase>0, because the `realPrevValue` wasn't
populated. This is wrong, because it hides the increase of the series.
With the fix, the original increase query on the full time range shows
2 increases:
<img width="1492" alt="image"
src="https://github.com/user-attachments/assets/aa9d8a6b-7b22-41f6-9eb9-83b3113a6982"
/>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The hint allows choosing the type of cache to be used for index search:
- in-memory parts store recently ingested samples and should use the
main cache. This improves ingestion speed and cache hit ratio for
queries accessing recently ingested samples.
- merges of file parts are performed in the background, and using a separate
cache allows avoiding pollution of the main cache with irrelevant
entries.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7182
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
(cherry picked from commit e9f86af7f5)
This commit makes the interval for checking whether the final dedup
process for historical data should be started configurable. It allows
spreading resource utilisation across multiple vmstorage/vmsingle instances
in time, since final dedup may add additional pressure on disk and backup
systems and make the cluster less stable. The storage unconditionally adds
25% jitter to the provided value, which should simplify configuration
management in the Kubernetes ecosystem, because Kubernetes application pods
must have the same configuration.
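The jitter part can be sketched as follows (assumes the math/rand and time imports; the function name is illustrative):

```go
// dedupCheckInterval adds unconditional 25% jitter on top of the configured
// interval, so identically configured pods don't start final dedup at once.
func dedupCheckInterval(configured time.Duration) time.Duration {
	jitter := time.Duration(rand.Float64() * 0.25 * float64(configured))
	return configured + jitter
}
```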
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7880
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
(cherry picked from commit 9ada784983)
Commit c7fc0d0d2f enabled skipping alerts
in case there are no labels present for an alert. This made the clause
which was adding a comma to the JSON list incorrect, as it is not possible
to determine whether the next alert will be skipped or not.
This fix renders all alert labels in advance, allowing to properly format
the JSON payload for the Alertmanager notification.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7985
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
(cherry picked from commit 51b21dfd57)
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
### Describe Your Changes
Fixes an error in `vmauth` when discovering IPv6 addresses.
`vmauth` attempts to [slice till
`:`](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmauth/auth_config.go#L397)
in the discovered addresses without accounting for IPv6. This causes it
to fail in IPv6-only environments.
```sh
$ nslookup vmselect.ns.svc.cluster.local
...
Name: vmselect.ns.svc.cluster.local
Address: 2600:dead:beef:dead:beef::8
```
```sh
$ kubectl logs -f vmauth
...
error: dial tcp: lookup 2600: no such host
```
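The root cause and the standard-library fix, as a runnable sketch:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	addr := "2600:dead:beef:dead:beef::8"

	// Broken: the first ':' is part of the IPv6 address itself.
	fmt.Println(addr[:strings.IndexByte(addr, ':')]) // "2600" -> "lookup 2600: no such host"

	// Correct: build host:port with net.JoinHostPort, which brackets IPv6.
	fmt.Println(net.JoinHostPort(addr, "8481")) // "[2600:dead:beef:dead:beef::8]:8481"
}
```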
---------
Co-authored-by: f41gh7 <nik@victoriametrics.com>
(cherry picked from commit 77b0fcfdd9)
### Describe Your Changes
This pull request adds support for autocomplete in LogsQL queries. The
new feature provides suggestions for field names, field values, and pipe
names as you type.
---------
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
(cherry picked from commit ee7fe11fd2)
Since
44b071296d
the `evalNumber` function no longer updates MetricName tenancy information.
This leads to a mismatch in metric names between the query result and the
evaluated number for all tenants other than 0:0.
For example, the query `count(up) or 0` will return different results for
tenants 0:0 and 1:1 (assuming `up` is present for both tenants):
- tenant 0:0 - will only contain the result of `count(up)`
- tenant 1:1 - will return both `count(up)` and `0`, since the metric names
will not be matched
This restores setting tenancy information on the metric name for
single-tenant queries.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7987
---
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
### Describe Your Changes
Binary operations like `exprFirst op exprSecond` in VictoriaMetrics are
performed in the following way:
1. Execute exprFirst.
2. Extract **common label filters** from the result of step 1.
3. Apply these common label filters to `exprSecond` and execute it, in
order to retrieve fewer time series from vmstorage nodes.
In step 2, only labels with fewer than `100` (hard-coded) values could be
used as a **common label filter** (e.g. `{common_lb=~"v1|v2|...|v100"}`).
In our scenarios, a label - take the `instance` label as an example - could
have thousands of candidate values. Despite bringing more pressure to
vmstorage nodes, it's still beneficial if labels with more than 100
values could be used as a filter in `exprSecond`, given enough vmstorage
resources. After adjusting the value from `100` to `10000`, our query
round-trip time dropped significantly, from 5s to 2s.
This pull request changes the hard-coded value into a configurable flag.
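A sketch of the resulting knob; the flag name below is a hypothetical illustration, with the default mirroring the previous hard-coded constant:

```go
// Previously a hard-coded `100`; operators with enough vmstorage resources
// can now raise it to push more selective filters into exprSecond.
var maxCommonLabelFilterValues = flag.Int("search.maxValuesPerCommonLabelFilter", 100,
	"The maximum number of values per label that can be used as a common label filter "+
		"extracted from the first expression of a binary operation")
```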
The parse cache is a pretty simple cache implementation: a
standard map with a mutex.
A map with a mutex has poor performance overall; moreover, when a cache
overflow occurs, the whole cache locks until 1k elements have been
deleted (now it's 10% of the 10000 max elements in the cache). To avoid this
bottleneck and improve cache performance on systems with many CPU
cores, while keeping it rather simple, we can implement the cache with
per-bucket locks, as done in fastcache. The logic and API remain the same.
Now each bucket has a map with approximately 78 elements (with 128
buckets), overflow occurs per bucket, and only 7 elements need to be
deleted.
Because exec_test.go has about 10k lines of code, it's better to move
the cache into a separate file, so tests and benchmarks can be added for
it, since it currently has none.
```
goos: windows
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql
cpu: 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz
Current cache implementation performance on 8 cores:
BenchmarkCachePutNoOverFlow-8 1932 618372 ns/op 253 B/op 0 allocs/op
BenchmarkCacheGetNoOverflow-8 6547 211527 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutGetNoOverflow-8 1873 621718 ns/op 261 B/op 0 allocs/op
BenchmarkCachePutOverflow-8 2262 464328 ns/op 32 B/op 0 allocs/op
BenchmarkCachePutGetOverflow-8 1764 655866 ns/op 38 B/op 0 allocs/op
New cache implementation performance on 8 cores:
BenchmarkCachePutNoOverFlow-8 10408 111412 ns/op 0 B/op 0 allocs/op
BenchmarkCacheGetNoOverflow-8 22407 52809 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutGetNoOverflow-8 6583 168088 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutOverflow-8 9822 117212 ns/op 2 B/op 0 allocs/op
BenchmarkCachePutGetOverflow-8 6481 175952 ns/op 3 B/op 0 allocs/op
Current cache implementation performance on 16 cores:
BenchmarkCachePutNoOverFlow-16 2331 475307 ns/op 218 B/op 0 allocs/op
BenchmarkCacheGetNoOverflow-16 6069 196905 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutGetNoOverflow-16 1870 644236 ns/op 262 B/op 0 allocs/op
BenchmarkCachePutOverflow-16 2296 509279 ns/op 34 B/op 0 allocs/op
BenchmarkCachePutGetOverflow-16 1726 671510 ns/op 45 B/op 0 allocs/op
New cache implementation performance on 16 cores:
BenchmarkCachePutNoOverFlow-16 13549 82413 ns/op 0 B/op 0 allocs/op
BenchmarkCacheGetNoOverflow-16 30274 38997 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutGetNoOverflow-16 8512 126239 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutOverflow-16 13884 88124 ns/op 1 B/op 0 allocs/op
BenchmarkCachePutGetOverflow-16 7903 131299 ns/op 3 B/op 0 allocs/op
```
From the benchmarks above, we can see that the new implementation is ~5
times faster than the old one.
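The gist of the per-bucket locking, as a self-contained sketch (the real cache stores parsed expressions and performs per-bucket overflow cleanup):

```go
package parsecache

import (
	"sync"

	"github.com/cespare/xxhash/v2"
)

const bucketsCount = 128

type bucket struct {
	mu sync.Mutex
	m  map[string]string // the real cache stores parsed expressions
}

// Cache shards entries across buckets, so concurrent Put/Get calls contend
// only on 1/128th of the cache, and overflow cleanup locks a single bucket
// instead of the whole map.
type Cache struct {
	buckets [bucketsCount]bucket
}

func (c *Cache) bucketFor(key string) *bucket {
	return &c.buckets[xxhash.Sum64String(key)%bucketsCount]
}

func (c *Cache) Get(key string) (string, bool) {
	b := c.bucketFor(key)
	b.mu.Lock()
	defer b.mu.Unlock()
	v, ok := b.m[key]
	return v, ok
}
```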
---------
Co-authored-by: f41gh7 <nik@victoriametrics.com>
Previously, since the labels slice is reused for both `ALERTS` and
`ALERTS_FOR_STATE`, metrics might have incorrect labels, which affects the
restore process. The fix was tested under `TestAlertingRule_Exec:
"for-pending=>empty"`.
The bug was introduced in
282f13cf11.
Affected versions: v1.106.1, v1.107...v1.108.x
related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7796
Previously, vmstorage ignored limit values from the vmselect component.
This behavior has been prohibited starting from v1.105.0, with
85f60237e2.
This breaks the original intent of the -search.maxUniqueTimeseries command-line flag, which was added to vmselect nodes in commit b843f0e: to be able to override the default limit at vmstorage on the number of unique time series at different subsets of vmselect nodes.
The behavior should be the following:
* If the -search.maxUniqueTimeseries command-line flag isn't set at both vmselect and vmstorage nodes, then the limit on the number of unique time series must be automatically detected at vmstorage nodes (see "vmstorage: automatically adjust -search.maxUniqueTimeseries max value"). This simplifies the configuration of a VictoriaMetrics cluster for the typical case.
* If the -search.maxUniqueTimeseries command-line flag is explicitly set at a vmstorage node, then it must be used as the limit on the number of unique time series, without automatic detection of the limit. An explicitly set limit at a vmstorage node cannot be exceeded by the limit from vmselect nodes.
* If the -search.maxUniqueTimeseries command-line flag is explicitly set at a vmselect node, then it must override the automatically detected limit at the vmstorage node. For example, if a vmselect node provides a limit which exceeds the automatically detected limit at the vmstorage node, then the limit from the vmselect node must be applied during query execution at the vmstorage node. This allows properly executing reporting queries from such a subset of vmselect nodes.
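These precedence rules can be summarized in a sketch (hypothetical function; `autoDetectedLimit` stands for the automatic adjustment at vmstorage, 0 means "unset"):

```go
// effectiveMaxUniqueTimeseries resolves the limit applied at vmstorage
// for a single query, given the limit sent by vmselect.
func effectiveMaxUniqueTimeseries(vmselectLimit, vmstorageFlag, autoDetectedLimit int) int {
	if vmstorageFlag > 0 {
		// An explicit vmstorage limit is a hard cap - vmselect cannot exceed it.
		if vmselectLimit > 0 && vmselectLimit < vmstorageFlag {
			return vmselectLimit
		}
		return vmstorageFlag
	}
	if vmselectLimit > 0 {
		// vmselect overrides the automatically detected limit.
		return vmselectLimit
	}
	return autoDetectedLimit
}
```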
related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7852
### Describe Your Changes
Previously, vmctl expected that a tag must exist for each measurement, but
that's actually not necessary.
f16a58f14c/app/vmctl/influx/influx.go (L183-L186)
This pull request fixes it by removing the check. For the influx series
`measurement1_value1{}`, it will be represented as:
```go
Series{
Measurement: "measurement1",
Field: "value1",
LabelPairs: []LabelPair{},
EmptyTags: []string{},
}
```
and searched by the following query:
```sql
select "value1" from "measurement1"
```
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7921
Commit 71bb9fc0d0 introduced a regression.
If labels are empty and relabeling is not configured, the influx ingestion handler
performed an early exit due to the TryPrepareLabels call.
Due to micro-optimisations for this protocol, this check was not valid,
since it didn't take into account the metricName, which is added later, and skipped the metrics line.
This commit removes the `TryPrepareLabels` function call from this path and inlines it instead.
It properly tracks the empty-labels path.
It also adds an initial tests implementation for data ingestion protocols.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7933
Signed-off-by: f41gh7 <nik@victoriametrics.com>
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7761
### Describe Your Changes
- The Datadog /api/v2/logs API supports a message field in JSON format, which
is not documented and is used by the serverless extension. This PR allows the
message field to be of both string and object type. Also added support for
the undocumented timestamp field.
- Added `-datadog.streamFields` and `-datadog.ignoreFields` flags to
configure default stream fields for Datadog logs, where there's no
alternative option to pass extra headers and query args.
- Added ingesting the `max` and `min` values of data ingested via the
`datadogsketches` API, which is also actively used by serverless
extensions.
- Use the default `.` separator instead of `_` for sketches metric names
while metrics are not sanitized.
Historically, some VictoriaMetrics components were optimized for a low rate of memory allocations.
These are: vmagent, single-node VictoriaMetrics and vmstorage. These components benefit from a low
GOGC value, since this allows reducing their memory usage in steady state on typical workloads.
Other VictoriaMetrics components aren't optimized for a reduced rate of memory allocations.
This results in increased CPU usage spent on garbage collection (GC) in these components,
since it must be triggered at a higher rate. See https://tip.golang.org/doc/gc-guide#GOGC for details.
These components do not use too much memory, so it is OK to increase GOGC for them
from 30 to 100 - this won't affect most users.
Keep GOGC at 30 only for the vmagent, single-node VictoriaMetrics and vmstorage components.
See 077193d87c and 54b9e1d3cb.
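For reference, such a default can be applied at startup unless the user overrides it via the GOGC environment variable (sketch):

```go
package main

import (
	"os"
	"runtime/debug"
)

func main() {
	if os.Getenv("GOGC") == "" {
		// vmagent, single-node VictoriaMetrics and vmstorage keep GOGC=30;
		// other components now default to 100.
		debug.SetGCPercent(30)
	}
	// ... start the component ...
}
```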
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7902
vmauth started to use request.Host after commit
f4776fec1b for `src_hosts` routing rules.
This commit adds http.Request.Host to the debugInfo output in order to
be consistent with the routing logic.
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
A regression was introduced at 564e6ea024
after implementing:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6928
The ctx.Labels array could be incorrectly updated, and changes made to it
after relabeling rules could be lost.
E.g. ctx.Labels was passed to the WriteDataPoint function as a slice copy,
but the results of relabeling only changed the actual slice at ctx.Labels.
This commit replaces the implicit relabeling call with an explicit
`TryPrepareLabels` function call.
It also reduces code diffs with the cluster version and adds integration tests.
related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7865
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
While at it, allow passing an array of string values per each JSON entry at extra_filters and extra_stream_filters.
For example, `extra_filters={"foo":["bar","baz"]}` is converted into the `foo:in("bar", "baz")` extra filter,
while `extra_stream_filters={"foo":["bar","baz"]}` is converted into the `{foo=~"bar|baz"}` extra filter.
This should simplify creating faceted search when multiple values per a single log field must be selected.
This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7365#issuecomment-2447964259
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5542
Such log fields do not give any useful information during logs' exploration.
They just clutter the output of the `facets` pipe, so it is better to drop such fields by default.
If these fields are needed, then the `keep_const_fields` option can be added to the `facets` pipe.
This PR fixes #5796. See points 6 and 7 in `Steps to reproduce`:
> Now let's set time to only 5ms past the timestamp of the first point,
since even 199ms worked for the second point. Surprise, the point isn't
returned 💥:
>
> ```curl -s $VMQURL -d 'query=series1' -d 'time=1707123456705' -d
'step=1ms' | grep 10 # nothing!```
>
> But, 4ms works: 🤨🤔
>
> ```curl -s $VMQURL -d 'query=series1' -d 'time=1707123456704' -d
'step=1ms' | grep 10 # found```
This happens because the actual step becomes 5ms due to jitter being
applied. The fix is to not apply jitter if the scrape interval was not
detected (the case when vmstorage returns only one result). In this case
the scrape interval is set to `5m+step`.
An integration test has been added to check the steps to reproduce and
then to confirm that fix works. Note that the cluster tests are
currently disabled because the fix is not in cluster branch yet.
---------
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Previously, after a configuration reload, the `externalURL` templating function defined for external templates could be lost, since it was added only at the initial `Load` call and never copied during the template reload process.
External templates for vmalert can be defined via the `-rule.templates` flag.
This commit properly reloads external templates. It no longer copies mutated templates and instead fully reloads them each time there are any changes.
Previously, a cluster could have the following vmselect configuration:
./bin/vmselect
-storageNode=gr1/:8211,gr1/:8212
-storageNode=gr2/:8213,gr2/:8214
-search.skipSlowReplicas=true
-globalReplicationFactor=2
Here we have two vmstorage groups and -globalReplicationFactor=2, which effectively means that every ingested sample is replicated across multiple vmstorage groups. Hence, gr1 and gr2 contain identical data sets. And when we set -search.skipSlowReplicas=true, it is expected that vmselect returns the result as soon as at least one storage group has returned the full result.
In the current state, -search.skipSlowReplicas is ignored at the storage group level. It is only respected within a group (with the -replicationFactor flag).
This commit fixes global replication for skipSlowReplicas.
To ensure that the fix works and does not break
anything, replication tests have been added. For checking the fix for
skipping slow replicas, see `testGroupSkipSlowReplicas()`.
To emulate storage groups, the integration test creates a cluster with
multilevel vminsert. The L1 inserts are group-level inserts, each writes
to its own group of vmstorages. The L2 vminsert is a global vminsert
that writes replicated to the L1 vminserts.
To enable multilevel inserts, changes in the apptest framework and
`lib/ingestserver/clusternative/server.go` were necessary.
related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6924
---------
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
See https://alpinelinux.org/posts/Alpine-3.21.0-released.html
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 87c1b2de6f)
Previously, time series with labels exceeding the configured limits were truncated and written to storage, potentially causing data inconsistency. This could lead to collisions between time series and make it difficult to identify the source due to truncated labels.
This commit changes the behavior:
* Such time series are now rejected outright.
* Rejected time series are logged to stdout, and corresponding counters are incremented.
* removes the `vm_too_long_label_values_total`, `vm_too_long_label_names_total` and `vm_metrics_with_dropped_labels_total` metrics.
* adds the new values `too_many_labels`, `too_long_label_name` and `too_long_label_value` to the `reason` label of the `vm_rows_ignored_total` metric
related issues:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6928
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7661
The previous commit b09272ccac added a regression, which could lead to
template global state overwrites.
The issue is related to the mechanism of how `vmalert` inherits templates. It has global templates, which can be changed via the `-rule.templates` flag, and local templates defined per labels/annotations for rules and groups.
During labels/annotations templating, the state could be changed via the `define` syntax.
This commit restores the previous behavior with a `Clone` call for templates before templating labels/annotations.
Affected releases:
- 1.106.1
- v1.102.7
- v1.97.12
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6894
This commit adds the ability to launch vmauth without a configuration file,
which is a possible use case for operator-based installations.
The operator provides the global resource `VMAuth` and allows creating
`VMUser` objects for it. Eventually the operator creates the configuration for
`VMAuth` based on user-defined selectors for `VMUser`.
Since there are no direct relations between those objects, and any object
could be created on demand by Kubernetes users, it's required to be able to
start `vmauth` with an empty auth config file.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6467
The max_value_len query arg allows controlling the maximum length of values
per every log field. If the length is exceeded, then the log field is dropped
from the results, since it contains an incomplete (misleading) set of the most
frequently seen field values.
(cherry picked from commit 48540ac409)
This endpoint returns the most frequent values per each field seen in the selected logs.
This endpoint is going to be used by VictoriaLogs web UI for faceted search.
(cherry picked from commit 740548ccfc)
Requests processed by the built-in HTTP server have the [origin
form](https://datatracker.ietf.org/doc/html/rfc7230#section-5.3) rather
than the absolute form.
So in [Request.URL](https://pkg.go.dev/net/http#Request), fields other than
Path and RawQuery will be empty.
> // For server requests, the URL is parsed from the URI
> // supplied on the Request-Line as stored in RequestURI. For
> // most requests, fields other than Path and RawQuery will be
> // empty. (See RFC 7230, Section 5.3)
Using the `request.Host` field instead to match `src_hosts` fixes the issue and allows routing requests properly.
In addition, it allows users to route requests with a customized `Host` header.
The storage isn't designed to work efficiently with logs containing too many log fields.
It is better to emit a warning to the user and ignore such logs instead of trying to store them.
This will allow fixing the issue by the user ASAP, and won't lead to excess resource usage
at VictoriaLogs side, such as RAM, CPU, disk IO and disk space.
While at it, ignore too long logs with a size exceeding the maximum block size during data ingestion.
This should prevent possible issues when dealing with such long logs if they were stored in the storage.
A warning is emitted in this case, so the user can identify and fix the issue ASAP.
This is a follow-up for 22e6385f56
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7568
Previously, a too long line in the Elasticsearch bulk import protocol resulted in closing
the client stream and ignoring the rest of the log messages in the stream.
Now only the too long message is properly ignored, while the rest of the log messages
are read successfully.
This is a follow-up for 61e7c77ce25967269192ed2e201f67d8c48b972e
Previously, the vl_rows_ingested_total metric was tracked individually per each supported data ingestion protocol.
It is better from a maintainability PoV to track this metric consistently in a single place - in the logMessageProcessor.AddRow() function,
in the same way as the vl_bytes_ingested_total metric is tracked.
This is a follow-up for 50bfa689c9
This metric tracks the approximate number of bytes processed when parsing the ingested logs.
The metric is exposed individually per every supported data ingestion protocol. The protocol name
is exposed via the "type" label in order to be consistent with the vl_rows_ingested_total metric.
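A sketch of the consistent wiring with the github.com/VictoriaMetrics/metrics package (the counter names come from the text above; the helper itself is illustrative and assumes the fmt import):

```go
// Both counters share the "type" label, so per-protocol rates line up in
// queries like rate(vl_bytes_ingested_total{type="jsonline"}).
func ingestionCounters(protocol string) (rows, bytes *metrics.Counter) {
	rows = metrics.GetOrCreateCounter(fmt.Sprintf(`vl_rows_ingested_total{type=%q}`, protocol))
	bytes = metrics.GetOrCreateCounter(fmt.Sprintf(`vl_bytes_ingested_total{type=%q}`, protocol))
	return rows, bytes
}
```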
Thanks to @tenmozes for the initial idea and implementation at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7682
While at it, remove the unneeded "format" label from vl_rows_ingested_total metric.
The "type" label must be enough for encoding the data ingestion format.
Comment out broken tests for the remote_read integration test.
Prometheus broke library compatibility, so the tests must be rewritten.
Also, the test structure and format should be revisited and improved according to our test code style.
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Removes the global defaultAuthToken, since it's no longer needed.
It was added as a fallback for the 'remoteWrite.multitenantURL' feature,
which was deprecated at v1.102 and has been removed.
Updates the newRemoteWriteCtxs function so it no longer accepts an auth.Token.
This was also a part of the removed feature.
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Previously, vmagent produced a parsing error for the 'multitenant' auth token
value in the following cases:
* data ingestion with multitenant endpoints enabled
* data scraping at promscrape
This is inconsistent with the other VictoriaMetrics components, since
'multitenant' is a well-known token value for multitenancy via labels,
and vmagent is intended to be compatible with vminsert ingestion endpoints.
This commit replaces NewToken with the NewTokenPossibleMultitenant function
for token parsing. It allows using the multitenant value for the token and
makes token values consistent across all components.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7694
Previously, there was no option to replace the value of the `X-Forwarded-For`
HTTP header - it was only possible to completely remove it. That's not a good
solution, since the backend may require this information, while using the
direct value of this header is insecure and requires complex knowledge of the
infrastructure at the backend side (see articles on spoofing X-Forwarded-For).
This commit adds a new flag that replaces the content of the `X-Forwarded-For`
HTTP header value with the current `RemoteAddress` of the client that sends
the request.
It should be used if `vmauth` is directly exposed to the internet.
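A sketch of the replacement logic (the flag wiring is omitted; r.RemoteAddr is host:port, so the port is stripped first; assumes the net and net/http imports):

```go
// replaceXFF overwrites any client-supplied X-Forwarded-For value with the
// address of the TCP peer, so backends can't be fed a spoofed header.
func replaceXFF(r *http.Request) {
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		host = r.RemoteAddr
	}
	r.Header.Set("X-Forwarded-For", host)
}
```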
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6883
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
### Describe Your Changes
- Fixes the handling of the `showLegend` flag.
- Fixes the handling of `alias`.
- Adds support for alias templates, allowing dynamic substitutions like
`{{label_name}}`.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7565
### Describe Your Changes
- **Memory Optimization**: Reduced memory consumption on the "Group" and
"JSON" tabs by approximately 30%.
- **Table Pagination**: Added pagination to the "Table" view with an
option to select the number of rows displayed (from 10 to 1000 items per
page, with a default of 1000). This change significantly reduced memory
usage by approximately 75%.
Related to #7185
---------
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Doing similar changes for both vmagent and vminsert (like the one in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7399) ends up
with almost identical implementations in each package instead of having
this shared code in one place. One of the reasons is the same Timeseries
and Labels structures in the different prompb and prompbmarshal packages.
My proposal is to use structures from the prompb package only to
marshal/unmarshal sent/received data, and for internal transformations
to use only structures from the prompbmarshal package.
Another example where it can already help to simplify code is the streaming
aggregation pipeline for vmsingle (now it first marshals
prompb.Timeseries to storage.MetricRow, and then, if streaming aggregation
or deduplication is enabled, it unmarshals all the series back, but to
prompbmarshal.Timeseries).
Additional info from the dump can be used to debug routing rules.
https://pkg.go.dev/net/http/httputil#DumpRequest
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Previously, the multitenant cache was initialized before the flag.Parse call.
This didn't allow changing the cache expiration value, so the default value
was always used.
This commit initializes the cache the first time it is accessed.
It also adds small cache improvements:
* cache cleanup now uses the common pattern for in-place item filtering
* cache requests fail fast if the item has already expired
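Lazy initialization of this kind is commonly expressed with sync.Once, so the flag value is read only after flag.Parse has run (sketch with illustrative names):

```go
var (
	cacheOnce sync.Once
	cache     *tenantsCache // illustrative type name
)

// getCache initializes the cache on first use - i.e. after flag.Parse() -
// so the configured cache expiration flag value is actually honored.
func getCache() *tenantsCache {
	cacheOnce.Do(func() {
		cache = newTenantsCache(*cacheExpireDuration)
	})
	return cache
}
```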
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>