### Describe Your Changes
Clarify that extra resources are needed when downsampling with filter(s) or
retention filter(s) is applied.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
Binary operations like `exprFirst op exprSecond` in VictoriaMetrics are
performed in the following way:
1. Execute exprFirst.
2. Extract **common label filters** from the result of step 1.
3. Apply these common label filters to `exprSecond` and execute it, in
order to retrieve less time series from vmstorage nodes.
In step 2, only labels with fewer than `100` (hard-coded) values can be
used as a **common label filter** (e.g. `{common_lb=~"v1|v2|...|v100"}`).
In our scenarios, a label (take the `instance` label as an example) can
have thousands of candidate values. Although this puts more pressure on the
vmstorage nodes, it's still beneficial to use labels with more than 100
values as a filter in `exprSecond`, given enough vmstorage
resources. After adjusting the value from `100` to `10000`, our query
round-trip time dropped significantly from 5s to 2s.
This pull request changes the hard-coded value into a configurable flag.
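A minimal sketch of the change, with a hypothetical flag name and package (the actual flag name and default in the PR may differ):

```go
package promql

import "flag"

// maxCommonLabelValues replaces the previously hard-coded limit of 100.
// The flag name and its default value below are illustrative only.
var maxCommonLabelValues = flag.Int("search.maxCommonLabelValues", 100,
	"Maximum number of label values that may be used as a common label filter "+
		"when executing the second operand of a binary operation")

// canUseAsCommonFilter reports whether the given label values may be used
// as a common label filter for exprSecond.
func canUseAsCommonFilter(values []string) bool {
	// Previously: len(values) < 100 (hard-coded).
	return len(values) < *maxCommonLabelValues
}
```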
storageNode sorting should be marked as a BUGFIX, since previously vminsert performed the sorting and this behaviour was changed.
Also, this change only affects the OSS version.
### Describe Your Changes
The parse cache is a pretty simple cache implementation: just a standard
map with a mutex.
A map with a mutex has poor performance overall, and when cache overflow
occurs, the whole cache stays locked until 1k elements have been deleted
(currently 10% of the 10000 max elements in the cache). To avoid this
bottleneck and improve cache performance on systems with many CPU cores,
while keeping it rather simple, we can implement the cache with per-bucket
locks, like it's done in fastcache. The logic and API remain the same.
Now each bucket holds a map with approximately 78 elements (with 128
buckets), overflow occurs per bucket, and only 7 elements need to be
deleted.
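A rough sketch of the per-bucket locking idea, with hypothetical type and method names (the actual implementation in the PR stores parsed expressions and may differ in detail):

```go
package cache

import (
	"hash/fnv"
	"sync"
)

const (
	bucketsCount        = 128 // number of independent shards
	maxEntriesPerBucket = 78  // ~10000 total entries / 128 buckets
)

// bucket is a single shard protected by its own mutex.
type bucket struct {
	mu sync.Mutex
	m  map[string]string
}

// Cache shards entries across buckets, so concurrent goroutines
// rarely contend on the same lock.
type Cache struct {
	buckets [bucketsCount]bucket
}

func New() *Cache {
	var c Cache
	for i := range c.buckets {
		c.buckets[i].m = make(map[string]string)
	}
	return &c
}

func (c *Cache) bucketFor(key string) *bucket {
	h := fnv.New32a()
	h.Write([]byte(key))
	return &c.buckets[h.Sum32()%bucketsCount]
}

func (c *Cache) Put(key, value string) {
	b := c.bucketFor(key)
	b.mu.Lock()
	defer b.mu.Unlock()
	if len(b.m) >= maxEntriesPerBucket {
		// Overflow affects only this bucket: drop ~10% of its entries (7).
		toDelete := maxEntriesPerBucket / 10
		for k := range b.m {
			delete(b.m, k)
			toDelete--
			if toDelete <= 0 {
				break
			}
		}
	}
	b.m[key] = value
}

func (c *Cache) Get(key string) (string, bool) {
	b := c.bucketFor(key)
	b.mu.Lock()
	defer b.mu.Unlock()
	v, ok := b.m[key]
	return v, ok
}
```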
Because exec_test.go has about 10k lines of code, it's better to move
the cache into a separate file and add tests and benchmarks for it,
since it currently has none.
```
goos: windows
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql
cpu: 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz
Current cache implementation performance on 8 cores:
BenchmarkCachePutNoOverFlow-8 1932 618372 ns/op 253 B/op 0 allocs/op
BenchmarkCacheGetNoOverflow-8 6547 211527 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutGetNoOverflow-8 1873 621718 ns/op 261 B/op 0 allocs/op
BenchmarkCachePutOverflow-8 2262 464328 ns/op 32 B/op 0 allocs/op
BenchmarkCachePutGetOverflow-8 1764 655866 ns/op 38 B/op 0 allocs/op
New cache implementation performance on 8 cores:
BenchmarkCachePutNoOverFlow-8 10408 111412 ns/op 0 B/op 0 allocs/op
BenchmarkCacheGetNoOverflow-8 22407 52809 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutGetNoOverflow-8 6583 168088 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutOverflow-8 9822 117212 ns/op 2 B/op 0 allocs/op
BenchmarkCachePutGetOverflow-8 6481 175952 ns/op 3 B/op 0 allocs/op
Current cache implementation performance on 16 cores:
BenchmarkCachePutNoOverFlow-16 2331 475307 ns/op 218 B/op 0 allocs/op
BenchmarkCacheGetNoOverflow-16 6069 196905 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutGetNoOverflow-16 1870 644236 ns/op 262 B/op 0 allocs/op
BenchmarkCachePutOverflow-16 2296 509279 ns/op 34 B/op 0 allocs/op
BenchmarkCachePutGetOverflow-16 1726 671510 ns/op 45 B/op 0 allocs/op
New cache implementation performance on 16 cores:
BenchmarkCachePutNoOverFlow-16 13549 82413 ns/op 0 B/op 0 allocs/op
BenchmarkCacheGetNoOverflow-16 30274 38997 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutGetNoOverflow-16 8512 126239 ns/op 0 B/op 0 allocs/op
BenchmarkCachePutOverflow-16 13884 88124 ns/op 1 B/op 0 allocs/op
BenchmarkCachePutGetOverflow-16 7903 131299 ns/op 3 B/op 0 allocs/op
```
From the benchmarks above, we can see that the new implementation is ~5
times faster than the old one.
---------
Co-authored-by: f41gh7 <nik@victoriametrics.com>
### Describe Your Changes
In order for third-party tooling to identify the source repository of
VictoriaMetrics, add the org.opencontainers.image labels to the
Dockerfiles. This enables a whole suite of tools that scan container
images to further correlate data with the source code.
The lack of these annotations can be identified using docker:
```shell
docker pull victoriametrics/victoria-metrics
docker inspect victoriametrics/victoria-metrics
```
```jsonc
// ...
"Labels": null
// ...
```
If we try an image that has the annotations, we'll see more output.
```shell
docker pull traefik
docker image inspect traefik
```
```jsonc
// ...
"Labels": {
"org.opencontainers.image.description": "A modern reverse-proxy",
"org.opencontainers.image.documentation": "https://docs.traefik.io",
"org.opencontainers.image.source": "https://github.com/traefik/traefik",
"org.opencontainers.image.title": "Traefik",
"org.opencontainers.image.url": "https://traefik.io",
"org.opencontainers.image.vendor": "Traefik Labs",
"org.opencontainers.image.version": "v3.2.3"
}
// ...
```
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Consistently use `vmagent_remotewrite_pending_data_bytes` on the vmagent dashboard to represent the persistent queue size.
`vmagent_remotewrite_pending_data_bytes =
vm_persistentqueue_bytes_pending + pendingInmemoryBytes`
According to the panel description, `vmagent_remotewrite_pending_data_bytes`
is more accurate:
> Persistent queue size shows the size of pending samples in bytes which
haven't been flushed to remote storage yet.
And we already use `vmagent_remotewrite_pending_data_bytes` in two other
panels.
44d2205136/dashboards/vmagent.json (L7132)
- removed absolute paths so the check can run without docker
- set the cspell entrypoint to the default value
- set the cspell config path instead of copying and removing cspell.json
Previously, since the labels slice is reused for both `ALERTS` and
`ALERTS_FOR_STATE`, metrics might get incorrect labels, which affects the
restore process. The fix is tested under `TestAlertingRule_Exec:
"for-pending=>empty"`.
The bug was introduced in
282f13cf11.
Affected versions: v1.106.1, v1.107...v1.108.x
related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7796
In the Enterprise version of vmalert, `group` supports the `tenant` field.
The `tenant` field value must be added to the `datasource` URL as a path prefix.
But VictoriaLogs can obtain tenant information only from headers, so a defined `tenant` breaks requests to the VictoriaLogs datasource.
This commit properly checks `datasourceType` and skips adding the path prefix if `datasourceType` is `vlogs`.
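A minimal sketch of the check, with hypothetical function and parameter names (the actual vmalert code is organized differently):

```go
// tenantPathPrefix returns the URL path prefix for the given tenant.
// For the VictoriaLogs datasource the tenant is passed via headers,
// so no path prefix must be added.
func tenantPathPrefix(datasourceType, tenant string) string {
	if datasourceType == "vlogs" || tenant == "" {
		return ""
	}
	// For Prometheus-like datasources the tenant becomes part of the URL path.
	return "/" + tenant
}
```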
---------
Co-authored-by: Nikolay <nik@victoriametrics.com>
Previously, vmstorage ignored limit values from the vmselect component.
This behavior has been prohibited starting from v1.105.0, with
85f60237e2.
This breaks the original intent of the -search.maxUniqueTimeseries command-line flag, which was added at vmselect nodes in the commit b843f0e: to be able to override the default limit at vmstorage on the number of unique time series at different subsets of vmselect nodes.
The behavior should be the following (a sketch of the resulting resolution logic is given after this list):
* If the -search.maxUniqueTimeseries command-line flag isn't set at both vmselect and vmstorage nodes, then the limit on the number of unique time series must be automatically detected at vmstorage nodes, according to "vmstorage: automatically adjust -search.maxUniqueTimeseries max value". This simplifies configuration of a VictoriaMetrics cluster for the typical case.
* If the -search.maxUniqueTimeseries command-line flag is explicitly set at a vmstorage node, then it must be used as the limit on the number of unique time series, without automatic detection of the limit. The explicitly set limit at the vmstorage node cannot be exceeded by the limit from vmselect nodes.
* If the -search.maxUniqueTimeseries command-line flag is explicitly set at a vmselect node, then it must override the automatically detected limit at the vmstorage node. For example, if the vmselect node provides a limit which exceeds the automatically detected limit at the vmstorage node, then the limit from the vmselect node must be applied during query execution at the vmstorage node. This will allow properly executing the reporting queries described above from the subset of vmselect nodes.
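A sketch of the limit-resolution logic at the vmstorage side, assuming hypothetical function and parameter names (the actual code may differ):

```go
// effectiveMaxUniqueTimeseries returns the limit on unique time series
// for a single query at a vmstorage node.
//
// storageFlag  - the -search.maxUniqueTimeseries value at vmstorage (0 means "not set")
// selectLimit  - the limit passed by the vmselect node (0 means "not set")
// autoDetected - the limit derived automatically from the available vmstorage resources
func effectiveMaxUniqueTimeseries(storageFlag, selectLimit, autoDetected int) int {
	if storageFlag > 0 {
		// An explicit vmstorage limit wins; vmselect cannot exceed it.
		if selectLimit > 0 && selectLimit < storageFlag {
			return selectLimit
		}
		return storageFlag
	}
	if selectLimit > 0 {
		// An explicit vmselect limit overrides the automatically
		// detected limit, even if it exceeds that limit.
		return selectLimit
	}
	return autoDetected
}
```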
related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7852
### Describe Your Changes
Previously, vmctl expected that at least one tag must exist for each measurement, but
it's actually not necessary.
f16a58f14c/app/vmctl/influx/influx.go (L183-L186)
This pull request fixes it by removing the check. For the influx series
`measurement1_value1{}`, it will be represented as:
```go
Series{
	Measurement: "measurement1",
	Field:       "value1",
	LabelPairs:  []LabelPair{},
	EmptyTags:   []string{},
}
```
and searched by the following query:
```sql
select "value1" from "measurement1"
```
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7921
Commit 71bb9fc0d0 introduced a regression.
If labels are empty and relabeling is not configured, the influx ingestion handler
performed an early exit due to the TryPrepareLabels call.
Due to micro-optimisations for this protocol, this check was not valid,
since it didn't take into account metricName, which is added later, and it skipped the metrics line.
This commit removes the `TryPrepareLabels` function call from this path and inlines it instead.
It properly tracks the empty labels path.
It also adds an initial test implementation for the data ingestion protocols.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7933
Signed-off-by: f41gh7 <nik@victoriametrics.com>
### Describe Your Changes
Fixed typo in contributing.md (enterpriZe -> enterpriSe in the label
name)
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
Currently, if multiple msgFields are present in a log row, it's not
obvious which field is selected as the _msg field. With this PR, the order
of msg_field values defined either via headers or query arg params
defines the priority of these values.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7761
### Describe Your Changes
- The datadog /api/v2/logs API supports a message field in JSON format, which
is not documented and is used by the serverless extension. This PR allows the
message field to be of both string and object type. Also added support for the
undocumented timestamp field.
- Added `-datadog.streamFields` and `-datadog.ignoreFields` flags to
configure default stream fields for datadog logs, where there's no
alternative option to pass extra headers and query args.
- Added ingestion of `max` and `min` values for data ingested via the
`datadogsketches` API, which is also actively used by serverless
extensions.
- Use the default `.` separator instead of `_` for sketches metric names
as long as metric names are not sanitized.
This should prevent excess usage of CPU, RAM and other resources when too many logs
are passed to the 'stream_context' pipe.
It is expected that 'stream_context' pipe results are investigated by humans, who cannot inspect
the surrounding logs for millions of initial logs. That's why it is OK to limit the number of logs
and/or log streams which can be passed to the 'stream_context' pipe.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7766
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7903
While at it, reduce memory allocations at Storage.getFieldValuesNoHits and make it more scalable on multi-CPU systems.
This improves the performance of the in(<query>) filter when the <query> returns a big number of values.
Use a chunked allocator in order to reduce memory allocations. It allocates objects from slices of up to 64Kb in size.
This improves performance of the `stats` and `top` pipes by up to 2x when they are applied to a big number of `by (...)` groups.
Also parallelize the execution of the `count_uniq`, `count_uniq_hash` and `uniq_values` stats functions,
so they are executed faster on hosts with many CPU cores when applied to fields with a big number
of unique values.
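A simplified sketch of the chunked-allocator idea (the actual VictoriaLogs allocator hands out typed objects and differs in detail):

```go
// chunkedAllocator hands out small byte slices carved from large
// pre-allocated chunks, so the Go allocator and GC see far fewer objects.
type chunkedAllocator struct {
	chunk []byte
}

const chunkSize = 64 * 1024 // allocate backing memory in 64KiB chunks

// alloc returns a zeroed slice of n bytes carved from the current chunk.
func (a *chunkedAllocator) alloc(n int) []byte {
	if n > chunkSize {
		// Oversized requests fall back to a dedicated allocation.
		return make([]byte, n)
	}
	if len(a.chunk) < n {
		a.chunk = make([]byte, chunkSize)
	}
	// Cap the result at n, so later appends cannot alias the chunk.
	b := a.chunk[:n:n]
	a.chunk = a.chunk[n:]
	return b
}
```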
The commit 4599429f51 improperly set br.cs to nil,
while it should have set br.bs to nil instead. This resulted in excess memory allocations
at br.csInit() and br.csInitFast().
### Describe Your Changes
Added "deprecated from" and "available from" popups in the vmanomaly docs.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
- Adds headers to FAQ questions in vmalert for VictoriaLogs
- Adds FAQ for multitenant recording rules described in #7656
### Checklist
The following checks are **mandatory**:
- [X] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: Haley Wang <haley@victoriametrics.com>
Historically, some VictoriaMetrics components were optimized for a low rate of memory allocations.
These are: vmagent, single-node VictoriaMetrics and vmstorage. These components benefit from a low
GOGC value, since this allows reducing their memory usage in steady state on typical workloads.
Other VictoriaMetrics components aren't optimized for a reduced rate of memory allocations.
This results in increased CPU usage spent on garbage collection (GC) in these components,
since it must be triggered at a higher rate. See https://tip.golang.org/doc/gc-guide#GOGC for details.
These components do not use too much memory, so it is OK to increase GOGC for these components
from 30 to 100 - this won't affect most users.
Keep GOGC at 30 only for the vmagent, single-node VictoriaMetrics and vmstorage components.
See 077193d87c and 54b9e1d3cb .
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7902
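A sketch of how a component could apply a lower default GOGC while still honoring an explicit user setting (illustrative only; not the exact VictoriaMetrics startup code):

```go
package main

import (
	"os"
	"runtime/debug"
)

// applyDefaultGOGC lowers GOGC for allocation-optimized components
// (vmagent, single-node VictoriaMetrics, vmstorage) unless the user
// already set the GOGC environment variable explicitly.
func applyDefaultGOGC(defaultPercent int) {
	if os.Getenv("GOGC") != "" {
		return // respect the user-provided value
	}
	debug.SetGCPercent(defaultPercent)
}

func main() {
	applyDefaultGOGC(30)
	// ... start the component ...
}
```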
Numeric fields can be stored as const values in a block of logs. In this case the `sort` pipe
incorrectly compared such values as strings instead of numbers. This resulted in incorrect
sort results: for example, 123 was smaller than 2. Fix this by removing the incorrect case
for comparing const fields.
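A small illustration of the pitfall, showing how a string comparison orders numeric values incorrectly (a standalone example, not the actual LogsQL code):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	a, b := "123", "2"

	// Comparing as strings: "123" < "2", so 123 incorrectly sorts before 2.
	fmt.Println(a < b) // true

	// Comparing as numbers gives the expected order.
	fa, _ := strconv.ParseFloat(a, 64)
	fb, _ := strconv.ParseFloat(b, 64)
	fmt.Println(fa < fb) // false
}
```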
While at it, replace lessString() with strings.LessNatural() in sortBlockLess.
This improves sorting performance a bit, since the sortBlockLess function already tries
comparing numeric values, so it doesn't need to spend CPU time on such a comparison again inside the lessString() call.
The commit 42c9183281 incorrectly replaced strings.LessNatural() with lessString()
inside the sortBlockLess() function.