Previously only logs inside the selected time range could be returned by the `stream_context` pipe.
For example, the following query could return up to 10 surrounding logs only for the last 5 minutes,
while most users expect it to return up to 10 surrounding logs without any restriction on the time range:
_time:5m panic | stream_context before 10
This enables implementing the stream context feature in the VictoriaLogs web UI: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7063 .
Reduce memory usage when returning stream context over big log streams with millions of entries.
The new logic scans all the log messages for the selected log stream, while keeping in memory only
the given number of surrounding logs. Previously all the logs for the given log stream over the selected time range
were loaded into memory before selecting the needed surrounding logs.
This should help https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6730 .
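A minimal sketch of the bounded-memory idea, with illustrative names rather than the actual VictoriaLogs internals: a fixed-size buffer of preceding rows is maintained while scanning the stream, so memory usage stays constant regardless of the stream size.
```go
package main

import "fmt"

// contextBuffer keeps at most `before` preceding rows while scanning a log
// stream, so memory stays bounded no matter how many entries the stream has.
type contextBuffer struct {
	before int
	rows   []string
}

func (cb *contextBuffer) push(row string) {
	cb.rows = append(cb.rows, row)
	if len(cb.rows) > cb.before {
		cb.rows = cb.rows[1:] // drop the oldest row
	}
}

func main() {
	cb := &contextBuffer{before: 2}
	for _, row := range []string{"a", "b", "c", "panic: oops", "d"} {
		if row == "panic: oops" {
			fmt.Println("context before the match:", cb.rows) // [b c]
		}
		cb.push(row)
	}
}
```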
Improve the scan performance for big log streams by fetching only the requested fields. For example, the following
query should be executed much faster than before if logs contain many fields other than _stream, _msg and _time:
panic | stream_context after 30 | fields _stream, _msg, _time
Use the local timezone of the host server in this case. The timezone can be overridden
with the TZ environment variable if needed.
While at it, allow using a whitespace instead of `T` as the delimiter between the date and the time
in the ingested `_time` field. For example, '2024-09-20 10:20:30' is now accepted
during data ingestion. This is a valid ISO 8601-style format (the `T` delimiter replaced with a space),
which is used by some log shippers, so it should be supported. It is also known as the SQL datetime format.
Also assume the local time zone when a time without timezone information is passed to querying APIs.
Previously such a time was parsed in the UTC timezone. Add `Z` to the end of the time string
if the old behaviour is preferred.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6721
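A rough sketch of the described parsing behaviour (this is not the actual VictoriaLogs parser, and the helper name is made up): timestamps with a space delimiter are normalized, and timestamps without timezone information fall back to the local timezone.
```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseTimestamp accepts both "2006-01-02T15:04:05..." and the SQL datetime
// form with a space delimiter. Timestamps without timezone information are
// parsed in the local timezone (honouring the TZ environment variable);
// appending "Z" forces UTC as before.
func parseTimestamp(s string) (time.Time, error) {
	s = strings.Replace(s, " ", "T", 1) // normalize the space delimiter
	if t, err := time.Parse(time.RFC3339, s); err == nil {
		return t, nil // timezone information was present
	}
	return time.ParseInLocation("2006-01-02T15:04:05", s, time.Local)
}

func main() {
	t, err := parseTimestamp("2024-09-20 10:20:30")
	fmt.Println(t, err)
}
```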
### Describe Your Changes
VictoriaLogs accepts logs without a `_msg` field, or with an empty `_msg` field.
This leads to incorrect search results. See:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6785
This pull request checks for a non-empty `_msg` field before a log entry is
added to `LogRows`.
A new counter `vl_rows_dropped_total{reason="msg_not_exist"}` is
introduced.
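A rough sketch of the check, using illustrative data structures rather than the actual `LogRows`/insertutils code:
```go
package main

import (
	"fmt"
	"strings"
)

// filterRows drops rows whose _msg field is missing or empty, counting the
// drops the way vl_rows_dropped_total{reason="msg_not_exist"} does.
func filterRows(rows []map[string]string) (kept []map[string]string, dropped int) {
	for _, row := range rows {
		if strings.TrimSpace(row["_msg"]) == "" {
			dropped++ // would also log the dropped line, as shown below
			continue
		}
		kept = append(kept, row)
	}
	return kept, dropped
}

func main() {
	rows := []map[string]string{
		{"_msg": "panic: oops"},
		{"Body": "message stored under the wrong field"},
	}
	kept, dropped := filterRows(rows)
	fmt.Println(len(kept), "kept,", dropped, "dropped")
}
```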
Example log output:
```
2024-09-23T02:33:19.719Z warn app/vlinsert/insertutils/common_params.go:189 dropping log line without _msg field; [{@timestamp 2024-09-18T13:42:16.600000000Z} {Attributes.array.attribute ["many","values"]} {Attributes.boolean.attribute true} {Attributes.double.attribute 637.704} {Attributes.int.attribute 10} {Attributes.map.attribute.some.map.key some value} {Attributes.string.attribute some string} {Body Example ddddddddddlog record} {Resource.service.name my.service} {Scope.my.scope.attribute some scope attribute} {Scope.name my.library} {Scope.version 1.0.0} {SeverityNumber 10} {SeverityText Information} {SpanId eee19b7ec3c1b174} {TraceFlags 0} {TraceId 5b8efff798038103d269b633813fc60c}]
```
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
- [ ] Benchmark for potential performance loss.
---------
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Previously the original timestamp was used in the copied query, so _time:duration filters
were applied to the original time range: (timestamp-duration ... timestamp]. This stopped
live tailing, since new logs have timestamps bigger than the original time range.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7028
This pipe is useful for debugging purposes when the number of processed blocks must be calculated for the given query:
<query> | blocks_count
This helps detect the root cause of query performance slowdowns in cases like https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7070
This improves performance for analytical queries, which do not need column headers metadata.
For example, the following query doesn't need column headers metadata, since _stream and min(_time)
are stored in the block header, which is read separately from the column headers metadata:
_time:1w | stats by (_stream) min(_time) min_time
This commit significantly improves the performance for this query.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7070
### Describe Your Changes
- Use a common compose.yaml file for all victorialogs setups, so the version is set
in a single place and can be overridden on demand for each agent and
protocol
- Replaced multiple victorialogs instances in the HA setup with a single setup
with the `deploy.replica` parameter set
- Added fluentd setup
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
(cherry picked from commit e75ae1b274)
* `remoteWrite.maxQueueSize` from `100_000` to `1_000_000`; this should
improve resiliency of recording rules that produce many series;
* `remoteWrite.maxBatchSize` from `1_000` to `10_000`; this should be
more efficient from the networking perspective;
* `remoteWrite.concurrency` from `1` to `4`; this should improve the speed
of sending the generated series.
The new settings should improve remote write performance of vmalert with
default settings.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
(cherry picked from commit 6b1b47df54)
Substitute the global streamTagsCache with a per-blockSearch cache for ((stream.id) -> (_stream value)) entries.
This improves scalability of obtaining _stream values on a machine with many CPU cores, since every CPU
has its own blockSearch instance.
This should also reduce memory usage when querying logs over a big number of streams, since the per-blockSearch
cache of ((stream.id) -> (_stream value)) entries is limited in size, and its lifetime is bounded by a single query.
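A minimal sketch of the idea, with illustrative names and a deliberately crude size cap rather than the actual blockSearch implementation:
```go
package main

import "fmt"

// streamCache maps stream.id -> _stream value for a single blockSearch.
// One instance per blockSearch avoids cross-CPU contention on a global map,
// and the size cap bounds memory for queries touching many streams.
type streamCache struct {
	m       map[uint64]string
	maxSize int
}

func newStreamCache(maxSize int) *streamCache {
	return &streamCache{m: make(map[uint64]string), maxSize: maxSize}
}

func (sc *streamCache) get(id uint64, load func(uint64) string) string {
	if v, ok := sc.m[id]; ok {
		return v
	}
	if len(sc.m) >= sc.maxSize {
		sc.m = make(map[uint64]string) // crude reset keeps the cache bounded
	}
	v := load(id)
	sc.m[id] = v
	return v
}

func main() {
	sc := newStreamCache(1000)
	v := sc.get(42, func(id uint64) string {
		return fmt.Sprintf("{stream_id=%d}", id) // stands in for the real lookup
	})
	fmt.Println(v)
}
```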
The change is supposed to add more practical recommendations and reflect
the real processes for maintaining the project.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The streamID.marshalString() function is executed in the hot path if the query selects the _stream_id field.
Command to run the benchmark:
go test ./lib/logstorage/ -run=NONE -bench=BenchmarkStreamIDMarshalString -benchtime=5s
Results before the commit:
BenchmarkStreamIDMarshalString-16 438480714 14.04 ns/op 71.23 MB/s 0 B/op 0 allocs/op
Results after the commit:
BenchmarkStreamIDMarshalString-16 982459660 6.049 ns/op 165.30 MB/s 0 B/op 0 allocs/op
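The commit itself contains the optimization; the following is only a plausible illustration of the kind of change that yields such numbers (hand-rolled hex encoding into a stack buffer instead of an allocating fmt-based path), not the actual marshalString code:
```go
package main

import (
	"encoding/hex"
	"fmt"
)

// marshalStreamID hex-encodes a 128-bit ID into a caller-provided buffer
// using fixed-size stack arrays, avoiding the allocations and reflection
// overhead of a fmt.Sprintf-based path.
func marshalStreamID(dst []byte, hi, lo uint64) []byte {
	var src [16]byte
	for i := 0; i < 8; i++ {
		src[i] = byte(hi >> (56 - 8*i))
		src[8+i] = byte(lo >> (56 - 8*i))
	}
	var buf [32]byte
	hex.Encode(buf[:], src[:])
	return append(dst, buf[:]...)
}

func main() {
	fmt.Println(string(marshalStreamID(nil, 0xdead, 0xbeef)))
}
```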
This test is very flaky and prevents other tests from running in CI.
Disabling this test should improve overall test quality, since it isn't reliable anyway.
There is a ticket to fix this test - https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7062
Once fixed, this test should be uncommented.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit c7569dac50)
### Describe Your Changes
It is better to show only unhealthy targets instead of all of them when
the `show_only_unhealthy` filter is enabled.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3536
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
(cherry picked from commit cbeb7d50e8)
### Describe Your Changes
New logos and usage guideline
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
(cherry picked from commit df665a13c9)
### Describe Your Changes
By default, the `elasticsearchexporter` in the OTel Collector puts the log
message under a field other than `_msg` (e.g., `Body`). Without
specifying the message field via an HTTP header, those logs may not be queried correctly.
See also:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6785.
This pull request updates the example configuration and notes for the
`elasticsearchexporter`.
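For reference, the same remapping can be expressed directly against the Elasticsearch-compatible ingest endpoint via the `VL-Msg-Field` HTTP header; a sketch in Go, with the URL and payload being illustrative:
```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Two bulk lines: an action line and a document whose message lives
	// under "Body", as elasticsearchexporter emits by default.
	body := bytes.NewBufferString(`{"create":{}}` + "\n" +
		`{"Body":"hello","@timestamp":"2024-09-20T10:20:30Z"}` + "\n")
	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:9428/insert/elasticsearch/_bulk", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// Tell VictoriaLogs which field holds the log message.
	req.Header.Set("VL-Msg-Field", "Body")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```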
### Checklist
The following checks are **mandatory**:
- [X] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
(cherry picked from commit fea4433362)
### Describe Your Changes
Updated grafana plugins to the latest releases
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
(cherry picked from commit 91b28d0527)
Rounding GOMAXPROCS to the upper integer value of cpuQuota increases the chances of CPU starvation,
non-optimal goroutine scheduling and additional CPU overhead related to context switching.
So it is better to round GOMAXPROCS to the lower integer value of cpuQuota.
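A minimal sketch of the resulting rounding rule (the cgroup quota detection itself is omitted):
```go
package main

import (
	"fmt"
	"math"
	"runtime"
)

// applyCPUQuota rounds the CPU quota down when setting GOMAXPROCS, so the
// Go scheduler never asks for more parallelism than the quota allows.
func applyCPUQuota(cpuQuota float64) {
	gomaxprocs := int(math.Floor(cpuQuota))
	if gomaxprocs < 1 {
		gomaxprocs = 1
	}
	runtime.GOMAXPROCS(gomaxprocs)
}

func main() {
	applyCPUQuota(1.5) // sets GOMAXPROCS=1 rather than 2
	fmt.Println(runtime.GOMAXPROCS(0))
}
```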
It is expected that the range_first and range_last functions return a non-NaN const value across all the points
if the original series contains at least a single non-NaN value. Previously this rule was violated for NaN data points
in the original series: for example, if the first point of a series was NaN, range_first returned NaN at that point
instead of the first non-NaN value. This could confuse users.
While at it, add tests for series with NaN values across all the range_* and running_* functions, in order to maintain
consistent handling of NaN values across these functions.
* lib/license: add support of license key hot-reload
* docs: add info about license key hot reload
---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
The previous behaviour was incorrect, since it is unexpected that -streamAggr.dedupInterval
and -remoteWrite.streamAggr.dedupInterval are applied to the processed samples only if -streamAggr.config isn't set.
This is a follow-up for d523015f27
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6711
### Describe Your Changes
Currently, if the metricID list is empty, it won't be marshalled and as a
result won't be put into the tagFiltersToMetricIDsCache, which causes
cache misses for the corresponding tagFilters. In some setups this
causes severe search speed degradation (see #7009).
The empty metricID list case was covered before, but the handling was accidentally
removed in 6c21439.
This PR restores the coverage of this case.
A new unit test can be used as proof that empty metricID lists were not
added to the cache (just remove the fix in index_db.go and run the test
to see the result).
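A minimal sketch of why the empty case is worth marshalling (the encoding here is illustrative, not the actual index_db.go format):
```go
package main

import (
	"encoding/binary"
	"fmt"
)

// marshalMetricIDs always emits at least the length prefix, so an empty
// metricID list still produces a non-empty value. This makes "no metrics
// match these tag filters" cacheable, turning repeated lookups into cache
// hits instead of index scans.
func marshalMetricIDs(dst []byte, metricIDs []uint64) []byte {
	dst = binary.AppendUvarint(dst, uint64(len(metricIDs)))
	for _, id := range metricIDs {
		dst = binary.AppendUvarint(dst, id)
	}
	return dst
}

func main() {
	fmt.Println(len(marshalMetricIDs(nil, nil))) // 1: cacheable even when empty
}
```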
Also a benchmark has been added to see the implications of the
compression.
```
user@laptop:~/p/github.com/rtm0/VictoriaMetrics/01/src$ go test ./lib/storage/ -run=NONE -bench BenchmarkMarshalUnmarshalMetricIDs --loggerLevel=ERROR
goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/storage
cpu: 13th Gen Intel(R) Core(TM) i7-1355U
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-0-12 3237240 363.5 ns/op 0 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-1-12 2831049 451.8 ns/op 0.4706 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-10-12 1152764 1009 ns/op 1.667 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-100-12 297055 3998 ns/op 5.755 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-1000-12 31172 34566 ns/op 8.484 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-10000-12 4900 289659 ns/op 9.416 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-100000-12 447 2341173 ns/op 9.456 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-1000000-12 42 24926928 ns/op 9.468 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-10000000-12 5 204098872 ns/op 9.467 compression-rate
PASS
ok github.com/VictoriaMetrics/VictoriaMetrics/lib/storage 15.018s
```
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: Artem Fetishev <wwctrsrx@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
Co-authored-by: Artem Navoiev <tenmozes@gmail.com>
The indexSearch.containsTimeRange() function is called for the current indexDB and the previous indexDB
every time metricIDs are searched by label filters. This function consumes a lot of additional CPU time
in cases when queries with lightweight label filters are sent to VictoriaMetrics at a high rate (e.g. thousands of RPS),
like in the issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7009 .
Optimize indexSearch.containsTimeRange() function in the following ways:
- Unconditionally return true if this function is called for the current indexDB, since there are very high
chances that the current indexDB contains the data with timestamps in the requested time range.
- Cache the minimum timestamp, which is missing in the indexed data for the previous indexDB.
This is safe to do, since the previous indexDB is readonly.
This optimization eliminates a potentially slow lookup in the previous indexDB for typical
use cases when the requested time range is close to the current time.
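A minimal sketch of the two shortcuts, with illustrative names and the actual index scan stubbed out:
```go
package main

import (
	"fmt"
	"sync/atomic"
)

type indexSearch struct {
	isCurrent bool
	// Minimum timestamp known to be missing from the previous (readonly)
	// indexDB. Safe to cache, since readonly data never changes.
	missingMinTimestamp atomic.Int64
}

func (is *indexSearch) containsTimeRange(minTimestamp int64) bool {
	if is.isCurrent {
		// The current indexDB almost certainly covers recent timestamps,
		// so skip the lookup entirely.
		return true
	}
	if m := is.missingMinTimestamp.Load(); m != 0 && minTimestamp >= m {
		return false // known-missing range; avoid the slow lookup
	}
	if !is.slowLookup(minTimestamp) {
		is.missingMinTimestamp.Store(minTimestamp)
		return false
	}
	return true
}

// slowLookup stands in for the actual scan of the index data.
func (is *indexSearch) slowLookup(minTimestamp int64) bool { return false }

func main() {
	is := &indexSearch{isCurrent: true}
	fmt.Println(is.containsTimeRange(1700000000)) // true without any lookup
}
```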
Previously indexDB.doExtDB() was returning a boolean value indicating whether the f callback was called.
There is no need to return this boolean value, since the f callback can determine on its own whether it was called.
This simplifies the code a bit.
While at it, document indexDB.doExtDB().