Since the `ParseDuration` and `ParseTimeMsec` functions are used in vlogs,
vmalert, victoriametrics and other components, importing promutils only
for these functions forces those components to export the irrelevant
`vm_rows_invalid_total{type="prometheus"}` metric.
This change removes the `vm_rows_invalid_total{type="prometheus"}` metric
from the /metrics page of these components.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 63f6ac3ff8)
For example, `field:~".+"`, `field:~".*"` or `field:""`
Replace such filters to faster ones. For example, `field:~".*"` is replaced with `*`,
while `field:~".+"` is replaced with `field:*`.
These filters can be used for selecting logs where one field value is smaller than another field value.
They complement the `<=` and `<` filters, which compare a field against constant literals.
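Conceptually, each such filter performs a per-entry comparison of two field values. A toy sketch of that check follows; whether values are compared numerically or lexicographically is an assumption made here, not taken from the actual implementation:

```go
import "strconv"

// lessThanField reports whether the value of one field is smaller than the
// value of another field within the same log entry. Values are compared as
// numbers when both parse as floats, and as strings otherwise (an assumption
// made only for this sketch).
func lessThanField(a, b string) bool {
	fa, errA := strconv.ParseFloat(a, 64)
	fb, errB := strconv.ParseFloat(b, 64)
	if errA == nil && errB == nil {
		return fa < fb
	}
	return a < b
}
```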
(cherry picked from commit 30974e7f3f)
This allows using different intervals for flushing in-memory data across different mergeset.Table instances.
The initial user of this feature is lib/logstorage.Storage, which explicitly passes Storage.flushInterval
to every created mergeset.Table instance. Previously mergeset.Table instances used a hardcoded 5-second
flush interval, which didn't depend on the Storage.flushInterval.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4775
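The shape of the change, as a rough sketch only (the field and method names below are hypothetical; the real mergeset API differs):

```go
import "time"

// Hypothetical sketch: the flush interval becomes a per-Table parameter
// instead of a package-level constant.
type Table struct {
	flushInterval time.Duration
	// ... other fields omitted ...
}

func (tb *Table) startInmemoryPartsFlusher() {
	d := tb.flushInterval
	if d <= 0 {
		d = 5 * time.Second // fall back to the previous hardcoded default
	}
	ticker := time.NewTicker(d)
	defer ticker.Stop()
	for range ticker.C {
		tb.flushInmemoryParts() // hypothetical flush helper
	}
}
```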
Make sure that the length of a log field name generated by JSONParser.ParseLogMessage
doesn't exceed the hardcoded maxFieldNameSize limit. Stop flattening nested JSON objects
when the resulting field name becomes longer than maxFieldNameSize, and return the nested JSON object
as a string instead.
This should prevent parse errors when ingesting deeply nested JSON logs with long field names.
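A rough sketch of flattening with such a length guard, assuming hypothetical helper names and an illustrative limit value (the actual JSONParser code differs in details):

```go
import (
	"encoding/json"
	"fmt"
)

const maxFieldNameSize = 128 // illustrative value; the real limit is defined in lib/logstorage

// flattenJSON turns nested objects into dot-separated field names, e.g.
// {"a":{"b":"c"}} becomes "a.b"="c". Once a generated name exceeds
// maxFieldNameSize, the remaining nested object is stored as a raw JSON
// string instead of being flattened further.
func flattenJSON(prefix string, obj map[string]any, out map[string]string) {
	for k, v := range obj {
		name := k
		if prefix != "" {
			name = prefix + "." + k
		}
		child, isObject := v.(map[string]any)
		if !isObject {
			out[name] = fmt.Sprint(v)
			continue
		}
		if len(name) > maxFieldNameSize {
			raw, _ := json.Marshal(child)
			out[name] = string(raw) // keep the nested object as a string
			continue
		}
		flattenJSON(name, child, out)
	}
}
```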
- `contains_any` selects logs with fields containing at least one word/phrase from the provided list.
The provided list can be generated by a subquery.
- `contains_all` selects logs with fields containing all the words and phrases from the provided list.
The provided list can be generated by a subquery. The matching semantics of both filters are sketched below.
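Stripped of the subquery machinery, the two filters reduce to the following matching check. This is a simplified sketch: plain substring search stands in for the real word-based matching:

```go
import "strings"

// containsAny reports whether the field value contains at least one of the
// given words or phrases; containsAll requires all of them to be present.
// Real matching works on tokenized words; strings.Contains is used here only
// to keep the sketch short.
func containsAny(fieldValue string, phrases []string) bool {
	for _, p := range phrases {
		if strings.Contains(fieldValue, p) {
			return true
		}
	}
	return false
}

func containsAll(fieldValue string, phrases []string) bool {
	for _, p := range phrases {
		if !strings.Contains(fieldValue, p) {
			return false
		}
	}
	return true
}
```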
The `by (...)` part can now be omitted in the `top`, `uniq` and `unroll` pipes. Examples:
- `top 5 x, y` is equivalent to `top 5 by (x, y)`
- `uniq foo, bar` is equivalent to `uniq by (foo, bar)`
- `unroll foo, bar` is equivalent to `unroll (foo, bar)`
The `_time:<=max_time` filter must include logs with timestamps matching max_time.
For example, `_time:<=2025-02-24Z` must include logs with timestamps until the end of February 24, 2025.
Examples:
`_time:>=2025-02-24Z` selects logs with timestamps greater than or equal to 2025-02-24 UTC.
`_time:>1d` selects logs with timestamps older than one day compared to the current time.
This simplifies writing queries with _time filters.
See https://docs.victoriametrics.com/victorialogs/logsql/#time-filter
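The inclusive upper bound means that a date-only value such as 2025-02-24Z has to be extended to the end of that day before comparison. A minimal sketch of the idea, assuming UTC and day granularity only:

```go
import "time"

// inclusiveUpperBound returns the last nanosecond covered by a date-only
// upper bound, so that _time:<=2025-02-24Z matches logs until the end of
// February 24, 2025.
func inclusiveUpperBound(day time.Time) time.Time {
	return day.Add(24*time.Hour - time.Nanosecond)
}

// Example: inclusiveUpperBound(time.Date(2025, 2, 24, 0, 0, 0, 0, time.UTC))
// returns 2025-02-24T23:59:59.999999999Z.
```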
Inside PARAM-VALUE, the characters '"' (ABNF %d34), '\' (ABNF %d92),
and ']' (ABNF %d93) MUST be escaped. This is necessary to avoid
parsing errors. Escaping ']' would not strictly be necessary but is
REQUIRED by this specification to avoid syslog application
implementation errors. Each of these three characters MUST be
escaped as '\"', '\\', and '\]' respectively. The backslash is used
for control character escaping for consistency with its use for
escaping in other parts of the syslog message as well as in traditional syslog.
Related RFC:
https://datatracker.ietf.org/doc/html/rfc5424#section-6.3.3
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8282
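For illustration, unescaping a PARAM-VALUE according to these rules can look like the following sketch (a hypothetical helper, not the exact VictoriaLogs syslog parser):

```go
import "strings"

// unescapeParamValue reverses the RFC 5424 PARAM-VALUE escaping rules:
// '\"' -> '"', '\\' -> '\' and '\]' -> ']'. Any other backslash sequence is
// left as-is, since RFC 5424 requires escaping only these three characters.
func unescapeParamValue(s string) string {
	if !strings.Contains(s, `\`) {
		return s // fast path: nothing to unescape
	}
	b := make([]byte, 0, len(s))
	for i := 0; i < len(s); i++ {
		c := s[i]
		if c == '\\' && i+1 < len(s) {
			switch s[i+1] {
			case '"', '\\', ']':
				b = append(b, s[i+1])
				i++
				continue
			}
		}
		b = append(b, c)
	}
	return string(b)
}
```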
A follow-up after 9bb5ba5d2f,
which impacted the compression ratio for data compressed with the native Go zstd library (`make test-pure`).
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 38bded4e58)
It turned out that these optimizations do not give measurable performance improvements,
while they complicate the code too much and may result in a slowdown when the ingested logs have
different sets of fields.
This is a follow-up for 630601488e
(cherry picked from commit dce5eb88d3)
It turned out that 256 files increase RAM usage too much compared to 128 files
when ingesting logs with hundreds of fields (aka wide events). So let's return to the 128-file
limit for now.
This is a follow-up for 9bb5ba5d2f
(cherry picked from commit a50ab10998)
This should improve query performance for logs with hundreds of fields (aka wide events).
Previously there was a high chance that the data for multiple log fields was stored in the same file.
This could result in query performance slowdown and/or increased disk read IO,
since the operating system could read unnecessary data for fields that aren't used in the query.
Now log fields are guaranteed to be stored in separate files until the number of fields exceeds 256.
After that, multiple log fields start sharing files.
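The assignment of fields to files can be pictured roughly like this. The round-robin sharing below is an assumption made for illustration; the actual on-disk layout code is different:

```go
const maxColumnFiles = 256 // per the description above; later reduced to 128

// columnFileIndex maps the i-th log field to the file its data is written to.
// Every field gets its own file while the field count stays within the limit;
// beyond that, fields share files (round-robin here, as an illustration).
func columnFileIndex(fieldIdx int) int {
	return fieldIdx % maxColumnFiles
}
```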
(cherry picked from commit 9bb5ba5d2f)
- Re-use column names and values from the previously added rows if possible.
This increases locality of reference for field names and values, and improves
access speed for them (see the interning sketch below).
- Postpone sorting fields in the added rows until an in-memory part is created from them.
This allows optimizing the sorting for rows with the same set of fields,
which is usually the case for logs belonging to the same log stream.
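The first item amounts to interning repeated field names (and values) across consecutive rows. A toy sketch of the idea, with hypothetical names:

```go
// rowsInserter is a hypothetical holder for the interning state; seenNames
// must be initialized with make(map[string]string) before use.
type rowsInserter struct {
	seenNames map[string]string
}

// internFieldName returns the previously seen copy of name when possible, so
// consecutive rows from the same log stream share the same string instances.
// This improves locality of reference when the rows are processed later.
func (ri *rowsInserter) internFieldName(name string) string {
	if prev, ok := ri.seenNames[name]; ok {
		return prev
	}
	ri.seenNames[name] = name
	return name
}
```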
(cherry picked from commit 630601488e)
This eliminates a class of potential bugs with incorrect stats calculations, which could occur
when an additional filter applied to the blockResult before passing it to the stats function
removes all the rows from the blockResult.
1. Use distinct code paths for blockResult.getValues() and blockResult.getValuesBucketed().
This should simplify debugging and maintenance of the resulting code.
2. Do not load column values if all the values in the block fit the same bucket.
Use blockResultColumn.minValue and blockResultColumn.maxValue for determining whether
column values must be loaded via blockResultColumn.getValuesEncoded().
This significantly improves performance for big buckets that cover all the column
values in a block.
3. Properly calculate buckets for negative values.
4. Properly align weekly buckets to the start of Monday (see the sketch below).
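Points 3 and 4 boil down to floor-based rounding and to offsetting weekly buckets so they start on Monday (the Unix epoch fell on a Thursday). A small self-contained sketch of that math, not the actual blockResult code:

```go
import (
	"math"
	"time"
)

// bucketStart returns the start of the bucket containing v. math.Floor keeps
// negative values in the correct bucket: with bucketSize=10, v=-5 belongs to
// [-10, 0) rather than to [0, 10).
func bucketStart(v, bucketSize float64) float64 {
	return math.Floor(v/bucketSize) * bucketSize
}

// floorDiv divides a by b rounding toward negative infinity, so timestamps
// before the Unix epoch also land in the correct bucket.
func floorDiv(a, b int64) int64 {
	q := a / b
	if a%b != 0 && (a < 0) != (b < 0) {
		q--
	}
	return q
}

// weekBucketStart aligns a timestamp to the start of its week (Monday 00:00
// UTC). The Unix epoch, 1970-01-01, was a Thursday, i.e. 3 days after a
// Monday, so the timestamp is shifted by 3 days before rounding down to a
// whole number of weeks.
func weekBucketStart(ts time.Time) time.Time {
	const week = 7 * 24 * time.Hour
	const mondayOffset = 3 * 24 * time.Hour
	n := ts.UnixNano() + int64(mondayOffset)
	start := floorDiv(n, int64(week))*int64(week) - int64(mondayOffset)
	return time.Unix(0, start).UTC()
}
```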
This improves performance for queries that use `sort by (...) limit N` without mentioning the _time field.
For example, the following query should work faster now:
_time:1d | rm _time | sort by (request_duration desc) limit 10
(cherry picked from commit 422caf6bd7)
- Add a fast path for timestamps ending with 'Z'
- Use strings.LastIndexAny instead of strings.IndexAny when searching
for the timezone offset at the end of the string. This works faster
for timestamps with sub-second precision (see the sketch below).
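A compact sketch of both points; the helper below is hypothetical and only shows the idea, not the actual timestamp parser:

```go
import "strings"

// timezoneOffsetIndex returns the index of the '+' or '-' starting the
// timezone offset in an RFC3339-like timestamp, or -1 if the timestamp ends
// with 'Z' (fast path) or carries no explicit offset. Searching from the end
// with strings.LastIndexAny skips the fractional-second digits quickly.
func timezoneOffsetIndex(s string) int {
	if strings.HasSuffix(s, "Z") {
		return -1 // fast path: UTC timestamp, no offset to parse
	}
	i := strings.LastIndexAny(s, "+-")
	// Guard against matching the '-' date separators when no offset is present.
	if i <= strings.IndexByte(s, 'T') {
		return -1
	}
	return i
}
```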
(cherry picked from commit 335071cf3d)