# VictoriaLogs changelog
The following `tip` changes can be tested by building VictoriaLogs from the latest commit of the VictoriaMetrics repository according to these docs.
## tip
- FEATURE: add `-elasticsearch.version` command-line flag, which can be used for specifying the Elasticsearch version returned by VictoriaLogs to Filebeat at the Elasticsearch bulk API. This helps resolving this issue.
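  For example, a minimal sketch of starting VictoriaLogs with this flag (the binary name and version value below are illustrative):

  ```sh
  # Report Elasticsearch version 8.8.0 to Filebeat and other bulk-API clients
  ./victoria-logs-prod -elasticsearch.version=8.8.0
  ```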
- FEATURE: expose the following metrics at the /metrics page:
  - `vl_data_size_bytes{type="storage"}` - on-disk size for data excluding log stream indexes.
  - `vl_data_size_bytes{type="indexdb"}` - on-disk size for log stream indexes.
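  A quick way to inspect these metrics, assuming VictoriaLogs listens on the default localhost:9428 (adjust the address if `-httpListenAddr` is set differently):

  ```sh
  # Fetch the on-disk size metrics from the /metrics page
  curl -s http://localhost:9428/metrics | grep vl_data_size_bytes
  ```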
- FEATURE: add `-insert.maxFieldsPerLine` command-line flag, which can be used for limiting the number of fields per line in logs sent to VictoriaLogs via ingestion protocols. This helps to avoid issues like this.
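  For example, a sketch of capping ingested lines at 100 fields per line (the limit value is illustrative):

  ```sh
  # Accept at most 100 fields per ingested log line (illustrative value)
  ./victoria-logs-prod -insert.maxFieldsPerLine=100
  ```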
- FEATURE: expose `vl_http_request_duration_seconds` histogram at the /metrics page. Thanks to @crossoverJie for this pull request.
- FEATURE: add support of `-storage.minFreeDiskSpaceBytes` command-line flag to allow switching to read-only mode when running out of disk space at `-storageDataPath`. See this issue.
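  For example, a sketch of switching to read-only mode once less than roughly 10GB of free space remains at `-storageDataPath` (the threshold is illustrative and given as a plain number of bytes):

  ```sh
  # Switch to read-only mode when free disk space drops below ~10GB (illustrative)
  ./victoria-logs-prod -storage.minFreeDiskSpaceBytes=10000000000
  ```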
- BUGFIX: fix possible panic when no data is written to VictoriaLogs for a long time. See this issue. Thanks to @crossoverJie for filing and fixing the issue.
- BUGFIX: add `/insert/loki/ready` endpoint, which is used by Promtail for healthchecks. This should remove `unsupported path requested: /insert/loki/ready` warning logs. See this comment.
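  To verify the endpoint by hand, assuming VictoriaLogs listens on the default localhost:9428:

  ```sh
  # Expect an HTTP 200 response once VictoriaLogs is ready to accept data
  curl -i http://localhost:9428/insert/loki/ready
  ```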
- BUGFIX: prevent a panic during background merge when the number of columns in the resulting block exceeds the maximum allowed number of columns per block. See this issue.
## v0.3.0
Released at 2023-07-20
- FEATURE: add support for data ingestion via Promtail (aka the default log shipper for Grafana Loki). See these and these docs. A minimal push example is sketched below.
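A minimal sketch of pushing a single log line over the Loki JSON push protocol by hand, assuming VictoriaLogs listens on the default localhost:9428 and exposes the push endpoint at `/insert/loki/api/v1/push` (the label, message and timestamp below are illustrative):

```sh
# Push one log line in the Loki JSON push format (illustrative payload);
# the timestamp must be in nanoseconds, hence the appended zeros
curl -X POST http://localhost:9428/insert/loki/api/v1/push \
  -H 'Content-Type: application/json' \
  -d '{"streams":[{"stream":{"job":"test"},"values":[["'"$(date +%s)"'000000000","hello from the Loki push API"]]}]}'
```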
## v0.2.0
Released at 2023-07-17
- FEATURE: support short form of `_time` filters over the last X minutes/hours/days/etc. For example, `_time:5m` is a short form for `_time:(now-5m, now]`, which matches logs with timestamps for the last 5 minutes. See these docs for details (a query sketch follows this list).
- FEATURE: add ability to specify an offset for the selected time range. For example, `_time:5m offset 1h` is equivalent to `_time:(now-5m-1h, now-1h]`. See these docs for details.
- FEATURE: LogsQL: replace `exact_prefix("...")` with `exact("..."*)`. This makes it consistent with the `i()` filter, which can accept phrases and prefixes, e.g. `i("phrase")` and `i("phrase"*)`. See these docs.
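As an illustration of the short `_time` filter form, a hedged sketch of querying logs from the last 5 minutes over the HTTP query API, assuming VictoriaLogs listens on the default localhost:9428 and serves LogsQL queries at `/select/logsql/query`:

```sh
# Return logs from the last 5 minutes that contain the word "error"
curl -s http://localhost:9428/select/logsql/query \
  --data-urlencode 'query=_time:5m error'
```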
## v0.1.0
Released at 2023-06-21
Initial release