### Describe Your Changes
Add sorting of logs by group and, within each group, by time in descending order.
See #7184 and #7045.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to the [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
(cherry picked from commit 1e1952acf5)
The Loki protocol supports an optional `metadata` object for each ingested line. It is passed as the 3rd field of the `(ts, msg, metadata)` tuple. Previously, the Loki JSON request parser rejected a log line if the tuple size != 2.
This commit allows the optional 3rd tuple field: it is parsed as a JSON object, and its fields are added as metadata fields to the ingested log message.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7431
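For illustration, here is a minimal Go sketch of a Loki-compatible JSON push request that includes the optional third `metadata` element in the `(ts, msg, metadata)` tuple. The endpoint path, port, labels and metadata names are assumptions for this sketch, not taken from the commit.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// One stream with a single (ts, msg, metadata) tuple.
	// The third element is the optional metadata object that this commit starts accepting.
	ts := time.Now().UnixNano()
	payload := fmt.Sprintf(`{
  "streams": [
    {
      "stream": {"job": "demo", "env": "test"},
      "values": [
        ["%d", "user login failed", {"trace_id": "abc123", "user_id": "42"}]
      ]
    }
  ]
}`, ts)

	// Assumed VictoriaLogs Loki-compatible endpoint; adjust host/port to your setup.
	resp, err := http.Post("http://localhost:9428/insert/loki/api/v1/push",
		"application/json", bytes.NewBufferString(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```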
---------
Co-authored-by: f41gh7 <nik@victoriametrics.com>
(cherry picked from commit 3aeb1b96a2)
User experience suggests that examples shouldn't have `-rule.defaultRuleType=vlogs` set,
as it may confuse users who run vmalert with their existing rules or who only use
the example rules for testing purposes.
This change removes the confusion by dropping `-rule.defaultRuleType=vlogs`
from the default recommendations and explicitly specifying `type` at the group level in the examples.
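As a hypothetical illustration (the group name, rule and LogsQL expression below are made-up placeholders, not the actual examples), a vmalert groups file with `type: vlogs` set explicitly at the group level could look like the following, shown here as a Go string constant:

```go
package main

import "fmt"

// exampleGroups is a hypothetical vmalert groups file illustrating the change:
// `type: vlogs` is set explicitly at the group level, so the global
// -rule.defaultRuleType=vlogs flag is no longer needed. The group name, rule
// name and expression below are made-up placeholders.
const exampleGroups = `
groups:
  - name: vlogs-example
    type: vlogs        # explicit per-group type instead of -rule.defaultRuleType=vlogs
    interval: 1m
    rules:
      - alert: TooManyErrorLogs
        expr: 'error | stats count() as errors'
        annotations:
          summary: "More error logs than expected"
`

func main() {
	fmt.Print(exampleGroups)
}
```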
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit a5f1764171)
* Fix typos in the rules definitions; otherwise, they can't pass validation.
* Add code block types for the rendered examples.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to the [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 0390d58a34)
This commit adds the following changes:
- Added support for pushing DataDog logs, with examples of how to ingest data
  using Vector and Fluent Bit (a rough sketch of the ingestion path is shown below).
- Updated the VictoriaLogs examples directory structure to use a single
  container image for victorialogs and the agent (fluentbit, vector, etc.), with
  multiple configurations for different protocols.
Related issue: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6632
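The actual examples added by this commit are Vector and Fluent Bit configurations; purely as a rough illustration of the DataDog-compatible ingestion path, here is a hedged Go sketch. The endpoint path, port and payload fields follow common DataDog logs API v2 conventions and are assumptions here, not taken from the commit.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// A DataDog API v2 style logs payload: an array of log entries.
	payload := []byte(`[
  {
    "ddsource": "demo-app",
    "ddtags": "env:test,version:1.0",
    "hostname": "host-1",
    "service": "demo",
    "message": "user login failed"
  }
]`)

	// Assumed VictoriaLogs DataDog-compatible endpoint; adjust host/port to your setup.
	resp, err := http.Post("http://localhost:9428/insert/datadog/api/v2/logs",
		"application/json", bytes.NewBuffer(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```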
(cherry picked from commit e0930687f1)
### Describe Your Changes
Christmas came early and you get the first present in the shape of
spelling fixes.
Sorry for the large amount :)
### Checklist
- [x] My change adheres to the [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
(cherry picked from commit 2e8f420d84)
If the ingested log entry has an empty `_msg` field, it is set to the value specified in the `-defaultMsgValue` command-line flag.
This should simplify first-time migration to VictoriaLogs from other systems.
(cherry picked from commit 16ee470da6)
- Remove the leading whitespace from the first lines in the 'HTTP parameters' chapter.
  This whitespace isn't needed for the markdown formatting.
- Add leading whitespace to the second sentence in the list bullet describing the AccountID and ProjectID HTTP headers.
  This fixes the markdown formatting for this list bullet.
(cherry picked from commit 96466562b6)
This may be useful when ingesting logs from different sources, which store the log message in different fields.
For example, `_msg_field=message,event.data,some_field` takes the log message from the first non-empty field among
`message`, `event.data` and `some_field`.
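A minimal Go sketch of such an ingestion request, assuming the `/insert/jsonline` endpoint and made-up sample fields; only the comma-separated `_msg_field` query arg is the point being illustrated:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Two JSON log lines: the first stores the message in `message`,
	// the second in `event.data`. With _msg_field=message,event.data,some_field
	// the first non-empty field among these becomes the _msg field.
	lines := []byte(`{"message":"user logged in","host":"host-1"}
{"event":{"data":"disk is full"},"host":"host-2"}
`)

	// Assumed VictoriaLogs JSON lines endpoint; adjust host/port to your setup.
	url := "http://localhost:9428/insert/jsonline?_msg_field=message,event.data,some_field"
	// The content type below is an assumption; the _msg_field query arg is the relevant part.
	resp, err := http.Post(url, "application/stream+json", bytes.NewBuffer(lines))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```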
(cherry picked from commit ed73f8350b)
If the number of output (bloom, values) shards is zero, this may lead to a panic
as shown at https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7391 .
This panic may happen when parts containing only constant fields with distinct values are merged into
an output part with non-constant fields, which must be written to (bloom, values) shards.
(cherry picked from commit 102e9d4f4e)
This reduces the amount of data that must be read during queries over logs with a big number of fields (aka "wide events").
This, in turn, improves query performance when the data that needs to be scanned during the query doesn't fit in the OS page cache.
(cherry picked from commit 7a62eefa34)