### Describe Your Changes
1. **Add new `Raw Query` tab**
A new `Raw Query` tab has been added to the
[vmui](https://docs.victoriametrics.com/#vmui) interface for displaying
raw data. The tab uses the `/api/v1/export` API endpoint. Related issue:
[#7024](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7024)
2. **Fix rendering of isolated points on the graph**
Previously, isolated points (not connected to other points on the left
or right) were not visible on the graph. Now, they are rendered
correctly.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to the [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Previously, for `^` aka the pow function, VictoriaMetrics returned `1`
if the left arg was NaN. For example, the query `(hour()==2)^1` returned 1
for the NaN values produced by the `hour() == 2` expression. This added
non-existent data points to the time series.
This commit ports the bugfix from the `metricsql` package and adds a test for it.
Now VictoriaMetrics correctly returns `NaN` in such cases.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7359
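A minimal Go sketch of the intended semantics (illustrative only; the actual fix lives in the vendored `metricsql` binary-operation code):
```go
package main

import (
	"fmt"
	"math"
)

// binaryOpPow sketches the fixed pow semantics: if either argument is NaN,
// the result must be NaN instead of a spurious numeric value such as 1.
func binaryOpPow(left, right float64) float64 {
	if math.IsNaN(left) || math.IsNaN(right) {
		return math.NaN()
	}
	return math.Pow(left, right)
}

func main() {
	nan := math.NaN()
	fmt.Println(binaryOpPow(nan, 1)) // NaN: the data point stays absent
	fmt.Println(binaryOpPow(2, 3))   // 8
}
```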
Signed-off-by: f41gh7 <nik@victoriametrics.com>
This commit makes vmauth respect the routing config for unauthorized
requests also for requests that carry an Authorization header but fail to
authorize successfully.
It covers the following use-cases:
- vmauth is used as a load balancer and must forward requests as is. There are no authorization configs.
- vmauth has an authorization config, but it must forward requests with invalid credential tokens to some other backend.
related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7543
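A rough Go sketch of the intended request flow (the types and variables here are simplified stand-ins, not the actual vmauth identifiers):
```go
package main

import (
	"fmt"
	"net/http"
)

// UserInfo is a stand-in for vmauth's per-user routing config.
type UserInfo struct{ Name string }

// users and unauthorizedUser are illustrative stand-ins for the parsed
// -auth.config contents.
var users = map[string]*UserInfo{"Bearer secret": {Name: "alice"}}
var unauthorizedUser = &UserInfo{Name: "unauthorized"}

// routeRequest decides which per-user config handles the request.
// Previously a request carrying an Authorization header that failed to
// match any configured user was rejected outright; now it falls back to
// the unauthorized-requests routing config, if one is defined.
func routeRequest(r *http.Request) (*UserInfo, error) {
	if token := r.Header.Get("Authorization"); token != "" {
		if ui := users[token]; ui != nil {
			return ui, nil // authorized as a known user
		}
	}
	if unauthorizedUser != nil {
		return unauthorizedUser, nil // forward with the unauthorized-requests config
	}
	return nil, fmt.Errorf("cannot authorize request")
}

func main() {
	r, _ := http.NewRequest("GET", "http://localhost/api/v1/query", nil)
	r.Header.Set("Authorization", "Bearer wrong")
	ui, _ := routeRequest(r)
	fmt.Println(ui.Name) // "unauthorized" instead of an auth error
}
```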
---------
Signed-off-by: Andrii <andriibeee@gmail.com>
1. Avoid storing the last evaluation results outside of rules; check for
stale time series as soon as possible.
2. Remove the duplicated template `Clone()`.
This pull request primarily reduces memory usage when rules produce
large volumes of results, as seen in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6894.
The CPU time spent on garbage collection remains high and may be
addressed in a separate PR.
The following user-level options must be unconditionally inherited by `url_map` entries, since this is what most users expect (see the sketch after this list):
- retry_status_codes
- load_balancing_policy
- drop_src_path_prefix_parts
- discover_backend_ips
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7519
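A rough Go sketch of the inheritance rule (the struct shapes are simplified stand-ins for vmauth's config types, not the real ones):
```go
package main

import "fmt"

// URLMap and UserInfo are simplified stand-ins for vmauth config sections.
type URLMap struct {
	RetryStatusCodes       []int
	LoadBalancingPolicy    string
	DropSrcPathPrefixParts *int
	DiscoverBackendIPs     *bool
}

type UserInfo struct {
	RetryStatusCodes       []int
	LoadBalancingPolicy    string
	DropSrcPathPrefixParts *int
	DiscoverBackendIPs     *bool
	URLMaps                []URLMap
}

// inheritUserOptions copies user-level options into every url_map entry
// that does not override them explicitly.
func inheritUserOptions(ui *UserInfo) {
	for i := range ui.URLMaps {
		um := &ui.URLMaps[i]
		if um.RetryStatusCodes == nil {
			um.RetryStatusCodes = ui.RetryStatusCodes
		}
		if um.LoadBalancingPolicy == "" {
			um.LoadBalancingPolicy = ui.LoadBalancingPolicy
		}
		if um.DropSrcPathPrefixParts == nil {
			um.DropSrcPathPrefixParts = ui.DropSrcPathPrefixParts
		}
		if um.DiscoverBackendIPs == nil {
			um.DiscoverBackendIPs = ui.DiscoverBackendIPs
		}
	}
}

func main() {
	n := 1
	ui := &UserInfo{
		RetryStatusCodes:       []int{500, 502},
		LoadBalancingPolicy:    "least_loaded",
		DropSrcPathPrefixParts: &n,
		URLMaps:                []URLMap{{}},
	}
	inheritUserOptions(ui)
	fmt.Println(ui.URLMaps[0].RetryStatusCodes) // [500 502]
}
```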
The fix at a0a154511a looks too complicated and fragile:
- It moves `buMin` initialization to a place far from its usage.
- It embeds unclear logic for selecting the proper `buMin` when it is broken
into an unrelated loop.
The actual fix should be clearer:
```
$ git diff 95acca6b52 -- app/vmauth/
- if n := bu.concurrentRequests.Load(); n < minRequests {
+ if n := bu.concurrentRequests.Load(); n < minRequests || buMin.isBroken() {
```
This should simplify further maintenance of this code.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7489
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3061
By default the delay equals 1 second.
While at it, document the `refresh_interval` query arg at the /select/logsql/tail endpoint.
Thanks to @Fusl for the idea and the initial implementation at https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7428
This eliminates possible bugs related to forgotten Query.Optimize() calls.
It also allows removing the optimize() function from the pipe interface.
While at it, drop filterNoop inside filterAnd.
Previously, vmauth could pick `buMin` as the least loaded backend
without checking its status. As a result, vmauth could respond to the
user with an error even if there were healthy backends. That could
happen if the healthy backends already had a non-zero number of concurrent
requests executing at the moment the least-loaded backend was chosen.
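A simplified Go sketch of the corrected least-loaded selection (the `backendURL` type here is a stand-in, not vmauth's actual struct):
```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// backendURL is a simplified stand-in for a vmauth backend entry.
type backendURL struct {
	addr               string
	brokenUntil        atomic.Int64 // unix nanos until which the backend is considered broken
	concurrentRequests atomic.Int32
}

func (bu *backendURL) isBroken() bool {
	return time.Now().UnixNano() < bu.brokenUntil.Load()
}

// leastLoaded picks the backend with the fewest concurrent requests,
// preferring healthy backends over broken ones. Previously a broken
// backend with zero in-flight requests could win over a healthy but busy
// one, because the candidate's status was not checked.
func leastLoaded(bus []*backendURL) *backendURL {
	var best *backendURL
	for _, bu := range bus {
		if bu.isBroken() {
			continue
		}
		if best == nil || bu.concurrentRequests.Load() < best.concurrentRequests.Load() {
			best = bu
		}
	}
	if best == nil && len(bus) > 0 {
		best = bus[0] // all backends are broken; fall back to any of them
	}
	return best
}

func main() {
	healthy := &backendURL{addr: "http://healthy:8428"}
	healthy.concurrentRequests.Store(5)
	broken := &backendURL{addr: "http://broken:8428"}
	broken.brokenUntil.Store(time.Now().Add(time.Minute).UnixNano())
	fmt.Println(leastLoaded([]*backendURL{broken, healthy}).addr) // http://healthy:8428
}
```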
Steps to reproduce:
1. Set up vmauth with two backends: one healthy and one non-healthy
2. Execute a bunch of concurrent requests against vmauth (e.g. a Grafana
dashboard reload)
3. Observe that some requests fail with a message that all backends
are unavailable
Addresses https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3061
---
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Add sorting of logs by groups, and within each group by time in descending
order. See #7184 and #7045.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to the [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Previously xFilesFactor was applied incorrectly if its value equaled 0.
This commit properly handles this case and returns the result according to
the Graphite documentation:
`xFilesFactor follows the same semantics as in Whisper storage schemas. Setting it to 0 (the default) means that only a single value in the series needs to be non-null for it to be considered non-empty, setting it to 1 means that all values in the series must be non-null. A setting of 0.5 means that at least half the values in the series must be non-null.`
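A small Go sketch of the corrected check (illustrative only; the actual Graphite render API code in VictoriaMetrics may structure this differently):
```go
package main

import (
	"fmt"
	"math"
)

// xFilesFactorPasses reports whether a series with the given values is
// considered non-empty for the provided xFilesFactor. With the default
// factor of 0, a single non-null (non-NaN) value is enough; with 1, all
// values must be non-null.
func xFilesFactorPasses(values []float64, xFilesFactor float64) bool {
	if len(values) == 0 {
		return false
	}
	nonNull := 0
	for _, v := range values {
		if !math.IsNaN(v) {
			nonNull++
		}
	}
	if xFilesFactor <= 0 {
		return nonNull > 0
	}
	return float64(nonNull)/float64(len(values)) >= xFilesFactor
}

func main() {
	vals := []float64{math.NaN(), math.NaN(), 42}
	fmt.Println(xFilesFactorPasses(vals, 0))   // true: one non-null value is enough
	fmt.Println(xFilesFactorPasses(vals, 0.5)) // false: only 1/3 of values are non-null
}
```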
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Evgeniy Negriy <einegriy@avito.ru>
The Loki protocol supports an optional `metadata` object for each ingested line. It is added as the 3rd field of the `(ts, msg, metadata)` tuple. Previously, the Loki request JSON parsers rejected a log line if the tuple size != 2.
This commit allows the optional 3rd tuple field. It is parsed as a JSON object and its entries are added as metadata fields to the ingested log message.
related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7431
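A minimal Go sketch of the relaxed tuple parsing (illustrative only; the actual VictoriaLogs parser is implemented differently):
```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseLokiLine parses a single (ts, msg[, metadata]) tuple from a Loki
// push request. The 3rd element is optional; when present it must be a
// JSON object whose entries become extra log fields.
func parseLokiLine(raw []byte) (ts, msg string, fields map[string]string, err error) {
	var tuple []json.RawMessage
	if err = json.Unmarshal(raw, &tuple); err != nil {
		return "", "", nil, err
	}
	if len(tuple) != 2 && len(tuple) != 3 {
		return "", "", nil, fmt.Errorf("unexpected tuple size %d; want 2 or 3", len(tuple))
	}
	if err = json.Unmarshal(tuple[0], &ts); err != nil {
		return "", "", nil, err
	}
	if err = json.Unmarshal(tuple[1], &msg); err != nil {
		return "", "", nil, err
	}
	if len(tuple) == 3 {
		if err = json.Unmarshal(tuple[2], &fields); err != nil {
			return "", "", nil, fmt.Errorf("cannot parse metadata object: %w", err)
		}
	}
	return ts, msg, fields, nil
}

func main() {
	line := []byte(`["1730000000000000000", "hello", {"trace_id": "abc123"}]`)
	ts, msg, fields, err := parseLokiLine(line)
	fmt.Println(ts, msg, fields, err)
}
```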
---------
Co-authored-by: f41gh7 <nik@victoriametrics.com>
### Describe Your Changes
I don't like this solution, but it works. Other possible solutions are
described in the issue.
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7068
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to the [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Fix https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7301
When querying with a condition like `WHERE a=1` (looking for series A),
InfluxDB can return data with the tag `a=1` (series A) and data with the
tags `a=1,b=1` (series B).
However, series B will be queried later and its data should not be
combined into series A's data.
This PR filters out those series that do not exactly match the original query
condition.
For table `example`:
```
// time host region value
// ---- ---- ------ -----
// 2024-10-25T02:12:13.469720983Z serverA us_west 0.64
// 2024-10-25T02:12:21.832755213Z serverA us_west 0.75
// 2024-10-25T02:12:32.351876479Z serverA 0.88
// 2024-10-25T02:12:37.766320484Z serverA 0.95
```
The query for series A (`example_value{host="serverA"}`) and its result will
be:
```SQL
SELECT * FROM example WHERE host = "serverA"
```
```json
{
  "results": [{
    "statement_id": 0,
    "series": [{
      "name": "cpu",
      "columns": ["time", "host", "region", "value"],
      "values": [
        ["2024-10-25T02:12:13.469720983Z", "serverA", "us_west", 0.64],
        ["2024-10-25T02:12:21.832755213Z", "serverA", "us_west", 0.75],
        ["2024-10-25T02:12:32.351876479Z", "serverA", null, 0.88],
        ["2024-10-25T02:12:37.766320484Z", "serverA", null, 0.95]
      ]
    }]
  }]
}
```
We need to abandon `values[0]` and `values[1]` because the value of the
**unwanted** column `region` is not null.
As for series B (`example_value{host="serverA", region="us_west"}`), no
change is needed since the query already filters out unwanted rows.
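A minimal Go sketch of the row filter described above (the function and parameter names are illustrative):
```go
package main

import "fmt"

// matchesSeries reports whether a returned row belongs exactly to the
// queried series: every tag column that is not part of the query
// condition must be null, otherwise the row belongs to a different
// series (series B in the example above) and must be skipped.
func matchesSeries(columns []string, row []interface{}, queried map[string]bool) bool {
	for i, col := range columns {
		if col == "time" || col == "value" {
			continue // not a tag column
		}
		if !queried[col] && row[i] != nil {
			return false // an unwanted tag is set; the row belongs to another series
		}
	}
	return true
}

func main() {
	columns := []string{"time", "host", "region", "value"}
	queried := map[string]bool{"host": true} // WHERE host = "serverA"
	rows := [][]interface{}{
		{"2024-10-25T02:12:13.469720983Z", "serverA", "us_west", 0.64}, // skipped
		{"2024-10-25T02:12:32.351876479Z", "serverA", nil, 0.88},       // kept
	}
	for _, row := range rows {
		fmt.Println(row[0], matchesSeries(columns, row, queried))
	}
}
```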
### Note
This is a draft PR for verifying the fix.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to the [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
This commit adds the following changes:
- Added support for pushing Datadog logs, with examples of how to ingest data
using Vector and Fluentbit
- Updated the VictoriaLogs examples directory structure to have a single
container image for VictoriaLogs and the agent (Fluentbit, Vector, etc.), but
multiple configurations for different protocols
Related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6632
This commit fixes the flaky test TestWriteRead/read/graphite/subquery-aggregation in app/victoria-metrics/main_test.go.
The test fails when the test execution falls on the first second of a minute,
for example 6:59:00. In all other cases (such as 6:59:01) the test passes.
The test fails because of the way VictoriaMetrics implements sub-queries: it
aligns the time range to the step. The test config does not account for this.
Assuming that the implementation is correct, the fix is to adjust the test
config so that the data is inserted at intervals other than 1m.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
In this case the _msg field is set to the value specified in the -defaultMsgValue command-line flag.
This should simplify first-time migration to VictoriaLogs from other systems.
This may be useful when ingesting logs from different sources, which store the log message in different fields.
For example, `_msg_field=message,event.data,some_field` will take the log message from the first non-empty field among
`message`, `event.data` and `some_field`.
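A minimal Go sketch of that selection logic (the helper name and map-based log entry are illustrative, not the actual VictoriaLogs types):
```go
package main

import (
	"fmt"
	"strings"
)

// pickMsgField returns the value of the first non-empty field listed in
// msgFields (the comma-separated value of the _msg_field query arg).
// If none of the fields is present, defaultMsgValue is used, mirroring
// the -defaultMsgValue command-line flag behavior described above.
func pickMsgField(logEntry map[string]string, msgFields, defaultMsgValue string) string {
	for _, f := range strings.Split(msgFields, ",") {
		if v := logEntry[strings.TrimSpace(f)]; v != "" {
			return v
		}
	}
	return defaultMsgValue
}

func main() {
	entry := map[string]string{"event.data": "disk is full"}
	fmt.Println(pickMsgField(entry, "message,event.data,some_field", "missing _msg field"))
	// Output: disk is full
}
```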
### Describe Your Changes
Fixes issues with incorrect updating of the query and limit fields, and
resolves the problem where the display tab resets.
Related issues: #7279 and #7290
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to the [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7182
- add a separate index cache for searches which might read through large
amounts of random entries. The primary use-case for this is retention and
downsampling filters: when applying filters, background merge needs to
fetch a large number of random entries, which pollutes the index cache.
Using different caches reduces the effect on memory usage and cache
efficiency of the main cache while still keeping a high cache hit rate. The
separate cache size is 5% of allowed memory (see the sketch after this list).
- reduce the size of the indexdb/dataBlocks cache in order to free memory for
the new sparse cache. Its size is reduced by 5% of allowed memory, which is moved to the separate cache.
- add a separate metricName search which does not cache metric names.
This is needed in order to allow disabling metric name caching when
applying downsampling/retention filters. Applying filters during
background merge accesses random entries; this fills up the cache and does
not provide an actual improvement due to the random access nature of these lookups.
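A hedged Go sketch of the cache-budget arithmetic described above (the helper functions are placeholders; only the 5% figure comes from this change):
```go
package main

import "fmt"

// allowedMemory stands in for VictoriaMetrics' allowed-memory accounting
// (the -memory.allowedPercent / -memory.allowedBytes budget).
func allowedMemory() int64 { return 8 << 30 } // assume an 8GiB budget for this example

// previousDataBlocksCacheSize is an illustrative placeholder for the old
// indexdb/dataBlocks cache sizing, not the real formula.
func previousDataBlocksCacheSize(mem int64) int64 { return mem / 4 }

func main() {
	mem := allowedMemory()

	// The new sparse index cache used for retention/downsampling filter
	// lookups gets 5% of allowed memory...
	sparseIndexCacheSize := mem / 20

	// ...and the indexdb/dataBlocks cache budget is reduced by the same 5%,
	// so the total cache memory budget stays unchanged.
	dataBlocksCacheSize := previousDataBlocksCacheSize(mem) - sparseIndexCacheSize

	fmt.Printf("sparse index cache: %d bytes\ndataBlocks cache: %d bytes\n",
		sparseIndexCacheSize, dataBlocksCacheSize)
}
```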
Merge performance and memory usage stats before and after the change:
- before
![image](https://github.com/user-attachments/assets/485fffbb-c225-47ae-b5c5-bc8a7c57b36e)
- after
![image](https://github.com/user-attachments/assets/f4ba3440-7c1c-4ec1-bc54-4d2ab431eef5)
---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7309
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to the [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>