Samples parsing is a hot path. A bad client could easily overwhelm the receiver with bad or unsupported data, so it is better to throttle such messages.
Follow-up after b26a68641c
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Process values in batches instead of passing every value in the callback.
This improves performance of reading the encoded values from storage by up to 50%.
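A minimal sketch of the pattern under assumed callback signatures (the actual decoder API differs): invoke the callback once per batch of decoded values instead of once per value, so per-call overhead is amortized:

```go
package main

import "fmt"

// Before: one callback invocation per decoded value.
func forEachValue(values []int64, f func(v int64)) {
	for _, v := range values {
		f(v)
	}
}

// After: the callback is invoked once per batch, so per-call overhead
// (function call, captured-variable access) is paid once per batch.
func forEachValueBatch(values []int64, f func(batch []int64)) {
	const batchSize = 256
	for len(values) > 0 {
		n := len(values)
		if n > batchSize {
			n = batchSize
		}
		f(values[:n])
		values = values[n:]
	}
}

func main() {
	data := make([]int64, 1000)
	for i := range data {
		data[i] = int64(i)
	}
	var sum int64
	forEachValueBatch(data, func(batch []int64) {
		for _, v := range batch {
			sum += v
		}
	})
	fmt.Println(sum) // 499500
}
```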
Fix table rendering on the writer and scheduler pages.
Add the search page required for the docs site.
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
The new version has additional checks and reduced resource consumption, so it doesn't time out for our internal repos.
To make the linter happy, I addressed the "redefinition of the built-in function" lint error.
----
Signed-off-by: hagen1778 <roman@victoriametrics.com>
- url encoding / decoding with `<urlencode:field>` and `<urldecode:field>`
- base64 encoding / decoding with `<base64encode:field>` and `<base64decode:field>`
- hex encoding / decoding with `<hexencode:field>` and `<hexdecode:field>`
- hex encoding for integers with `<hexnumencode:field>` and `<hexnumdecode:field>`
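For reference, a minimal Go sketch of the transformations these placeholders are expected to perform, using standard-library semantics (the exact LogsQL behavior may differ in details such as base64 padding):

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"net/url"
	"strconv"
)

func main() {
	v := "a/b c?"

	// <urlencode:field> / <urldecode:field>
	enc := url.QueryEscape(v)
	dec, _ := url.QueryUnescape(enc)
	fmt.Println(enc, dec) // a%2Fb+c%3F a/b c?

	// <base64encode:field> / <base64decode:field>
	b64 := base64.StdEncoding.EncodeToString([]byte(v))
	raw, _ := base64.StdEncoding.DecodeString(b64)
	fmt.Println(b64, string(raw))

	// <hexencode:field> / <hexdecode:field>
	hx := hex.EncodeToString([]byte(v))
	bs, _ := hex.DecodeString(hx)
	fmt.Println(hx, string(bs))

	// <hexnumencode:field> / <hexnumdecode:field> for integers
	n := int64(12345)
	hn := strconv.FormatInt(n, 16)
	m, _ := strconv.ParseInt(hn, 16, 64)
	fmt.Println(hn, m) // 3039 12345
}
```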
This allows reducing the state of every statsProcessor by removing the pointer to the corresponding statsFunc. For example, this reduces the statsCountProcessor size by 2x.
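A hypothetical illustration of why dropping a single pointer field can halve a small struct (illustrative types, not the actual VictoriaLogs ones):

```go
package main

import (
	"fmt"
	"unsafe"
)

type statsCount struct{ fields []string }

// Before: the processor kept a pointer back to its statsFunc.
type statsCountProcessorBefore struct {
	sc        *statsCount // 8 bytes on 64-bit platforms
	rowsCount uint64      // 8 bytes
}

// After: the statsFunc is passed as a call argument where needed.
type statsCountProcessorAfter struct {
	rowsCount uint64
}

func main() {
	fmt.Println(unsafe.Sizeof(statsCountProcessorBefore{})) // 16
	fmt.Println(unsafe.Sizeof(statsCountProcessorAfter{}))  // 8
}
```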
Previously finalizeStats() for some functions such as count_uniq() could run for long periods of time after the query was canceled, since stopCh wasn't propagated to finalizeStats().
Previously integer values were converted to strings before being passed to the `updateState()` function at `count_uniq` and `count_uniq_hash`. Later such values were converted back to integers in order to track them via an integer map of unique values.
This commit avoids the int -> string -> int conversion: it passes integers directly to the integer map of unique values.
This improves the performance of the `count_uniq` and `count_uniq_hash` functions even further.
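A minimal sketch of the idea (illustrative types, not the actual implementation): track unique integers in a specialized integer map instead of round-tripping them through strings:

```go
package main

import (
	"fmt"
	"strconv"
)

type uniqTracker struct {
	ints    map[int64]struct{}  // fast path for integer values
	strings map[string]struct{} // fallback for everything else
}

func newUniqTracker() *uniqTracker {
	return &uniqTracker{
		ints:    make(map[int64]struct{}),
		strings: make(map[string]struct{}),
	}
}

// updateState records v without an int -> string -> int round-trip:
// values that parse as integers go straight into the integer map.
func (ut *uniqTracker) updateState(v string) {
	if n, err := strconv.ParseInt(v, 10, 64); err == nil {
		ut.ints[n] = struct{}{}
		return
	}
	ut.strings[v] = struct{}{}
}

func (ut *uniqTracker) count() int { return len(ut.ints) + len(ut.strings) }

func main() {
	ut := newUniqTracker()
	for _, v := range []string{"42", "42", "foo", "7"} {
		ut.updateState(v)
	}
	fmt.Println(ut.count()) // 3
}
```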
This filter can be used when debugging and exploring logs in order to better understand which value types are used for storing particular log fields.
The `value_type` filter complements the `block_stats` pipe.
Add `exclude google.golang.org/grpc/stats/opentelemetry v0.0.0-20240907200651-3ffb98b2c93a` to go.mod
according to https://github.com/googleapis/google-cloud-go/issues/11283#issuecomment-2558515586 .
This fixes the following strange issue on `make vendor-update`:

```
cloud.google.com/go/storage imports
	google.golang.org/grpc/stats/opentelemetry: ambiguous import: found package google.golang.org/grpc/stats/opentelemetry in multiple modules:
	google.golang.org/grpc v1.69.0 (/go/pkg/mod/google.golang.org/grpc@v1.69.0/stats/opentelemetry)
	google.golang.org/grpc/stats/opentelemetry v0.0.0-20240907200651-3ffb98b2c93a (/go/pkg/mod/google.golang.org/grpc/stats/opentelemetry@v0.0.0-20240907200651-3ffb98b2c93a)
```
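The corresponding go.mod directive is a single `exclude` line:

```
exclude google.golang.org/grpc/stats/opentelemetry v0.0.0-20240907200651-3ffb98b2c93a
```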
- Pass the calculated results to the next pipe in float64 columns (see the sketch below). Previously the results were converted to string columns, which could slow down further calculations.
- Use custom optimized logic for processing numeric columns passed to the math pipe. Previously all the input columns were converted to strings and then to float64 before the math pipe calculations.
- Initialize newly added columns at blockResult as soon as they are added. This improves performance when a large number of columns is calculated by the math pipe.
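A minimal sketch of the difference (illustrative code, not the actual blockResult API): keeping values in float64 columns end to end avoids a parse/format round-trip per row:

```go
package main

import (
	"fmt"
	"strconv"
)

// Slow path: values travel between pipes as strings.
func addColumnsViaStrings(a, b []string) []string {
	out := make([]string, len(a))
	for i := range a {
		x, _ := strconv.ParseFloat(a[i], 64)
		y, _ := strconv.ParseFloat(b[i], 64)
		out[i] = strconv.FormatFloat(x+y, 'f', -1, 64)
	}
	return out
}

// Fast path: values stay in float64 columns across pipes.
func addColumns(a, b []float64) []float64 {
	out := make([]float64, len(a))
	for i := range a {
		out[i] = a[i] + b[i]
	}
	return out
}

func main() {
	fmt.Println(addColumnsViaStrings([]string{"1", "2"}, []string{"3", "4"})) // [4 6]
	fmt.Println(addColumns([]float64{1, 2}, []float64{3, 4}))                // [4 6]
}
```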
Previously integer values were tracked in string maps. Now every input value is parsed as an integer. On success the parsed integer is tracked via specialized maps, which hold only integers. This reduces CPU and memory usage in the general case.
Use the column name attached to the corresponding part. The lifetime of this column name exceeds the blockSearch lifetime, so it is safe to use it here.
This is a follow-up for 8d968acd0a
Previously columns with negative int64 values were stored either as float64 or string, depending on whether the negative int64 values were bigger or smaller than -2^53. If the integer values were smaller than -2^53, they were stored as strings, since float64 cannot hold such values without precision loss. Now such values are stored as int64.
This should improve compression ratio and query performance over columns with negative int64 values.
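A quick demonstration of why -2^53 is the boundary: float64 has a 53-bit significand, so integers beyond that magnitude are no longer exactly representable:

```go
package main

import "fmt"

func main() {
	v := int64(-1) << 53 // -9007199254740992, i.e. -2^53

	// -2^53 itself survives a round-trip through float64...
	fmt.Println(int64(float64(v)) == v) // true

	// ...but the next smaller integer does not: it rounds to a neighbor.
	w := v - 1 // -9007199254740993
	fmt.Println(int64(float64(w)) == w) // false: precision lost
}
```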
Previously field values could be automatically converted to float64 with precision loss. This could lead to unexpected results when querying such field values. For example, "10007199254740992" was incorrectly represented as 10007199254740993. This commit prevents such lossy conversions when storing field values.
While at it, prevent int64 overflow at the tryParseBytes and tryParseDuration functions, which are used for parsing byte-size and duration constants in queries. Now these functions return 1<<63-1 (the maximum int64 value) for constants exceeding this value. Previously they could return arbitrary garbage for such constants.
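A sketch of the overflow-safe scaling such parsers need (hypothetical helper, not the actual tryParseBytes code): when multiplying the parsed number by a unit multiplier, clamp to the maximum int64 value instead of letting the product wrap around:

```go
package main

import (
	"fmt"
	"math"
)

// mulClamp returns n*multiplier, clamped to math.MaxInt64 on overflow.
// It assumes n >= 0 and multiplier > 0, as in byte-size/duration parsing.
func mulClamp(n, multiplier int64) int64 {
	if n > math.MaxInt64/multiplier {
		return math.MaxInt64
	}
	return n * multiplier
}

func main() {
	fmt.Println(mulClamp(4, 1024))               // 4096
	fmt.Println(mulClamp(math.MaxInt64/2, 1024)) // 9223372036854775807 (clamped)
}
```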
1. Do not copy every line from LineReader.buf to LineReader.Line - just reference the line inside LineReader.buf.
2. Do not copy the next found line to the beginning of LineReader.buf - just track the next line's start index with LineReader.bufOffset.
This reduces memory copying when many lines are read into LineReader.buf by a single read() syscall.
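A condensed sketch of the zero-copy approach (simplified; the real LineReader also refills buf from the underlying reader when no newline is found):

```go
package main

import (
	"bytes"
	"fmt"
)

type LineReader struct {
	Line      []byte // aliases buf; valid until the next NextLine call
	buf       []byte
	bufOffset int // start index of the not-yet-consumed part of buf
}

// NextLine advances to the next newline-terminated line without copying:
// Line points into buf, and bufOffset tracks the consumed prefix.
func (lr *LineReader) NextLine() bool {
	n := bytes.IndexByte(lr.buf[lr.bufOffset:], '\n')
	if n < 0 {
		return false // the caller would refill buf here
	}
	lr.Line = lr.buf[lr.bufOffset : lr.bufOffset+n]
	lr.bufOffset += n + 1
	return true
}

func main() {
	lr := &LineReader{buf: []byte("foo\nbar\n")}
	for lr.NextLine() {
		fmt.Printf("%s\n", lr.Line)
	}
}
```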
When vmselect processes a rollup function, it fetches all the raw samples on the requested `start-end` interval of the query. It then loops through the raw samples, picks ranges of samples based on the provided `step` interval, and invokes the rollup function for each of the picked ranges.
During this processing, vmselect always populates the `realPrevValue` field with the closest previous raw sample value before the picked range of samples. This `realPrevValue` is used by rollup functions like increase_pure or delta to decide whether a counter change happened or not. For example, we get a counter value == 1. If we've seen this counter before and its value was also 1, then no change happened. If we didn't see it before, then this counter should have started with value=0 and we need to account for a `1-0=1` change. All this is required to deal with situations when scrapes are missing or `step` is too small.
However, vmselect doesn't check how "old" the `realPrevValue` is. In other words, it doesn't respect the staleness interval when picking it. As a result, depending on the `start` and `end` params, vmselect can use a `realPrevValue` which is a couple of hours old, a gap that is unlikely to be a temporary scrape failure. Consequently, some increases can be incorrectly ignored by vmselect.
This change makes sure that vmselect doesn't populate `realPrevValue` with samples that are older than the staleness interval.
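A sketch of the guard (hypothetical names; the actual change lives in vmselect's rollup evaluation):

```go
package main

import "fmt"

// pickRealPrevValue returns the previous sample value only if it is not
// older than the staleness interval relative to the window start.
// Timestamps and the interval are in milliseconds.
func pickRealPrevValue(prevTimestamp, windowStart, stalenessIntervalMs int64, prevValue float64) (float64, bool) {
	if windowStart-prevTimestamp > stalenessIntervalMs {
		// The previous sample is too old - treat the series as restarted.
		return 0, false
	}
	return prevValue, true
}

func main() {
	// The previous sample is 2h old while the staleness interval is 5m,
	// so it must not be used as realPrevValue.
	v, ok := pickRealPrevValue(0, 2*3600*1000, 5*60*1000, 1)
	fmt.Println(v, ok) // 0 false
}
```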
-------------------
To reproduce, create a dataset with one metric `foo` which has samples with value=1 on an interval of a couple of hours at a 15s resolution, and a gap of an hour in the middle:
![image](https://github.com/user-attachments/assets/a39b2740-b741-45f8-ad18-093b7c57c3b3)
Then run the `increase(foo[1m])` expression on this time range (with cache disabled):
![image](https://github.com/user-attachments/assets/463cece1-f359-4c75-a96c-60092a31cab2)
As a result, there will be one increase at the beginning of the series, and no increase after the gap. Then change the time range so it starts in the middle of the gap:
![image](https://github.com/user-attachments/assets/f4a460c3-9fd1-4ec7-ab47-15e716ec1019)
Now there is an increase>0 because `realPrevValue` wasn't populated. This is wrong, because it hides the increase of the series. With the fix, the original increase query on the full time range shows 2 increases:
![image](https://github.com/user-attachments/assets/aa9d8a6b-7b22-41f6-9eb9-83b3113a6982)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This chapter is needed so it can be referred to from GitHub issues when CPU or memory profiles must be collected in order to investigate high CPU and/or RAM usage in VictoriaLogs.
Since 44b071296d the `evalNumber` function no longer updates MetricName tenancy information. This leads to a mismatch in metric names between the query result and the evaluated number for all tenants other than 0:0.
For example, the query `count(up) or 0` will return different results for tenants 0:0 and 1:1 (assuming `up` is present for both tenants):
- tenant 0:0 will only contain the result of `count(up)`
- tenant 1:1 will return both `count(up)` and `0`, since the metric names will not be matched
This restores setting of tenancy information on the metric name for single-tenant queries.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7987
---
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
The hint allows choosing the type of cache to be used for index search:
- in-memory parts store recently ingested samples and should use the main cache. This improves ingestion speed and cache hit ratio for queries accessing recently ingested samples.
- merges of file parts are performed in the background; using a separate cache allows avoiding pollution of the main cache with irrelevant entries.
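A hypothetical sketch of how such a hint could be threaded through an index search call (illustrative names, not the actual indexdb API):

```go
package main

import "fmt"

type cacheHint int

const (
	// useMainCache is for searches over recently ingested samples
	// (in-memory parts), which benefit from the main cache.
	useMainCache cacheHint = iota
	// useMergeCache is for background merges of file parts, so they
	// don't pollute the main cache with irrelevant entries.
	useMergeCache
)

func searchTagIDs(key string, hint cacheHint) {
	switch hint {
	case useMainCache:
		fmt.Println("lookup", key, "via main cache")
	case useMergeCache:
		fmt.Println("lookup", key, "via merge-only cache")
	}
}

func main() {
	searchTagIDs("metric_name", useMainCache)  // ingestion/query path
	searchTagIDs("metric_name", useMergeCache) // background merge path
}
```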
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7182
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
This commit makes the interval for checking whether the final dedup process for historical data should be started configurable. It allows spreading resource utilisation of multiple vmstorage/vmsingle instances in time, since final dedup may add additional pressure on disk and backup systems and make the cluster less stable. The storage unconditionally adds 25% jitter to the provided value; this should simplify configuration management in the Kubernetes ecosystem, because Kubernetes application pods must have the same configuration.
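A minimal sketch of the jitter logic (hypothetical helper; the actual flag name and wiring differ):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// withJitter spreads the configured check interval by adding up to 25%
// of random jitter, so identically configured pods don't all start the
// final dedup check at the same moment.
func withJitter(d time.Duration) time.Duration {
	return d + time.Duration(rand.Int63n(int64(d)/4))
}

func main() {
	const checkInterval = time.Hour
	fmt.Println(withJitter(checkInterval)) // e.g. 1h9m17.3s
}
```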
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7880
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
This reduces memory usage during test execution and makes test runs more reliable, since they sometimes crashed with OOM on small GitHub runners.
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Recently a new DataDog extension was released, where custom endpoint configuration was added.
Commit c7fc0d0d2f enabled skipping alerts in case there are no labels present for an alert. This made the clause that was adding a comma to the JSON list incorrect, as it is not possible to determine whether the next alert will be skipped or not.
This fix renders all alert labels in advance, which allows properly formatting the JSON payload for the Alertmanager notification.
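A sketch of the safer approach (illustrative; vmalert actually renders the payload via templates): render every entry first, then join, so comma placement never depends on whether the next alert is skipped:

```go
package main

import (
	"fmt"
	"strings"
)

// renderAlertsJSON renders each alert up front and joins the results,
// instead of deciding comma placement while streaming entries.
func renderAlertsJSON(alerts []map[string]string) string {
	var rendered []string
	for _, labels := range alerts {
		if len(labels) == 0 {
			continue // skipped alerts no longer break comma placement
		}
		var pairs []string
		for k, v := range labels {
			pairs = append(pairs, fmt.Sprintf("%q:%q", k, v))
		}
		rendered = append(rendered, "{"+strings.Join(pairs, ",")+"}")
	}
	return "[" + strings.Join(rendered, ",") + "]"
}

func main() {
	fmt.Println(renderAlertsJSON([]map[string]string{
		{"alertname": "HighCPU"},
		{}, // skipped: no labels
		{"alertname": "LowDisk"},
	}))
	// [{"alertname":"HighCPU"},{"alertname":"LowDisk"}]
}
```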
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7985
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
There has been a question in our public Slack on whether the length
limit of a log record is going to be changed. See:
https://victoriametrics.slack.com/archives/C05UNTPAEDN/p1736156255119689
This PR documents the max length and explains why it has been chosen.
This FAQ section could serve as an answer to more questions like this.
---------
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Fix the function name in a comment.
Signed-off-by: cuiweiyuan <cuiweiyuan@aliyun.com>
Fixes an error in `vmauth` when discovering IPv6 addresses.
`vmauth` attempts to [slice till `:`](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmauth/auth_config.go#L397) in the discovered addresses without accounting for IPv6. This causes it to fail in IPv6-only environments.
```sh
$ nslookup vmselect.ns.svc.cluster.local
...
Name: vmselect.ns.svc.cluster.local
Address: 2600:dead:beef:dead:beef::8
```
```sh
$ kubectl logs -f vmauth
...
error: dial tcp: lookup 2600: no such host
```
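A sketch of the usual fix for this class of bug: build and split host:port pairs with the standard net package, which brackets IPv6 addresses, instead of slicing at the first `:`:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	addr := "2600:dead:beef:dead:beef::8"

	// Naive slicing at the first ':' produces the bogus host from the
	// error above.
	if i := strings.Index(addr, ":"); i >= 0 {
		fmt.Println(addr[:i]) // "2600"
	}

	// net.JoinHostPort brackets IPv6 hosts correctly...
	hostPort := net.JoinHostPort(addr, "8481")
	fmt.Println(hostPort) // [2600:dead:beef:dead:beef::8]:8481

	// ...and net.SplitHostPort reverses it safely.
	host, port, err := net.SplitHostPort(hostPort)
	fmt.Println(host, port, err) // 2600:dead:beef:dead:beef::8 8481 <nil>
}
```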
---------
Co-authored-by: f41gh7 <nik@victoriametrics.com>
This pull request adds support for autocomplete in LogsQL queries. The
new feature provides suggestions for field names, field values, and pipe
names as you type.
---------
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Added links to badges and made them clickable at docs.victoriametrics.com.
The point of the new section is to highlight publicly available
playgrounds for users. All of them were mentioned in other parts of the
documentation, but we didn't have all of them in one place before.
Signed-off-by: hagen1778 <roman@victoriametrics.com>