Previously, the performance of stream.Parse could be limited by a mutex.Lock around the callback function, which used a shared writeContext. With complicated relabeling rules and any slowness in the pushData function, this could significantly degrade the processing performance of parsed rows.
This commit removes the locks and makes parsed rows processing lock-free, in the same manner as it is implemented in the push ingestion processing.
Implementation details:
- Removing the global lock around the stream.Parse callback.
- Using atomic operations for counters.
- Creating write contexts per callback instead of sharing one.
- Improving series limit checking with sync.Once.
- Optimizing labels hash calculation with buffer pooling.
- Adding comprehensive tests for concurrency correctness.
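A minimal sketch of the lock-free pattern described above, with illustrative names rather than the actual VictoriaMetrics internals: write contexts come from a pool instead of being shared behind a mutex, counters are atomic, and the series limit is reported via sync.Once.

```go
package example

import (
	"sync"
	"sync/atomic"
)

// writeContext is per-callback scratch state; instances are pooled instead of
// being shared behind a mutex.
type writeContext struct {
	rows []string
}

var wcPool = sync.Pool{
	New: func() any { return &writeContext{} },
}

type streamCallback struct {
	rowsProcessed  atomic.Int64
	limitExceeded  sync.Once
	reportExceeded func()
	seriesLimit    int64
}

// processRows may be called concurrently by stream.Parse workers without locking.
func (sc *streamCallback) processRows(rows []string) {
	wc := wcPool.Get().(*writeContext)
	defer wcPool.Put(wc)

	wc.rows = append(wc.rows[:0], rows...)
	// push wc.rows further down the pipeline here.

	if sc.rowsProcessed.Add(int64(len(rows))) > sc.seriesLimit && sc.reportExceeded != nil {
		sc.limitExceeded.Do(sc.reportExceeded) // report the exceeded limit exactly once
	}
}
```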
Benchmark performance:
```
# before
BenchmarkScrapeWorkScrapeInternalStreamBigData-10 13 81973945 ns/op 37.68 MB/s 18947868 B/op 197 allocs/op
# after
goos: darwin
goarch: arm64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape
cpu: Apple M1 Pro
BenchmarkScrapeWorkScrapeInternalStreamBigData-10 74 15761331 ns/op 195.98 MB/s 15487399 B/op 148 allocs/op
PASS
ok github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape 1.806s
```
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8159
---------
Signed-off-by: Maksim Kotlyar <kotlyar.maksim@gmail.com>
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
These errors could be caused by intermittent network issues, especially
when proxies are used for accessing S3 storage. Previously, such an
error would abort the backup/restore process and require manual intervention
to ensure backup consistency.
This commit adds automatic retries for such errors to improve the reliability
of backups and their resilience to network issues.
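A minimal retry sketch of the idea, assuming a hypothetical `isTransient` classifier for network errors (illustrative, not the actual vmbackup/vmrestore code):

```go
package example

import (
	"context"
	"time"
)

// doWithRetries retries op on transient errors with exponential backoff,
// so intermittent network failures no longer abort the whole backup/restore.
func doWithRetries(ctx context.Context, attempts int, op func() error, isTransient func(error) bool) error {
	backoff := time.Second
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil || !isTransient(err) {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
		}
		backoff *= 2 // exponential backoff between attempts
	}
	return err
}
```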
### Describe Your Changes
During the initial flush with deduplication and windows enabled, the lower
timestamps threshold is set to the upper bound of the next deduplication
interval, which leads to ignoring all samples on subsequent intervals.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 511517f491)
Previously, vmagent could incorrectly store a partial scrape response
in case of a scraping error. This could happen if the `sw.ReadData` call fetched
some chunked response and stored it in the buffer, and a context deadline
exceeded error happened later.
As a result, on the next scrape iteration this partial response could
be forwarded to the `sw.sendStaleSeries(lastScrape...)` function call
and lead to a `Prometheus line` parsing error.
This commit properly sets the response body to an empty value in case of
a scraping error. This prevents storing a partial scrape response body and
no longer sends partial staleness markers to the remote storage.
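A hedged sketch of the idea with a hypothetical helper (not the actual scrapeWork code): whatever was buffered before the error is discarded, so it can never be replayed as staleness markers on the next scrape.

```go
package example

import (
	"context"
	"io"
)

// readFullBody reads the scrape response, but returns an empty body on any
// error (including context deadline exceeded), so a partially received
// chunked response is never kept for the next scrape iteration.
func readFullBody(ctx context.Context, r io.Reader, dst []byte) ([]byte, error) {
	dst = dst[:0]
	buf := make([]byte, 16*1024)
	for {
		if err := ctx.Err(); err != nil {
			return dst[:0], err // drop the partial body
		}
		n, err := r.Read(buf)
		dst = append(dst, buf[:n]...)
		if err == io.EOF {
			return dst, nil
		}
		if err != nil {
			return dst[:0], err // drop the partial body
		}
	}
}
```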
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8528
### Describe Your Changes
`prasedRelabelConfig` -> `parsedRelabelConfig`
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
(cherry picked from commit 127d4f37b8)
### Describe Your Changes
Fix many spelling errors and some grammar, including misspellings in
filenames.
The change also fixes a typo in the metric name: `vm_mmaped_files` becomes `vm_mmapped_files`.
While this is a breaking change, this metric isn't used in alerts or dashboards,
so the impact on users should be low.
The change also deprecates `cspell` as it is much heavier and less usable.
---------
Co-authored-by: Andrii Chubatiuk <achubatiuk@victoriametrics.com>
Co-authored-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
(cherry picked from commit 76d205feae)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
This reduces the size of LogRows.streamTagCanonicals by 1/3 because of the eliminated `cap` field
in the slice header (reflect.SliceHeader) compared to the string header (reflect.StringHeader).
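For reference, the header-size difference on 64-bit platforms can be verified with `unsafe.Sizeof`:

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	fmt.Println(unsafe.Sizeof([]byte(nil))) // 24: pointer + len + cap
	fmt.Println(unsafe.Sizeof(""))          // 16: pointer + len
}
```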
This commit adds lib/chunkedbuffer.Buffer - an in-memory chunked buffer
optimized for random access via MustReadAt() function.
It is better than bytesutil.ByteBuffer for storing large volumes of data,
since it stores the data in chunks of a fixed size (4KiB at the moment)
instead of using a contiguous memory region. This has the following benefits over bytesutil.ByteBuffer:
- reduced memory fragmentation
- reduced memory re-allocations when new data is written to the buffer
- reduced memory usage, since the allocated chunks can be re-used
by other Buffer instances after Buffer.Reset() call
Performance tests show up to 2x memory reduction for VictoriaLogs
when ingesting logs with a big number of fields (aka wide events) at high ingestion rates.
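A minimal sketch of the chunked-buffer idea, not the actual lib/chunkedbuffer API: data lives in fixed-size chunks, so growing the buffer never copies existing data and the chunks can be pooled after Reset.

```go
package chunkedbuffer

const chunkSize = 4 * 1024

// Buffer keeps data in fixed-size chunks instead of one contiguous slice.
type Buffer struct {
	chunks [][]byte
	size   int
}

// Write appends p to the buffer, allocating new chunks as needed.
func (b *Buffer) Write(p []byte) (int, error) {
	n := len(p)
	for len(p) > 0 {
		if b.size%chunkSize == 0 {
			b.chunks = append(b.chunks, make([]byte, 0, chunkSize))
		}
		last := &b.chunks[len(b.chunks)-1]
		m := chunkSize - len(*last)
		if m > len(p) {
			m = len(p)
		}
		*last = append(*last, p[:m]...)
		b.size += m
		p = p[m:]
	}
	return n, nil
}

// MustReadAt copies len(p) bytes starting at offset off; the caller must
// guarantee that off+len(p) does not exceed the total written size.
func (b *Buffer) MustReadAt(p []byte, off int) {
	for len(p) > 0 {
		chunk := b.chunks[off/chunkSize]
		m := copy(p, chunk[off%chunkSize:])
		p = p[m:]
		off += m
	}
}
```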
Pre-allocate the needed slice of strings and then assign items to it by index
instead of appending them. This reduces the number of memory allocations
and improves performance a bit.
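For illustration, the pattern looks like this (hypothetical helper):

```go
package example

import "fmt"

// toStrings pre-allocates the result and assigns by index, avoiding the
// repeated re-allocations an append-based loop may trigger.
func toStrings(items []fmt.Stringer) []string {
	a := make([]string, len(items))
	for i, item := range items {
		a[i] = item.String()
	}
	return a
}
```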
…an 64KB at Reset
This commit reverts
b58e2ab214
as it has negative impacts when ByteBuffer is used for workloads that
always exceed 64KiB size. This significantly slows down affected
components because:
* buffers aren't being reused;
* growing new buffers to >64KiB is very slow.
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8501
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Signed-off-by: hagen1778 <roman@victoriametrics.com>
- Properly decode a protobuf-encoded Loki request if it has no Content-Encoding header.
The protobuf Loki message is snappy-encoded by default, so snappy decoding must be used
when the Content-Encoding header is missing (see the sketch after this list).
- Return back the previous signatures of parseJSONRequest and parseProtobufRequest functions.
This eliminates the churn in tests for these functions. This also fixes broken
benchmarks BenchmarkParseJSONRequest and BenchmarkParseProtobufRequest, which consume
the whole request body on the first iteration and do nothing on subsequent iterations.
- Put the CHANGELOG entries into correct places, since they were incorrectly put into already released
versions of VictoriaMetrics and VictoriaLogs.
- Add support for reading zstd-compressed data ingestion requests into the remaining protocols
at VictoriaLogs and VictoriaMetrics.
- Remove the `encoding` arg from PutUncompressedReader() - it has enough information about
the passed reader arg in order to properly deal with it.
- Add ReadUncompressedData to lib/protoparser/common for reading uncompressed data from the reader until EOF.
This allows removing repeated code across request-based protocol parsers without streaming mode.
- Consistently limit data ingestion request sizes, which can be read by ReadUncompressedData function.
Previously this wasn't the case for all the supported protocols.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8416
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8380
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8300
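A hedged sketch of the snappy fallback for protobuf Loki requests mentioned in the first item (illustrative, not the actual VictoriaLogs handler):

```go
package example

import (
	"fmt"

	"github.com/golang/snappy"
)

// decodeLokiProtobufBody treats a missing Content-Encoding header the same as
// "snappy", because Loki clients snappy-encode protobuf push requests by default.
func decodeLokiProtobufBody(contentEncoding string, body []byte) ([]byte, error) {
	switch contentEncoding {
	case "", "snappy":
		return snappy.Decode(nil, body)
	default:
		return nil, fmt.Errorf("unsupported Content-Encoding=%q for protobuf Loki request", contentEncoding)
	}
}
```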
The `ignore_fields` HTTP query arg can contain prefixes ending with '*'.
For example, `ignore_fields=foo.*,bar` skips all the fields starting with `foo.`
during data ingestion.
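A short sketch of the prefix semantics (hypothetical helper):

```go
package example

import "strings"

// isIgnoredField reports whether name matches one of the ignore_fields entries;
// entries ending with '*' are treated as prefixes, e.g. "foo.*" matches "foo.bar".
func isIgnoredField(name string, ignoreFields []string) bool {
	for _, f := range ignoreFields {
		if strings.HasSuffix(f, "*") {
			if strings.HasPrefix(name, strings.TrimSuffix(f, "*")) {
				return true
			}
		} else if name == f {
			return true
		}
	}
	return false
}
```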
Long constant fields cannot be stored in columnsHeader as a const column,
because their size exceeds maxConstColumnValueSize, so they are stored as regular values.
This commit optimizes storing such fields by storing only a single value
across the field values in a block instead of storing multiple values.
This should improve data ingestion performance a bit. This also should improve query
performance when the query accesses such fields because of better cache locality.
It also improves persisting of constant string lengths by storing them only once.
This commit introduces common readers for multiple compression encodings.
Currently, the supported encodings are:
* zstd
* gzip
* deflate
* snappy
It adds the new common reader to all the VictoriaLogs ingestion protocols
and updates opentelemetry metrics parsing for VictoriaMetrics components.
It also ports the zstd stream parser from the cluster branch.
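A hedged sketch of the common reader, using the widely available compression packages rather than the exact VictoriaMetrics helpers:

```go
package example

import (
	"compress/flate"
	"compress/gzip"
	"fmt"
	"io"

	"github.com/golang/snappy"
	"github.com/klauspost/compress/zstd"
)

// uncompressedReader wraps r with a decompressor chosen by Content-Encoding.
func uncompressedReader(r io.Reader, encoding string) (io.ReadCloser, error) {
	switch encoding {
	case "zstd":
		zr, err := zstd.NewReader(r)
		if err != nil {
			return nil, err
		}
		return zr.IOReadCloser(), nil
	case "gzip":
		return gzip.NewReader(r)
	case "deflate":
		return flate.NewReader(r), nil
	case "snappy":
		return io.NopCloser(snappy.NewReader(r)), nil
	case "":
		return io.NopCloser(r), nil
	default:
		return nil, fmt.Errorf("unsupported Content-Encoding=%q", encoding)
	}
}
```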
Related issues:
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8380
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8300
---------
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: f41gh7 <nik@victoriametrics.com>
Previously, if the command-line flag value `-remoteWrite.showURL` changed, vmagent dropped the content of persistent queues. This is not the expected behavior and may lead to data loss in the queue.
Furthermore, if the command-line flag `-remoteWrite.showURL` is set to `true`, any change to the URL query arguments leads to a persistent queue drop. The most common use cases are the Kafka and GCP Pub/Sub integrations, which use URL query arguments for client configuration.
It also complicates copying the content of a persistent queue between vmagent instances, since it requires properly changing the name inside metainfo.json.
This commit removes the persistent queue name equality check from `lib/persistentqueue`. This check was added as an additional protection against on-disk data corruption.
It is safe to skip this check for vmagent, because vmagent encodes remoteWrite.url as part of the path to the queue, which guarantees that there will be no collision.
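The collision-free path derivation could look roughly like this (hypothetical sketch; the exact naming scheme in vmagent differs):

```go
package example

import (
	"fmt"
	"path/filepath"

	"github.com/cespare/xxhash/v2"
)

// queuePath derives the persistent queue directory from a hash of the remote
// write URL, so two different URLs can never share the same on-disk queue.
func queuePath(dataDir, remoteWriteURL string, idx int) string {
	h := xxhash.Sum64String(remoteWriteURL)
	return filepath.Join(dataDir, "persistent-queue", fmt.Sprintf("%d_%016X", idx+1, h))
}
```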
related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8477.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: f41gh7 <nik@victoriametrics.com>
Previously, the TLS config was only created for URLs with the `https` scheme.
This could lead to unexpected errors when the original URL redirects to an
`https` one, since the TLS config was not applied.
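A minimal sketch of the idea: configure TLS on the HTTP transport unconditionally, so a redirect from plain `http` to `https` still uses the intended TLS settings (illustrative code, not the actual VictoriaMetrics HTTP client helpers):

```go
package example

import (
	"crypto/tls"
	"net/http"
)

// newClient sets TLSClientConfig even when the initial URL uses the http
// scheme, because the server may redirect the request to an https endpoint.
func newClient(tlsCfg *tls.Config) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: tlsCfg,
		},
	}
}
```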
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8494
Add an integration test to confirm that deduplication works for the
current month. See #6965.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Almost all storage API operations, both ingestion and retrieval, involve
writing and/or reading the indexdb. However, during these operations,
the indexdb refcount is not incremented. This may lead to panics if
indexdb is rotated more than once during these operations.
This commit increments the refcount before using indexdb and decrements it
after use.
Note that rotating indexdb more than once during some operation is an
impossible case under normal circumstances as the min retention period
is 1 day (i.e. the indexdb will be rotated once per day). However, we
want the storage to behave correctly in all cases.
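A sketch of the refcounting discipline with illustrative names:

```go
package example

import "sync/atomic"

type indexDB struct {
	refCount atomic.Int32
}

func (db *indexDB) incRef() { db.refCount.Add(1) }

func (db *indexDB) decRef() {
	if db.refCount.Add(-1) == 0 {
		// the last user is gone; resources may be freed here
	}
}

// withIndexDB pins the current indexdb for the whole operation, so a rotation
// happening concurrently cannot drop it from under the caller.
func withIndexDB(current func() *indexDB, op func(*indexDB) error) error {
	db := current()
	db.incRef()
	defer db.decRef()
	return op(db)
}
```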
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Fix the metric that shows the number of active time series when the per-day index is disabled. Previously, once the per-day index was disabled, the active time series metric would stop being populated and the `Active time series` chart would show 0.
See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8411.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
### Fix scrapePool name
If I do some magic in the scrape file and manipulate the job name, then
Prometheus will show scrapePool as the original job name in the targets
API, but vmagent will set it to the final value, which is wrong.
Example:
```
job: consul-targets
...
- source_labels: [ __meta_consul_service ]
  regex: (\w+)[_-]exporter
  target_label: job
  replacement: $1
```
A curl to the Prometheus API will show
`"scrapePool": "consul-targets",`
while vmagent shows:
`"scrapePool": "node",`
before changes:
```
curl -s 'http://localhost:8429/api/v1/targets' | jq -r '.data.activeTargets[].scrapePool'| sort|uniq
blackbox
pgbackrest
postgres
```
after changes:
```
curl -s 'http://localhost:8429/api/v1/targets' | jq -r '.data.activeTargets[].scrapePool'| sort|uniq
blackbox
consul-targets
```
### Checklist
The following checks are **mandatory**:
- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 486b9e1c64)
### Describe Your Changes
fixes #8469
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
(cherry picked from commit c174a046e2)
This feature allows tracking query requests by metric names. The tracker
state is stored in memory, capped at 1/100 of the memory allocated to the
storage. If the cap is exceeded, the tracker rejects adding new items and only
registers query requests for already observed metric names.
This feature is disabled by default and is enabled by the new flag
`-storage.trackMetricNamesStats`.
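A minimal sketch of the capped-tracker behaviour described above (illustrative, not the actual implementation):

```go
package example

import "sync"

type metricNamesTracker struct {
	mu       sync.Mutex
	maxBytes int
	curBytes int
	counts   map[string]uint64
}

func newMetricNamesTracker(maxBytes int) *metricNamesTracker {
	return &metricNamesTracker{maxBytes: maxBytes, counts: make(map[string]uint64)}
}

// registerQuery counts a query request for the given metric name. Once the
// memory cap is reached, unknown names are ignored, while counters for
// already observed names keep growing.
func (t *metricNamesTracker) registerQuery(metricName string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.counts[metricName]; !ok {
		if t.curBytes+len(metricName) > t.maxBytes {
			return // cap exceeded: do not track new names
		}
		t.curBytes += len(metricName)
	}
	t.counts[metricName]++
}
```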
New APIs were added to the select component:
* /api/v1/status/metric_names_stats - returns a JSON object with usage statistics.
* /admin/api/v1/status/metric_names_stats/reset - resets the internal state of the tracker and the tsid cache.
New metrics were added for this feature:
* vm_cache_size_bytes{type="storage/metricNamesUsageTracker"}
* vm_cache_size{type="storage/metricNamesUsageTracker"}
* vm_cache_size_max_bytes{type="storage/metricNamesUsageTracker"}
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4458
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Previously, the opentelemetry attribute parser added extra field names according to
the Go JSON serialization rules for structs:
```
type AnyValue struct {
	StringValue string
}
```
It was serialized into:
```
{"StringValue": "some-string"}
```
While opentelemetry-collector serializes it as
```
"some-string"
```
This commit changes this behaviour and makes the parser compatible with the opentelemetry-collector format. See the test cases for examples.
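For illustration, the compatible serialization can be achieved with a custom marshaler (a sketch, not the actual parser code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type AnyValue struct {
	StringValue string
}

// MarshalJSON emits only the inner value, matching the opentelemetry-collector
// output shown above instead of {"StringValue": "..."}.
func (v AnyValue) MarshalJSON() ([]byte, error) {
	return json.Marshal(v.StringValue)
}

func main() {
	b, _ := json.Marshal(AnyValue{StringValue: "some-string"})
	fmt.Println(string(b)) // "some-string"
}
```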
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8384
Since the funcs `ParseDuration` and `ParseTimeMsec` are used in vlogs,
vmalert, victoriametrics and other components, importing promutils only
for this reason makes them export the irrelevant
`vm_rows_invalid_total{type="prometheus"}` metric.
This change removes `vm_rows_invalid_total{type="prometheus"}` metric
from /metrics page for these components.
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 63f6ac3ff8)
Fix parsing of IPv6 addresses after discovery. Previously, it could lead
to a target being discovered and then discarded.
See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8374
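The usual pitfall here is concatenating host and port by hand; `net.JoinHostPort` adds the brackets IPv6 targets need (a generic example, not the exact discovery code):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// IPv6 hosts must be wrapped in brackets to remain parsable as host:port.
	fmt.Println(net.JoinHostPort("2001:db8::1", "9100")) // [2001:db8::1]:9100
}
```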
---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
(cherry picked from commit 99de272b72)
`MustParsePromMetrics` imports `lib/protoparser/prometheus`, and this
package exposes the following metrics:
```
vm_protoparser_rows_read_total{type="promscrape"}
vm_rows_invalid_total{type="prometheus"}
```
It means every package that uses `lib/prompbmarshal` will start exposing
these metrics. For example, vlogs imports `lib/protoparser/common` which
uses `lib/prompbmarshal.Label`. And only because of this, vlogs starts
exposing unrelated prometheus metrics on its /metrics page.
Moving `MustParsePromMetrics` to `lib/protoparser/prometheus` seems like
the least intrusive change.
-----------
Depends on another change
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8403
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Previously, if an indexDB search failed for some reason during a search in the previous indexDB (aka extDB), VictoriaMetrics stored an empty search result in the cache. This could cause incorrect search results for subsequent requests.
This commit checks the search error and stores the request results in the cache only on success.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8345
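A sketch of the fixed caching logic with hypothetical names:

```go
package example

// searchWithCache stores the search result only when the search succeeded,
// so a failed extDB lookup can no longer poison the cache with empty results.
func searchWithCache(key string, cache map[string][]uint64, search func() ([]uint64, error)) ([]uint64, error) {
	if ids, ok := cache[key]; ok {
		return ids, nil
	}
	ids, err := search()
	if err != nil {
		return nil, err // do not cache results produced by a failed search
	}
	cache[key] = ids
	return ids, nil
}
```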