There are three possible cases for blockResultColumn.getValuesEncoded():
- The blockResultColumn.valuesEncoded is already set. This is the case for a manually constructed blockResultColumn,
or when it is cloned via blockResult.clone().
- The blockResultColumn.chSrc is non-nil. In this case the valuesEncoded must be read from the corresponding br.bs,
by applying the br.bm filter.
- The blockResultColumn.cSrc is non-nil. In this case the valuesEncoded must be read from the corresponding br.brSrc,
by applying the br.bm filter.
From a maintainability and debuggability PoV it is better to write this logic in a single getValuesEncoded() function
instead of indirecting it via valuesEncodedCreator.
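The three-case dispatch above can be sketched as a single function. The surrounding types here are simplified stand-ins (the real blockResultColumn, br.bs, br.bm and br.brSrc are more involved), so treat this as an illustration of the control flow, not the actual implementation:

```go
package main

import "fmt"

// columnSource is a simplified stand-in for the real source structs.
type columnSource struct{ values []string }

type blockResultColumn struct {
	valuesEncoded []string      // case 1: already set (manual construction or blockResult.clone())
	chSrc         *columnSource // case 2: read from br.bs, applying the br.bm filter
	cSrc          *columnSource // case 3: read from br.brSrc, applying the br.bm filter
}

// getValuesEncoded keeps all three cases in one place instead of hiding
// them behind a valuesEncodedCreator indirection.
func (c *blockResultColumn) getValuesEncoded() []string {
	switch {
	case c.valuesEncoded != nil:
		return c.valuesEncoded
	case c.chSrc != nil:
		c.valuesEncoded = c.chSrc.values // the real code applies the br.bm filter here
	case c.cSrc != nil:
		c.valuesEncoded = c.cSrc.values // the real code applies the br.bm filter here
	}
	return c.valuesEncoded
}

func main() {
	c := &blockResultColumn{chSrc: &columnSource{values: []string{"x", "y"}}}
	fmt.Println(c.getValuesEncoded())
}
```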
VictoriaLogs stores min and max column values per every data block.
These values were incorrectly used by the min() and max() stats functions
inside the updateStatsForAllRows() function. It was assumed that this function
could use the min / max values stored in the block, since all the rows in the blockResult
must be processed. But the blockResult contains _filtered_ rows,
i.e. it may have fewer rows than the original block.
In this case it is unsafe to assume that the min / max values from the original block
exist in the filtered rows inside the blockResult.
Add a blockResult.isFull() function, which returns true if the blockResult contains all rows
from the original block (i.e. they aren't filtered). Use this function in the fast path,
and fall back to the slow path, which reads the column values and iterates over them.
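The fast/slow path split can be sketched as follows. isFull() is taken from the description above, while the field names (row counts, the value slice) are hypothetical simplifications:

```go
package main

import "fmt"

// blockResult holds filtered rows from an original block; srcRowsCount is the
// row count of the original block (hypothetical field name).
type blockResult struct {
	rowsCount    int
	srcRowsCount int
	blockMin     float64   // per-block min stored by VictoriaLogs
	values       []float64 // filtered column values
}

// isFull returns true if the blockResult contains all rows from the original
// block, i.e. nothing was filtered out.
func (br *blockResult) isFull() bool {
	return br.rowsCount == br.srcRowsCount
}

// minValue uses the stored per-block min only on the fast path; otherwise it
// falls back to scanning the filtered values.
func (br *blockResult) minValue() float64 {
	if br.isFull() {
		return br.blockMin
	}
	m := br.values[0]
	for _, v := range br.values[1:] {
		if v < m {
			m = v
		}
	}
	return m
}

func main() {
	br := &blockResult{rowsCount: 2, srcRowsCount: 3, blockMin: -1, values: []float64{5, 3}}
	fmt.Println(br.minValue())
}
```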
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [X] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
* update sorting to reflect more important changes first
* use issue numbers instead of `this issue` wording
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Add a link to UI on the welcome page.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
* actualize information
* explain alert state in more details
* re-order content by its importance
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Fix logs not updating after query change
Related issue: #8912
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
### Describe Your Changes
Fixes UI freeze in logs timeline after repeated zooming and panning.
The issue was caused by excessive query requests triggered during fast
timeline interactions. Added debounce to limit the frequency of
requests, similar to the approach used in vmui for metrics.
Related issue: #8655
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* tidy up wording
* mention available integrations in quickstart
* cross-reference integrations and data ingestion
* move these docs closer to each other
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Limit number of points per series and total response size (30 MiB) on
the Raw Query page to improve UI stability.
Related issue: #7895
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
* move example alerts out of /rules folder, since this folder should
contain only useful rules
* expose ports for vlogs and vmetrics for local debug
* add some comments explaining vmalert config
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The new templating option can contain one of the following values: `prometheus`,
`vlogs` or `graphite`.
It represents the datasource type this rule is evaluated against. The
new templating option can be used in the rule's annotations or for routing to a specific
destination (e.g. a Grafana datasource).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
This functionality was broken in
0e313e5355
Was caught by integration tests:
```
--- FAIL: TestSingleVMAuthRouterWithInternalAddr (5.00s)
vmauth_routing_test.go:148: Could not start vmauth: could not extract some or all regexps from stderr: ["pprof handlers are exposed at http://(.*:\\d{1,5})/debug/pprof/"]
```
Signed-off-by: hagen1778 <roman@victoriametrics.com>
VictoriaLogs happily accepts logs without _msg field, and there are legitimate cases where the _msg field isn't needed.
Remove the warning in order to reduce the level of confusion for VictoriaLogs users.
This feature is useful when ingesting logs, which may contain timestamps across different fields.
Then the first non-empty field from the _time_field list is used as a timestamp field.
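A sketch of the selection logic, assuming an ordered list of candidate timestamp fields configured via `_time_field`. The helper name and the map-based log entry are illustrative, not the actual implementation:

```go
package main

import "fmt"

// firstNonEmptyTimeField returns the first field from timeFields that has a
// non-empty value in the log entry, or "" if none is set.
func firstNonEmptyTimeField(entry map[string]string, timeFields []string) string {
	for _, f := range timeFields {
		if entry[f] != "" {
			return f
		}
	}
	return ""
}

func main() {
	entry := map[string]string{"ts": "", "time": "2025-01-01T00:00:00Z"}
	fmt.Println(firstNonEmptyTimeField(entry, []string{"ts", "time"}))
}
```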
This is needed in order to be able to find these playgrounds by `demo` word.
While at it, replace misleading `playground` word with less confusing wording across different places of the docs.
Previously `*_last_reload_successful` metrics were set to 1 even when the respective
configuration was not defined, except for stream aggregation and scraping.
This commit initializes config reload related metrics only when the respective
configuration is set.
Related issue:
https://github.com/VictoriaMetrics/helm-charts/issues/2119
This commit adds the following changes to the enterprise version:
- add make target for testing in FIPS mode
- disallow using OVH in FIPS mode. OVH is using SHA1 for authentication via headers and SHA1 is not allowed to be used in FIPS mode. There is no option to switch to another hashing algorithm in OVH API, so disabling it completely.
- build FIPS binaries together with regular ones. This ensures that FIPS builds are always up to date and compatible with regular ones.
- disable CGO in FIPS builds for vmagent, since vmagent imports a Kafka library which uses CGO imports. This might lead to using an OpenSSL version which is not certified for FIPS mode. Using a pure Go implementation avoids this and keeps all validations on the Go build process side.
- vm_retention_filtering_partitions_scheduled is a gauge metric that shows how
many partitions the retention filtering is currently being applied to.
- vm_retention_filtering_partitions_scheduled_size_bytes is a gauge metric that
shows the total size (in bytes) of partitions the retention filtering is
currently being applied to.
These metrics are similar to vm_downsampling_partitions_scheduled and
vm_downsampling_partitions_scheduled_size_bytes and can be used for observing
how much time it took for a given vmstorage instance to perform retention
filtering across this many partitions whose total size was this many bytes.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
fixed typos in docs and code
fixed collision in cloud docs
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
* {lib/backup,app/}: gracefully cancel the currently running operation during graceful shutdown
Make the backup/restore process interruptible by passing a global context from the operation caller.
This is needed in order to reduce shutdown delays when backup/restore cancellation is requested.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8554
This PR fixes two related bugs in the `replace_regexp` pipe:
1. **Infinite loop on empty matches when `limit` is not set**
[#8625](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8625)
When a regex pattern like `\d*`, `()`, or `\b` was used, the
implementation could repeatedly match the same zero-width position
without advancing the string, causing unbounded memory usage and
eventual OOM. This is now fixed by collecting all matches up front,
respecting the `limit`, and applying replacements in a single pass.
2. **Incorrect handling of anchors (`^` and `$`)**
The previous implementation applied regex matching to progressively
sliced substrings (`s = s[end:]`), which unintentionally caused anchor
patterns like `^` (start-of-string) to match at every new substring's
start. As a result, patterns that should have matched only once (e.g.,
`^|$`) ended up matching multiple times.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8625
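The fix can be sketched as follows. `replaceAllLimited` is a hypothetical standalone helper, not the actual pipe implementation, but it demonstrates the approach: find all matches on the original string up front (so anchors see the real string boundaries and zero-width matches cannot loop), truncate to `limit`, then build the result in a single pass:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// replaceAllLimited locates all matches on the original string before doing
// any replacement. Go's FindAllStringIndex advances past zero-width matches,
// so patterns like `\d*` or `^` cannot loop forever, and anchors are matched
// against the real string boundaries rather than sliced substrings.
func replaceAllLimited(s, pattern, replacement string, limit int) string {
	re := regexp.MustCompile(pattern)
	locs := re.FindAllStringIndex(s, -1)
	if limit > 0 && len(locs) > limit {
		locs = locs[:limit]
	}
	var b strings.Builder
	prev := 0
	for _, loc := range locs {
		b.WriteString(s[prev:loc[0]])
		b.WriteString(replacement)
		prev = loc[1]
	}
	b.WriteString(s[prev:])
	return b.String()
}

func main() {
	// `^` matches only once, at the real start of the string.
	fmt.Println(replaceAllLimited("foo", "^", ">", 0))
}
```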
Hopefully this should reduce the number of questions about whether vlinsert replicates or shards the incoming logs.
The answer: it spreads the incoming logs _evenly_ among the configured vlstorage nodes (i.e. it shards logs).
### Changes
Updated `lib/httpserver/httpserver.go` to include a flag that can toggle
CORS (defaults to true to keep the current behavior).
This PR relates to
[this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8680#issue-2983786438)
feature request
### Checklist
The following checks are **mandatory**:
- [x] My change does not break backwards compatibility (i.e., preserves
CORS being enabled unless specified otherwise via the
`-http.cors.disabled=true` flag & value)
---------
Co-authored-by: Jai Mehra <jai.mehra@nav-timing.safrangroup.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
#8842
As a quick fix, it seems to work. Checked on my failing tests
(they are not emitting this error anymore)
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
These were added while working on the partition index (#8134). Submitting
them separately in order to:
1. Make sure the partition index will not break anything and
2. Be able to compare performance before and after switching to the partition
index.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
The rationale behind this change was explained here
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2841788436
This PR moves only Prom and Grafana docs to show the change and limit
the amount of changes across docs.
Once accepted, the follow-up PRs will move Graphite, InfluxDB, OpenTSDB,
Datadog and NewRelic docs.
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Fixed an unsuccessful rebase in the PR, moved svg icons to a sprite and added
the VM logo to the UI
<img width="1290" alt="image"
src="https://github.com/user-attachments/assets/34cfdc61-6349-4320-b133-38965cb6c30f"
/>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
### Describe Your Changes
use the built-in max/min to simplify the code
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Signed-off-by: pkucode <cssjtu@163.com>
The new rule should notify the user when there is a job with 0 configured or
discovered targets, which is usually a sign of misconfiguration.
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
Signed-off-by: hagen1778 <roman@victoriametrics.com>
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8466
This pull request:
1. Add `GraphiteWrite`, `CSVImport`, `OpenTSDBHTTPImport` data import
methods for vmsingle and vminsert.
2. Add TCP `Write` method to `apptest.Client` to make it capable of
writing data in TCP-based protocol.
3. Add test cases:
1. for new import methods.
2. for corner cases placed under `app/victoria-metrics/main_test`.
4. Removed all test cases under `app/victoria-metrics/main_test`.
Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: f41gh7 <nik@victoriametrics.com>
### Describe Your Changes
Updated "> note" sections in /anomaly-detection/ doc path for new
formatting and improved clarity
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/victoriametrics/contributing/).
### Describe Your Changes
app/vmalert feat: add Debug to rule.Group
This is to enable/disable debug logs for the rules at group level.
Default is false.
Rule specific `debug` configuration takes priority over group level
configuration.
Relates to [app/vmalert: chore: log when response is
partial](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8522)
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: emreya <e.yazici1990@gmail.com>
Signed-off-by: Emre Yazici <e.yazici1990@gmail.com>
Signed-off-by: emreya <emre.yazici@adyen.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
(cherry picked from commit 90d2a6efed)
This is needed because Go interprets a backticked string as a measurement unit for the given command-line flag:
The listed type, here int, can be changed by placing a back-quoted name in the flag's usage string;
the first such item in the message is taken to be a parameter name to show in the message and the back quotes
are stripped from the message when displayed
See https://pkg.go.dev/flag#PrintDefaults
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
1. Fixed log entry sorting in the "Group" view: newest entries are now
displayed at the top again. In v1.18.0, log entries were incorrectly
sorted (oldest first) due to changes related to nanosecond precision
support.
Related issue: #8726
2. Added alphabetical sorting of fields by key in the "Group" view for
easier navigation.
Related issue: #8438
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
Related issue: #8697
Renamed `retentionFilters` to `retentionFilter`.
🛑 Before merging this pull request, the API changes in the enterprise
version need to be merged.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
* set Y-min to 0. This helps to see real spikes in resource usage;
* add text panel explaining how to use query hash;
* support filtering by tenant;
* use `\tvm_slow_query_stats` prefix filter to filter out logs
that mention failed queries with `vm_slow_query_stats` word;
* add panel to show the minimum queried timestamp - see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8828
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Based on the code, an active time series is one that has received at
least one sample during the last hour. It does not include querying.
The first time the docs were updated with this was in
d06aae9454. But even at that time, the
code showed that querying a metric did not make it active.
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
* put important changes on top;
* replace `this issue` with the corresponding issue number to make it clearer;
* consistently format components names.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: Jose Gómez-Sellés <14234281+jgomezselles@users.noreply.github.com>
* display max values instead of cumulative values
to outline anomalies;
* remove unnecessary overrides on column width;
* add corresponding formatting units to displayed values.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
datadb.rb contains logRows shards, which weren't freed up after the data ingestion
for the given per-day datadb is stopped. This leads to a slow memory leak when VictoriaLogs runs
for multiple days without restarts. Avoid this memory leak by freeing up the logRows shards
after converting them to in-memory parts. Re-use the freed up logRows shards via a pool in order
to reduce the pressure on GC.
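The free-and-reuse pattern can be sketched with sync.Pool. logRows here is a simplified stand-in and the helper names are illustrative; the point is that a shard is reset and returned to a pool once its contents are converted to an in-memory part, instead of staying pinned to a per-day datadb:

```go
package main

import (
	"fmt"
	"sync"
)

// logRows is a simplified stand-in for the real datadb shard buffer.
type logRows struct {
	rows []string
}

// The pool lets freed-up shards be re-used instead of pinning memory for the
// lifetime of a per-day datadb, and reduces pressure on GC.
var logRowsPool = sync.Pool{
	New: func() any { return &logRows{} },
}

func getLogRows() *logRows {
	return logRowsPool.Get().(*logRows)
}

// putLogRows resets the shard and returns it to the pool after its contents
// have been converted to an in-memory part.
func putLogRows(lr *logRows) {
	lr.rows = lr.rows[:0]
	logRowsPool.Put(lr)
}

func main() {
	lr := getLogRows()
	lr.rows = append(lr.rows, "entry")
	putLogRows(lr)
	fmt.Println(len(getLogRows().rows))
}
```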
This commit moved all the docs/*.md files to docs/victoriametrics/ folder. This broke direct links to these files
such as https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/vmagent.md .
Add the missing `/victoriametrics/` part here, so the link becomes https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/victoriametrics/vmagent.md .
The move of all the docs/*.md files into docs/victoriametrics/ folder is dubious because of the following reasons:
- It breaks direct links to *.md files like mentioned above. It is impossible to fix such links all over the Internet,
so they will remain broken :(
The best thing we can do is to fix them on the resources we control such as VictoriaMetrics repository.
- It breaks Google indexing of VictoriaMetrics docs. Google index contains old links such as https://docs.victoriametrics.com/vmagent/#features .
Now such links are automatically redirected to https://docs.victoriametrics.com/victoriametrics/vmagent/#features by a javascript on the https://docs.victoriametrics.com/vmagent/ page.
Google doesn't like javascript redirects, since they are frequently used for black hat SEO purposes. This leads to pessimization of VictoriaMetrics docs
in Google search results :( We cannot update the old links all over the Internet in order to avoid the javascript redirect :(
The best thing we can do is to add <meta rel="canonical"> header with the new location of the page at the old url,
and hope Google won't remove VictoriaMetrics docs from its search results. @AndrewChubatiuk , please do this.
- It breaks backwards navigation. When you click the https://docs.victoriametrics.com/vmagent/ link on some page and then press `back` button
in the web browser, it won't return back you to the original page because of the intermediate redirect :( The broken navigation cannot be fixed
for old links located all over the Internet.
- It increases chances of breaking old links left on the Internet in the future, which will lead to 404 Not Found pages
and angry users :(
The sad thing is that we hit the same wall with harmful redirects again :( In the beginning the VictoriaMetrics docs links had .html suffix,
and they were case-sensitive. For example, http://docs.victoriametrics.com/MetricsQL.html#rate . It has been decided for some unknown reason
that it is a good idea to remove the .html suffix and to make all the links lowercase. So now such links are automatically redirected by javascript
to https://docs.victoriametrics.com/metricsql/#rate and then redirected again by another javascript to https://docs.victoriametrics.com/victoriametrics/metricsql/#rate :(
There are many old links all over the Internet (for example, at Reddit, StackOverflow, some internal Wiki pages, etc.). We cannot fix all of them,
so we need to pray these links won't break in the future.
@hagen1778, @tenmozes, @makasim , please make sure we won't hit the same wall in a third time.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595
Use multiple independent logRows shards for storing the pending log entries before converting them to searchable parts.
Every shard is protected by its own mutex, so multiple CPU cores may add multiple log rows into datadb at the same time.
This increases the performance of BenchmarkStorageMustAddRows/rowsPerInsert-1, which ingests log rows one-by-one
from concurrently running goroutines, by 2x.
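The sharding scheme can be sketched as follows. Every shard has its own mutex, so concurrent writers contend on different shards instead of a single datadb-wide lock. Shard selection via a round-robin counter is an assumption for illustration; the real code may pick shards differently:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// rowsShard is one independently locked buffer of pending log rows.
type rowsShard struct {
	mu   sync.Mutex
	rows []string
}

type shardedRows struct {
	shards []rowsShard
	next   atomic.Uint64 // round-robin counter for shard selection
}

func newShardedRows(n int) *shardedRows {
	return &shardedRows{shards: make([]rowsShard, n)}
}

// add appends a row under the selected shard's mutex only, so multiple CPU
// cores may add rows at the same time.
func (sr *shardedRows) add(row string) {
	s := &sr.shards[sr.next.Add(1)%uint64(len(sr.shards))]
	s.mu.Lock()
	s.rows = append(s.rows, row)
	s.mu.Unlock()
}

func (sr *shardedRows) total() int {
	n := 0
	for i := range sr.shards {
		s := &sr.shards[i]
		s.mu.Lock()
		n += len(s.rows)
		s.mu.Unlock()
	}
	return n
}

func main() {
	sr := newShardedRows(4)
	var wg sync.WaitGroup
	for g := 0; g < 8; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 100; i++ {
				sr.add("log row")
			}
		}()
	}
	wg.Wait()
	fmt.Println(sr.total())
}
```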
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Here a description of important terms related to Organizations is
provided. The intention is to leverage a good UI design rather than
explaining all steps needed to perform certain actions.
However, an effort has been made to explain the meaning of internal
states and terms.
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
This commit modifies the logging behavior for client network errors
(e.g., EOFs, timeouts) during the handshake process. They are now logged
as warnings instead of errors, as they are not actionable from the
server's perspective. Here are some examples of such errors.
Timeouts during the initial read phase:
```
2025-04-09T07:08:59.323Z error VictoriaMetrics/lib/vmselectapi/server.go:204 cannot perform vmselect handshake with client "<REDACTED>": cannot read hello: cannot read message with size 11: read tcp4 <REDACTED>-><REDACTED>: i/o timeout; read only 0 bytes
```
EOFs occurring later in the handshake process:
```
2025-04-08T18:01:30.783Z error VictoriaMetrics/lib/vmselectapi/server.go:204 cannot perform vmselect handshake with client "<REDACTED>": cannot read isCompressed flag: cannot read message with size 1: EOF; read only 0 bytes
```
By logging these as warnings, we reduce noise in error logs while
preserving valuable information for debugging.
Previously, if the `cpu.max` file had only the `max` resource defined without
a `period`, it was parsed incorrectly and the error was silently dropped,
even though this syntax is valid and actually used by some container runtimes.
If the period is not defined, the default value of 100_000 must be used.
This commit fixes the parsing function by using the default value for the period.
In addition, it adds a zero value check, which fixes a possible panic if
the period has a 0 value.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8808
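The fixed parsing can be sketched as follows. The function name is illustrative and the real code lives elsewhere; the sketch shows the two fixes: applying the default period of 100000 when it is absent, and guarding against a zero period:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCPUMax parses the cgroup v2 cpu.max file contents. A bare "max" or a
// quota without a period is valid; the default period is 100000, and a zero
// period must not cause a division panic.
func parseCPUMax(data string) (float64, error) {
	const defaultPeriod = 100000
	fields := strings.Fields(data)
	if len(fields) == 0 {
		return 0, fmt.Errorf("empty cpu.max")
	}
	if fields[0] == "max" {
		return -1, nil // no CPU limit configured
	}
	quota, err := strconv.ParseInt(fields[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("cannot parse quota: %w", err)
	}
	period := int64(defaultPeriod)
	if len(fields) > 1 {
		if period, err = strconv.ParseInt(fields[1], 10, 64); err != nil {
			return 0, fmt.Errorf("cannot parse period: %w", err)
		}
	}
	if period <= 0 {
		period = defaultPeriod // guard against division by zero
	}
	return float64(quota) / float64(period), nil
}

func main() {
	v, _ := parseCPUMax("200000") // quota only: default period applies
	fmt.Println(v)
}
```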
This reverts commit fa6a32a39d.
Reason for revert: the broken tests on GOARCH=386 were fixed by skipping the check for the state size
after importing the state of the stats function, since the state size depends on the hardware architecture.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8710
### Describe Your Changes
Two new columns were added to the **Cardinality Explorer** table:
`Requests count`
- Displays how many times a metric was queried since stats collection
began
- Highlighted **red** when the value is `0` or missing
`Last request`
- Shows when the metric was last queried
- Highlighted:
- **Red** if older than **30 days**
- **Yellow** if between **7 and 30 days**
- Shows exact timestamp on hover in `YYYY-MM-DD HH:mm:ss` format
Related issue: #6145

### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
* downsampling: allow using `-downsampling.period=filter:0s:0s` to skip downsampling for time series that match the specified `filter`
* doc minor fix
* lib/storage/downsampling: fix a typo in error message
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
### Describe Your Changes
Fixes incorrect /flags request in vmui when using URL prefixes.
Related issue: #8641
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
Fixes oversized `Query History` button on mobile in logs UI.
Related issue: #8788
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Hide identical series in the legend on the Raw Query page when
deduplication is disabled.
Related issue: #8688
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
* update wording;
* add more details about returned response;
* cover details on counter increments;
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Before, this doc section was related to vmui, by mistake.
Moved it closer to tsdb stats, where it makes more sense to be.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Badges are more a decorative element than a source of useful information.
Some badges break from time to time, giving a misleading impression to users.
It is better to remove them to reduce visual load and confusion.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
HTTP/2 support is used by some S3-compatible storage providers, so
disabling it by default leads to unexpected errors when trying to connect
to the S3 endpoint.
For example, using MinIO as the S3 storage backend: `net/http: HTTP/1.x
transport connection broken: malformed HTTP response`.
HTTP/2 was enabled by default previously, but commit e5f4826 disabled it
by default while fixing an inconsistency.
cc: @valyala
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
### Describe Your Changes
Fixes which are required in order to build FIPS-compliant binaries.
These changes were originally added for enterprise version and synced to
opensource for consistency and easier maintenance.
- consistently use `hash/fnv` at `app/vmalert` when calculating
checksums. Usage of md5 is not allowed in FIPS mode.
- increase encryption keys size used in testing in order to allow tests
to successfully run in FIPS mode
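A minimal sketch of the md5 → `hash/fnv` swap for checksums. The function name is illustrative; FNV is acceptable here because the checksum is used for change detection, not security:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// configChecksum computes a non-cryptographic checksum suitable for change
// detection; unlike md5, hash/fnv is allowed in FIPS mode.
func configChecksum(data []byte) string {
	h := fnv.New64a()
	h.Write(data) // fnv's Hash64.Write never returns an error
	return fmt.Sprintf("%016x", h.Sum64())
}

func main() {
	fmt.Println(configChecksum([]byte("groups: []")))
}
```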
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
The suffix after the ID may change in dashboard settings.
If it changes, the link becomes broken (a dubious decision at grafana.com/grafana/dashboards/ ).
That's why it is better to drop all the suffixes and use Grafana dashboard links ending with IDs.
In this case they are automatically redirected to the url with the proper suffix.
This is a follow-up for 3e4c38c56c
See also the previous commits 9c0863babc and 0a5ffb3bc1
### Describe Your Changes
Previously, it was possible to use any UTF-8 string to specify list of
labels. While this makes it easier to use it also leads to unexpected
parsing results in some cases (see
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8584 as an
example).
Enforce specifying metric in format {label="value"...} in order to avoid
issues with unexpected parsing results.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
This reduces the overhead needed for converting the ingested log entries to searchable in-memory parts
when a small number of log entries is passed to Storage.MustAddRows().
The BenchmarkStorageMustAddRows shows up to 10x performance increase for rowsPerInsert=1,
up to 5x performance increase for rowsPerInsert=10 and up to 2x performance increase for rowsPerInsert=100.
This should reduce CPU usage during data ingestion when every request contains a small number of rows.
…te panel
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
On shutdown, rw client printed two log messages about shutting down and
how many series it flushed on shut down.
This commit removes these log messages for the following reasons:
1. These messages were printed for each `client.cc`, which is set to x2
of available CPUs. This behavior generated a lot of useless logs on
shutdown.
2. Number of flushed series on shutdown doesn't matter to the user.
Instead, it now prints only two log messages: when shutdown starts and
when it ends:
```
^C2025-04-22T07:37:11.932Z info VictoriaMetrics/app/vmalert/main.go:189 service received signal interrupt
2025-04-22T07:37:11.932Z info VictoriaMetrics/app/vmalert/remotewrite/client.go:152 shutting down remote write client: flushing remained series
2025-04-22T07:37:11.933Z info VictoriaMetrics/app/vmalert/remotewrite/client.go:155 shutting down remote write client: finished in 210.917µs
```
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Previously, the metric names stats API falsely assumed that the max
size of a metric name is 256 bytes. But this is a configurable parameter
with a max size of 4096 bytes, which triggered errors during API requests.
This commit replaces the hard-coded 256-byte limit with a common constant:
maxLabelValueSize. It has a 16 MB limit.
In addition, this commit adds a check to the metric name stats tracker:
if the metric name size exceeds the default buffer limit, it is allocated
directly on the heap. This should be a rare case, since most metric names
are 16-64 bytes in size.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8759
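The heap-fallback idea above can be sketched in a few lines of Go. The names (`copyMetricName`, `statBufLen`) are hypothetical illustrations, not the actual tracker code:

```go
package main

import "fmt"

// statBufLen mimics a default shared-buffer size; names longer than the
// remaining buffer space fall back to a fresh heap allocation.
const statBufLen = 256

// copyMetricName returns the (possibly extended) buffer and a copy of name.
// The common case of short metric names (16-64 bytes) reuses the shared
// buffer; oversized names are allocated directly on the heap.
func copyMetricName(buf []byte, name string) ([]byte, []byte) {
	if len(name) <= cap(buf)-len(buf) {
		start := len(buf)
		buf = append(buf, name...) // no reallocation: capacity is sufficient
		return buf, buf[start:]
	}
	// Rare case: the name exceeds the remaining buffer space.
	return buf, []byte(name)
}

func main() {
	buf := make([]byte, 0, statBufLen)
	var copied []byte
	buf, copied = copyMetricName(buf, "http_requests_total")
	fmt.Println(string(copied), len(buf))
}
```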
Throttle remote write retry warning logs to one message per 5 seconds.
This reduces log noise during connectivity issues.
Users can monitor `vmagent_remotewrite_retries_count_total` and
`vmagent_remotewrite_errors_total` metrics for detailed retry rates per
destination if needed. This change prioritizes reducing log volume over
immediate destination-specific error logging in the retry warnings.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8723
VictoriaMetrics executables need the latest available stable release of Go (1.24 currently),
since they use its features. See, for example, GOEXPERIMENT=synctest from the commit 06c26315df .
There is no need to specify the minimum supported Go version for building VictoriaMetrics products,
since Go automatically downloads and uses the version specified at the `go ...` directive inside go.mod
starting from Go1.21. See https://go.dev/doc/toolchain for details.
So the actual minimum needed Go version is Go1.21, which was released 1.5 years ago. It automatically
installs the newer Go versions specified at the `go ...` directive inside go.mod.
Do not reset wc.labels in order to properly keep track of the number of used labels for the scrape,
and properly re-use the same number of wc.labels on subsequent scrapes.
See 12f26668a6 (r155481168)
### Describe Your Changes
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8694
additionally removed container_name, docker network, renamed all
compose, config files for consistency
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
fixes #8707
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Before the change, the vmagent integration tests created their directory
and files inside apptest/tests.
After the change, vmagent is instructed to store all files in a real
temporary directory, which is automatically deleted after the tests
complete.
Using this package makes it possible to manipulate time. In this particular case, it
lets us advance the time 61 seconds forward instantly.
A few side changes were necessary:
- Do not use fasttime in unit tests. The fasttime package starts a
goroutine outside the test bubble which causes the clock to be real, not
fake.
- Stop the time.Ticker explicitly and also stop idbNext. These two
create goroutines with infinite loops, which cause the unit tests that
use synctest to hang forever. All goroutines created inside the bubble
must exit in order for the synctest to finish.
- synctest is an experimental package and requires an environment
variable to be set. The Makefile was changed to set it.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
### Describe Your Changes
The write concurrency limiter in ReadUncompressedData was previously
removed in
22d1b916bf
to avoid suboptimal behavior in certain scenarios. However, follow-up
reports—including issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8674 and
production feedback from VictoriaMetrics Cloud—indicated a noticeable
degradation in performance after its removal.
To mitigate these regressions, this commit reintroduces the concurrency
limiter. A long-term, more optimal solution will be explored separately
in issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8728.
TODO:
* [x] Changelog
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: hagen1778 <roman@victoriametrics.com>
The doc was incorrectly ported into the wrong directory after
a113516588
This change moves it to the victoriametrics dir.
While there, updated the order of some pages to couple them by meaning.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Fixed table sorting logic and added unit tests for descendingComparator.
Values are now correctly sorted by type: number, date, or string.
Related issue: #8606
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Related issue:
[#8604](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8604)
Added a download logs button to VictoriaLogs, which allows you to export
logs in the following formats: csv, json.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
This commit adds new fields - `requestsCount` and `lastRequestTimestamp` -
to the series count by metric names stats.
This allows displaying additional stats on the explore cardinality page.
Stats are only added if the `storage.trackMetricNameStats` flag is set.
This change requires an update to the RPC protocol in order to properly
marshal data.
In addition, this commit adds integration tests to TSDB stats API.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6145
This commit adds the `search.logSlowQueryStats=<duration>` cmd-line flag to vmselect.
It reads stats from the eval function and doesn't slow down query execution.
Log line has the following structure:
vm_slow_query_stats type=%s query=%q query_hash=%d start_ms=%d end_ms=%d step_ms=%d range_ms=%d tenant=%q execution_duration_ms=%v series_fetched=%d samples_fetched=%d bytes=%d memory_estimated_bytes=%d
This feature is only available for enterprise version.
### Describe Your Changes
Changelog note about the experimental v1.22.0 release, which solves a deadlock
issue on multicore systems by turning parallelization off completely until
a proper refactor is made to bring it back without reintroducing the risk of
the deadlock in child processes.
P.s. other references and guides were not updated to the experimental tag,
as long as it has the downside of dropped speed. The links will be updated
once we have parallelization properly turned on.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
This change unblocks testing pipelines in CI for other contributions.
The tests are commented out because I don't have a full understanding of
how to fix them.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8710
---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
Follow-up to
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462
Addressed review comments:
- Log panic with FATAL prefix to indicate possible on-disk data
corruption
- Moved version bump line to the tip block (v1.114.0 has already been
released) in changelog
- Removed duplicate vmagent entry from targets list from Makefile
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
### Describe Your Changes
Under heavy load, vmagent's write concurrency limiter
(2ab53acce4/lib/writeconcurrencylimiter/concurrencylimiter.go (L111))
queues incoming requests. If a client's timeout is shorter than the wait
time in the
queue, the client may close the connection before vmagent starts
processing it. When vmagent then tries to read the request body, it
encounters an ambiguous `unexpected EOF` error
(https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8675).
This commit adds more context to such errors to help users diagnose and
resolve
the issue when it's related to vmagent's own load and queuing behavior.
Possible user actions include:
- Lowering `-insert.maxQueueDuration` below the client's timeout.
- Increasing the client-side timeout, if applicable.
- Scaling up vmagent (e.g., adding more CPU resources).
- Increasing `-maxConcurrentInserts` if CPU capacity allows.
Steps to reproduce:
https://gist.github.com/makasim/6984e20f57bfd944411f56a7ebe5b6bf
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
This commit improves how vmagent selects the remote write protocol.
Previously, vmagent [performed a handshake
probe](0ff1a3b154/lib/protoparser/protoparserutil/vmproto_handshake.go (L11))
at
[startup](0ff1a3b154/app/vmagent/remotewrite/client.go (L173)):
- If the probe succeeded, it used the VictoriaMetrics (VM) protocol.
- If the probe failed, it downgraded to the Prometheus protocol.
- No protocol changes occurred after the initial probe at runtime.
However, this approach had limitations:
- If vmstorage was unavailable during vmagent startup, vmagent would
immediately downgrade to the Prometheus protocol, leading to higher
network usage until vmagent restarted. This case has been reported in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7615.
- If the remote write server was updated or downgraded (e.g., during a
fallback or migration), vmagent would not detect the protocol change. It
would continue retrying failed requests and eventually drop them.
This required a restart of vmagent to pick up the new protocol.
This commit introduces a more adaptive mechanism.
vmagent always starts with the VM protocol and downgrades to the
Prometheus protocol only if an unsupported media type or bad request
response is received.
When this happens, the protocol is downgraded for all future requests.
In-flight requests are re-packed from Zstd to Snappy and retried
immediately.
Snappy-encoded requests are dropped if an unsupported media type or bad
request is received (no retrying).
Additionally, the in-memory and persisted queues may mix snappy- and
zstd-encoded blocks. The proper encoding is determined before sending
via the encoding.IsZstd function.
TODO:
* [x] Add tests
* [x] Update documentation
* [x] Changelog
* [x] Research on
[content-type](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786918054),
[accept-encoding](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786923382)
Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7615#top
issue.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Previously, if vmagent was built with CGO_ENABLED=0, it could not start and reported a runtime error:
`Kafka client is not supported at systems without CGO`
This error was triggered even if `-kafka.consumer.topic` was not
provided. CGO_ENABLED=0 is the default build option for linux/arm and some other archs.
This commit properly inits the kafka consumer by checking whether `-kafka.consumer.topic` is set.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6019
'authKey' is a well-known url and form param for VictoriaMetrics
components' authorization. Previously, it could be printed to stdout
via the httpserver error logger. This makes the authKey insecure and hard to
use.
This commit prevents logging the authKey defined at PostForm or as part
of url.Query.
It's recommended to transfer the authKey via PostForm, and that should be
implemented in separate PRs.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5973
---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
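The redaction idea can be sketched like this; `redactAuthKey` is an illustrative helper, not the actual httpserver code:

```go
package main

import (
	"fmt"
	"net/url"
)

// redactAuthKey replaces the authKey query param value so request URLs
// can be logged safely.
func redactAuthKey(rawURL string) string {
	u, err := url.Parse(rawURL)
	if err != nil {
		return rawURL // leave unparseable URLs untouched
	}
	q := u.Query()
	if q.Has("authKey") {
		q.Set("authKey", "secret")
		u.RawQuery = q.Encode()
	}
	return u.String()
}

func main() {
	fmt.Println(redactAuthKey("http://localhost:8428/-/reload?authKey=topsecret"))
	// The logged URL no longer exposes the real key.
}
```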
Previously, if ProfileName was set to an empty value (the default), the AWS S3
lib ignored any profile config defined with `-configProfilePath`.
This commit correctly configures client options and sets the profile name only
if it is set to a non-empty value.
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8668
Currently, requests failing due to network timeout would receive "200
OK" while producing a warning log message about the timeout. This
behaviour is confusing and might produce unexpected issues as it is not
possible to retry errors properly.
Change this to return a "502 Bad Gateway" response so that the error can be
handled by the client.
See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8621
Config for testing:
```
unauthorized_user:
url_prefix: "http://example.com:9800"
```
Before the change:
```
* Trying 127.0.0.1:8427...
* Connected to 127.0.0.1 (127.0.0.1) port 8427
* using HTTP/1.x
> HEAD /api/v1/query HTTP/1.1
> Host: 127.0.0.1:8427
> User-Agent: curl/8.12.1
> Accept: */*
>
* Request completely sent off
/* NOTE: 30 seconds timeout passes */
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Vary: Accept-Encoding
Vary: Accept-Encoding
< X-Server-Hostname: pc
X-Server-Hostname: pc
< Date: Tue, 01 Apr 2025 08:54:05 GMT
Date: Tue, 01 Apr 2025 08:54:05 GMT
<
* Connection #0 to host 127.0.0.1 left intact
```
After:
```
* Trying 127.0.0.1:8427...
* Connected to 127.0.0.1 (127.0.0.1) port 8427
* using HTTP/1.x
> HEAD /api/v1/query HTTP/1.1
> Host: 127.0.0.1:8427
> User-Agent: curl/8.12.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 502 Bad Gateway
HTTP/1.1 502 Bad Gateway
< Content-Type: text/plain; charset=utf-8
Content-Type: text/plain; charset=utf-8
< Vary: Accept-Encoding
Vary: Accept-Encoding
< X-Content-Type-Options: nosniff
X-Content-Type-Options: nosniff
< X-Server-Hostname: pc
X-Server-Hostname: pc
< Date: Tue, 01 Apr 2025 09:13:57 GMT
Date: Tue, 01 Apr 2025 09:13:57 GMT
< Content-Length: 109
Content-Length: 109
<
* Connection #0 to host 127.0.0.1 left intact
```
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
### Describe Your Changes
Log when the data response from vmselect is partial during
rule(recording, alertingrule) evaluations.
vmselect returns `isPartial: true` in case data is not fully fetched
from scattered vmstorages. At the time of rule evaluation, results may be drifting
apart from real values due to missing points. This is an important event
that should be logged to let users see how often it happens, as
it may lead to false positive alerts.
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
---------
Signed-off-by: emreya <emre.yazici@adyen.com>
Signed-off-by: emreya <e.yazici1990@gmail.com>
Signed-off-by: Emre Yazici <e.yazici1990@gmail.com>
(cherry picked from commit 56f60e8be9)
When creating and listing snapshots, panic instead of returning an error
since errors are not recoverable anyway.
Also do not cleanup the filesystem on panic. Leave as is for further
manual inspection.
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
This reduces CPU usage by up to 30% in exchange for a 10% increase in RAM usage
when scraping thousands of targets, which expose millions of metrics in total.
This looks like a good tradeoff after the commit edac875179 ,
which reduced RAM usage by more than 10%, so the final RAM usage for vmagent
is still lower than the RAM usage at v1.114.0 by ~15%, while CPU usage drops by 30%.
This reduces memory usage when reading large response bodies because the underlying buffer
doesn't need to be re-allocated during the read of large response body in the buffer.
Also decompress response body under the processScrapedDataConcurrencyLimitCh .
This reduces CPU usage and RAM usage a bit when scraping thousands of targets.
This reduces memory usage for vmagent when scraping big number of targets at the cost of slightly higher CPU usage.
The increased CPU usage can be decreased by disabling tracking of stale markers either via -promscrape.noStaleMarkers
command-line flag or via `no_stale_markers: true` option at the scrape config pointed by -promscrape.config command-line flag.
See https://docs.victoriametrics.com/vmagent/#prometheus-staleness-markers
The pool is used mostly for obtaining byte buffers for responses from scrape targets.
There are no responses smaller than 256 bytes in practice, so there is no sense in maintaining
pools for byte slices up to 64 and 128 bytes.
Previously maxLabelsLen could be updated with a smaller value after it was updated to a bigger value by concurrently running goroutines.
Prevent this by loading the latest maxLabelsLen value and updating it only if it is smaller than the current len(wc.labels)
before the exit from the callback passed to stream.Parse.
While at it, return early from the callback on the sample_limit exceeding error,
since the rest of the code in the callback becomes no-op after wc.reset().
This simplifies following the logic in the code a bit.
Also remove outdated misleading comment in front of sw.pushData() call inside callbacks passed to stream.Parse.
This comment has no sense after every callback start working with its own goroutine-local wc.
Start with writeRequestCtx containing up to 256 labels instead of 8 labels,
since a typical response from scrape target contains much more than 8 labels across all the exposed metrics.
Do not pre-allocate labels at writeRequestCtx, since they are pre-allocated inside writeRequestCtx.addRows(),
together with the pre-allocation of samples and writeRequest.Timeseries.
The applySeriesLimit applies the limit to samples stored at writeRequestCtx,
while the scrapeWork is used as read-only configuration source.
That's why it is better from maintainability PoV to attach the applySeriesLimit
method to writeRequestCtx.
While at it, clarify docs for the applySeriesLimit function.
The rows are added to writeRequestCtx, while the scrapeWork is used only as a read-only configuration source.
So it is better from maintainability PoV to attach addRows function to writeRequestCtx instead of scrapeWork.
Also attach addAutoMetrics to writeRequestCtx instead of scrapeWork due to the same reason:
addAutoMetrics adds metrics to the writeRequestCtx, while using scrapeWork as a read-only configuration source.
While at it, remove tmpRow from scrapeWork struct in order to reduce the complexity of this struct.
areIdenticalSeries doesn't access scrapeWork members except sw.Config of *ScrapeWork type.
It is better from a maintainability PoV to attach this method to ScrapeWork then.
While at it, replace sw.Config with cfg shortcut at scrapeWork.processDataOneShot()
and scrapeWork.processDataInStreamMode().
Having `victoriametrics` or `victorialogs` tags should be enough for
filtering dashboards related to VictoriaMetrics components.
Related ticket
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8618
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Support for the latest prometheus/common is not released yet, so pin to the previous version.
Related commit at prometheus/prometheus: 95f49dd84b
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
This allows verifying how the benchmark performance scales with the number of available CPU cores
and makes the results of the benchmark consistent with other BenchmarkScrapeWorkScrapeInternal* benchmarks.
Also reduce the amount of memory allocations inside the generateScrape() function in order to reduce
measurement noise during the BenchmarkScrapeWorkScrapeInternalStreamBigData run.
This is a follow-up after c05ffa906d
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8515
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8159
It is always better to run benchmarks in parallel on all the available CPU cores
in order to see how their performance scales with the number of CPU cores (GOMAXPROCS).
The commit also performs the following modifications:
- Removes the dependency on the scrapeWork from the getLabelsHash() function.
- Makes sure that the benchmark cannot be optimized out by the compiler, by introducing a dependency
on a global Sink variable. Previously the getLabelsHash() function call could be optimized out
by the compiler, since this call has no side effects, and the returned result is ignored.
- Reduces the amount of memory allocations inside the BenchmarkScrapeWorkGetLabelsHash
when preparing the labels for the benchmark. This should reduce measurements' noise during the benchmark.
This is a follow-up for c05ffa906d
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8515
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8159
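The parallel-benchmark-plus-Sink pattern can be sketched as below. `labelsHash` is a stand-in FNV-1a hash, not the real getLabelsHash():

```go
package main

import (
	"fmt"
	"sync/atomic"
	"testing"
)

// Sink is a package-level variable; accumulating results into it gives the
// benchmarked call an observable side effect, so the compiler cannot
// optimize it away.
var Sink atomic.Uint64

// labelsHash is an illustrative FNV-1a hash over label strings.
func labelsHash(labels []string) uint64 {
	h := uint64(14695981039346656037) // FNV offset basis
	for _, s := range labels {
		for i := 0; i < len(s); i++ {
			h ^= uint64(s[i])
			h *= 1099511628211 // FNV prime
		}
	}
	return h
}

func BenchmarkLabelsHash(b *testing.B) {
	labels := []string{"job", "node_exporter", "instance", "host-1:9100"}
	// RunParallel spreads b.N iterations across GOMAXPROCS goroutines,
	// showing how performance scales with the number of CPU cores.
	b.RunParallel(func(pb *testing.PB) {
		var local uint64
		for pb.Next() {
			local += labelsHash(labels)
		}
		Sink.Add(local) // publish the local sum
	})
}

func main() {
	res := testing.Benchmark(BenchmarkLabelsHash)
	fmt.Println(res.N > 0)
}
```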
* vmgateway: properly set the `Host` header when routing requests to `-write.url` and `-read.url`
* Update docs/victoriametrics/changelog/CHANGELOG.md
---------
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: properly attach tenant labels `vm_account_id` and `vm_project_id` to alerting rules when enabling `-clusterMode`
Previously, these labels were lost in alert messages to Alertmanager. Bug was introduced in [v1.112.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.112.0).
Previously the buffer was increased by 30% after it became 50% full.
For example, if more than 5MB of data is read into a 10MB buffer, then its size
was increased to 13MB, leading to 13MB-5MB = 8MB of waste.
This translates to 8MB/5MB = 160% waste in the worst case.
The updated algorithm increases the buffer by 30% after it becomes ~94% full.
This means that if more than 9.4MB of data is read into a 10MB buffer,
then its size is increased to 13MB, leading to 13MB-9.4MB = 3.6MB of waste.
This translates to 3.6MB / 9.4MB = ~38% waste in the worst case.
This should reduce memory usage when vmagent reads big responses from scrape targets.
While at it, properly append the data to buffer if it already has more than 4KiB of data.
Previously the data over 4KiB in the buffer was lost after ReadFrom call.
This is a follow-up for f28f496a9d
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6761
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6759
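The waste arithmetic above can be verified with a few lines of Go (a toy calculation, not the actual buffer code):

```go
package main

import "fmt"

// waste returns the unused space left after growing a buffer of capMB
// capacity by 30% once it holds fullMB of data. Numbers follow the
// commit's 10MB example.
func waste(capMB, fullMB float64) float64 {
	newCap := capMB * 1.3 // grow by 30%
	return newCap - fullMB
}

func main() {
	// Old policy: grow once the 10MB buffer is >50% full.
	fmt.Printf("old: %.1fMB wasted (%.0f%% of read data)\n", waste(10, 5), waste(10, 5)/5*100)
	// New policy: grow once the buffer is ~94% full.
	fmt.Printf("new: %.1fMB wasted (%.0f%% of read data)\n", waste(10, 9.4), waste(10, 9.4)/9.4*100)
}
```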
It is faster to read the whole data and then decompress it in one go for zstd and snappy encodings.
This reduces the number of potential read() syscalls and decompress CGO calls needed
for reading and decompressing the data.
### Describe Your Changes
replace /VictoriaLogs aliases to /victorialogs as generated directories
are anyway renamed to victorialogs before deployment
need to merge this PR immediately after
https://github.com/VictoriaMetrics/vmdocs/pull/116
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
The panel was producing wrong predictions, as it is almost impossible
to make precise predictions without making too expensive queries.
More details on reasoning why it is better to remove it than fix it
is here https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8492.
This change also removes ETA panels from alerting rules annotations.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
### Describe Your Changes
remove unused files from docs
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
`its'` -> `its`
### Checklist
The following checks are **mandatory**:
- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
Update dependencies to latest versions
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
### Describe Your Changes
- add smaller search weights for changelog content
- replace `<details>` tag with collapse shortcode
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
related PR https://github.com/VictoriaMetrics/vmdocs/pull/115
### Describe Your Changes
Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.
### Checklist
The following checks are **mandatory**:
- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
- Move lib/httputil.Transport to lib/promauth.NewTLSTransport. Remove the first arg to this function (URL),
since it has zero relation to the created transport.
- Move lib/httputil.TLSConfig to lib/promauth.NewTLSConfig. Re-use the existing functionality
from lib/promauth.Config for creating TLS config. This enables the following features:
- Ability to load key, cert and CA files from http urls.
- Ability to change the key, cert and CA files without the need to restart the service.
It automatically re-loads the new files after they change.
- Avoid a data race when multiple goroutines access and update roundTripper.trBase inside roundTripper.getTransport().
The way to go is to make sure the roundTripper.trBase is updated only during roundTripper creation,
and then can be only read without updating.
- Use the http.DefaultTransport for http2 client connections at Kubernetes service discovery.
Previously golang.org/x/net/http2.Transport was used there. This had the following issues:
- An additional dependency on golang.org/x/net/http2.
- Missing initialization of Transport.DialContext with netutil.Dialer.DialContext for http2 client.
- Missing initialization of Transport.TLSHandshakeTimeout for http2 client.
- Introduction of the lib/promauth.Config.NewRoundTripperFromGetter() method, which is hard to use properly.
- Unnecessary complications of the lib/promauth.roundTripper, which led to the data race described above.
- Avoid a data race when multiple goroutines access and update tls config shared between multiple
net/http.Transport instances at the TLSClientConfig field. The way to go is to always make a copy of the tls config
before assigning it to the net/http.Transport.TLSClientConfig field.
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5971
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7114
It is much better to panic instead of returning an error on programming error (aka BUG),
since this significantly increases chances that the bug will be noticed, reported and fixed ASAP.
The returned error can be ignored, even if it is logged, while panic is much harder to ignore.
The code must always panic instead of returning errors when any programming error (aka unexpected state) is detected.
This is a follow-up for the commit 9feee15493
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6783
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6771
The bugfix was introduced at 682e8d8af5 and was included as a part of the package update at b1fab92d, but lacked a changelog entry.
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
- update references to the latest version
- port LTS changelog
- revert changes from 47201ace96 and overwrite by an actual latest enterprise release version instead of latest LTS release
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Before filling a bug report it would be great to [upgrade](https://docs.victoriametrics.com/#how-to-upgrade)
Before filling a bug report it would be great to [upgrade](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-upgrade)
to [the latest available release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
and verify whether the bug is reproducible there.
It's also recommended to read the [troubleshooting docs](https://docs.victoriametrics.com/troubleshooting/) first.
It's also recommended to read the [troubleshooting docs](https://docs.victoriametrics.com/victoriametrics/troubleshooting/) first.
- type: textarea
  id: describe-the-bug
  attributes:
@@ -64,8 +64,8 @@ body:
* [Grafana dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176)
See how to set up monitoring here:
* [monitoring for single-node VictoriaMetrics](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#monitoring)
* [monitoring for VictoriaMetrics cluster](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/#monitoring)
VictoriaMetrics is a fast, cost-saving, and scalable solution for monitoring and managing time series data. It delivers high performance and reliability, making it an ideal choice for businesses of all sizes.
@@ -21,10 +21,10 @@ VictoriaMetrics is a fast, cost-saving, and scalable solution for monitoring and
Here are some resources and information about VictoriaMetrics:
- Deployment types: [Single-node version](https://docs.victoriametrics.com/), [Cluster version](https://docs.victoriametrics.com/victoriametrics/cluster-victoriametrics/), and [Enterprise version](https://docs.victoriametrics.com/victoriametrics/enterprise/)
- Changelog: [CHANGELOG](https://docs.victoriametrics.com/victoriametrics/changelog/), and [How to upgrade](https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#how-to-upgrade-victoriametrics)
Yes, we open-source both the single-node VictoriaMetrics and the cluster version.
@@ -35,22 +35,22 @@ VictoriaMetrics is optimized for timeseries data, even when old time series are
* **Long-term storage for Prometheus** or as a drop-in replacement for Prometheus and Graphite in Grafana.
* **Powerful stream aggregation**: Can be used as a StatsD alternative.
* **Ideal for big data**: Works well with large amounts of time series data from APM, Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data and various [Enterprise workloads](https://docs.victoriametrics.com/victoriametrics/enterprise/).
* **Query language**: Supports both PromQL and the more performant MetricsQL.
* **Easy to set up**: No dependencies, single [small binary](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d), configuration through command-line flags, but the default is also fine-tuned; backup and restore with [instant snapshots](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
* **Global query view**: Multiple Prometheus instances or any other data sources may ingest data into VictoriaMetrics, and all the data can be queried via a single query.
* **Various protocols**: Supports metric scraping, ingestion and backfilling via various protocols:
* [InfluxDB line protocol](https://docs.victoriametrics.com/#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) over HTTP, TCP and UDP.
* [Graphite plaintext protocol](https://docs.victoriametrics.com/#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) with [tags](https://graphite.readthedocs.io/en/latest/tags.html#carbon).
* [OpenTSDB put message](https://docs.victoriametrics.com/#sending-data-via-telnet-put-protocol).
- **Multiple retentions**: Reducing storage costs by specifying different retentions for different datasets.
- **Downsampling**: Reducing storage costs and increasing performance for queries over historical data.
- **Stable releases** with long-term support lines ([LTS](https://docs.victoriametrics.com/victoriametrics/lts-releases/)).
- **Comprehensive support**: First-class consulting, feature requests and technical support provided by the core VictoriaMetrics dev team.
- Many other features, which you can read about on [the Enterprise page](https://docs.victoriametrics.com/victoriametrics/enterprise/).
[Contact us](mailto:info@victoriametrics.com) if you need enterprise support for VictoriaMetrics, or request a free trial license [here](https://victoriametrics.com/products/enterprise/trial/). Enterprise binaries can be downloaded from [GitHub Releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest).
@@ -77,7 +77,7 @@ Some good benchmarks VictoriaMetrics achieved:
* **Minimal memory footprint**: handling millions of unique timeseries with [10x less RAM](https://medium.com/@valyala/insert-benchmarks-with-inch-influxdb-vs-victoriametrics-e31a41ae2893) than InfluxDB, up to [7x less RAM](https://valyala.medium.com/prometheus-vs-victoriametrics-benchmark-on-node-exporter-metrics-4ca29c75590f) than Prometheus, Thanos or Cortex.
* **Highly scalable and performant** for [data ingestion](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b) and [querying](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4), [20x outperforms](https://medium.com/@valyala/insert-benchmarks-with-inch-influxdb-vs-victoriametrics-e31a41ae2893) InfluxDB and TimescaleDB.
* **High data compression**: [70x more data points](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4) may be stored into limited storage than TimescaleDB, [7x less storage](https://valyala.medium.com/prometheus-vs-victoriametrics-benchmark-on-node-exporter-metrics-4ca29c75590f) space is required than Prometheus, Thanos or Cortex.
* **Reducing storage costs**: [10x more effective](https://docs.victoriametrics.com/victoriametrics/casestudies/#grammarly) than Graphite according to the Grammarly case study.
* **A single-node VictoriaMetrics** can replace medium-sized clusters built with competing solutions such as Thanos, M3DB, Cortex, InfluxDB or TimescaleDB. See [VictoriaMetrics vs Thanos](https://medium.com/@valyala/comparing-thanos-to-victoriametrics-cluster-b193bea1683), [Measuring vertical scalability](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae), [Remote write storage wars - PromCon 2019](https://promcon.io/2019-munich/talks/remote-write-storage-wars/).
* **Optimized for storage**: [Works well with high-latency IO](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b) and low IOPS (HDD and network storage in AWS, Google Cloud, Microsoft Azure, etc.).
"See https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt . "+
"With enabled proxy protocol, the http server cannot serve the regular /metrics endpoint. Use -pushmetrics.url for metrics pushing")
minScrapeInterval = flag.Duration("dedup.minScrapeInterval", 0, "Leave only the last sample in every time series per each discrete interval "+
	"equal to -dedup.minScrapeInterval > 0. See also -streamAggr.dedupInterval and https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#deduplication")
dryRun = flag.Bool("dryRun", false, "Whether to check config files without running VictoriaMetrics. The following config files are checked: "+
	"-promscrape.config, -relabelConfig and -streamAggr.config. Unknown config entries aren't allowed in -promscrape.config by default. "+
	"This can be changed with -promscrape.config.strictParse=false command-line flag")
@@ -45,7 +45,7 @@ var (
finalDedupScheduleInterval = flag.Duration("storage.finalDedupScheduleCheckInterval", time.Hour, "The interval for checking when final deduplication process should be started. "+
	"Storage unconditionally adds 25% jitter to the interval value on each check evaluation. "+
	"Changing the interval to bigger values may delay downsampling and deduplication for historical data. "+
	"See also https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#deduplication")
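The last-sample-per-interval semantics that `-dedup.minScrapeInterval` documents can be modeled in a few lines. This is an illustrative sketch under the stated assumption (samples sorted by timestamp, interval buckets aligned to epoch), not the actual VictoriaMetrics implementation:

```go
package main

import "fmt"

type sample struct {
	ts    int64 // unix milliseconds
	value float64
}

// dedupSamples keeps only the last sample per discrete interval of
// intervalMs milliseconds. The input must be sorted by timestamp.
func dedupSamples(samples []sample, intervalMs int64) []sample {
	if intervalMs <= 0 || len(samples) == 0 {
		return samples
	}
	var out []sample
	for _, s := range samples {
		bucket := s.ts / intervalMs
		if len(out) > 0 && out[len(out)-1].ts/intervalMs == bucket {
			out[len(out)-1] = s // the later sample wins within the interval
			continue
		}
		out = append(out, s)
	}
	return out
}

func main() {
	in := []sample{{1000, 1}, {1500, 2}, {2500, 3}}
	fmt.Println(dedupSamples(in, 1000)) // one sample per 1-second bucket
}
```

With a 1000ms interval, the samples at 1000ms and 1500ms fall into the same bucket, so only the later one survives.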
t.Fatalf("unexpected number of csv lines; want=%d\ngot=%d test=%s.%s\nresponse=%q", len(data), test.ExpectedResultLinesCount, q, test.Issue, strings.Join(data, "\n"))
disableInsert = flag.Bool("internalinsert.disable", false, "Whether to disable /internal/insert HTTP endpoint")
maxRequestSize = flagutil.NewBytes("internalinsert.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single request, which can be accepted at /internal/insert HTTP endpoint")
decolorizeFieldsTCP = flagutil.NewArrayString("syslog.decolorizeFields.tcp", "Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.tcp. "+
decolorizeFieldsUDP = flagutil.NewArrayString("syslog.decolorizeFields.udp", "Fields to remove ANSI color codes across logs ingested via the corresponding -syslog.listenAddr.udp. "+
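The decolorizing behavior that `-syslog.decolorizeFields.*` describes amounts to stripping ANSI color escape sequences from field values. A hedged sketch follows; `ansiRe` and `decolorize` are illustrative names, not the actual implementation, and the pattern covers only SGR color codes:

```go
package main

import (
	"fmt"
	"regexp"
)

// ansiRe matches SGR escape sequences such as "\x1b[31m" used for terminal colors.
var ansiRe = regexp.MustCompile(`\x1b\[[0-9;]*m`)

// decolorize removes ANSI color codes from a log field value.
func decolorize(s string) string {
	return ansiRe.ReplaceAllString(s, "")
}

func main() {
	fmt.Println(decolorize("\x1b[31merror\x1b[0m: disk full")) // prints "error: disk full"
}
```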
addr = flag.String("addr", "stdout", "HTTP address to push the generated logs to; if it is set to stdout, then logs are generated to stdout")
workers = flag.Int("workers", 1, "The number of workers to use to push logs to -addr")
start = newTimeFlag("start", "-1d", "Generated logs start from this time; see https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#timestamp-formats")
end = newTimeFlag("end", "0s", "Generated logs end at this time; see https://docs.victoriametrics.com/victoriametrics/single-server-victoriametrics/#timestamp-formats")
activeStreams = flag.Int("activeStreams", 100, "The number of active log streams to generate; see https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields")
totalStreams = flag.Int("totalStreams", 0, "The number of total log streams; if -totalStreams > -activeStreams, then some active streams are substituted with new streams "+
minFreeDiskSpaceBytes = flagutil.NewBytes("storage.minFreeDiskSpaceBytes", 10e6, "The minimum free disk space at -storageDataPath after which "+
	"the storage stops accepting new data")
forceMergeAuthKey = flagutil.NewPassword("forceMergeAuthKey", "authKey, which must be passed in query string to /internal/force_merge . It overrides -httpAuth.* . "+
	"See https://docs.victoriametrics.com/victorialogs/#forced-merge")
forceFlushAuthKey = flagutil.NewPassword("forceFlushAuthKey", "authKey, which must be passed in query string to /internal/force_flush . It overrides -httpAuth.* . "+
	"See https://docs.victoriametrics.com/victorialogs/#forced-flush")
storageNodeAddrs = flagutil.NewArrayString("storageNode", "Comma-separated list of TCP addresses for storage nodes to route the ingested logs to and to send select queries to. "+
	"If the list is empty, then the ingested logs are stored and queried locally from -storageDataPath")
insertConcurrency = flag.Int("insert.concurrency", 2, "The average number of concurrent data ingestion requests, which can be sent to every -storageNode")
insertDisableCompression = flag.Bool("insert.disableCompression", false, "Whether to disable compression when sending the ingested data to -storageNode nodes. "+
	"Disabled compression reduces CPU usage at the cost of higher network usage")
selectDisableCompression = flag.Bool("select.disableCompression", false, "Whether to disable compression for select query responses received from -storageNode nodes. "+
	"Disabled compression reduces CPU usage at the cost of higher network usage")
storageNodeUsername = flagutil.NewArrayString("storageNode.username", "Optional basic auth username to use for the corresponding -storageNode")
storageNodePassword = flagutil.NewArrayString("storageNode.password", "Optional basic auth password to use for the corresponding -storageNode")
storageNodePasswordFile = flagutil.NewArrayString("storageNode.passwordFile", "Optional path to basic auth password to use for the corresponding -storageNode. "+
	"The file is re-read every second")
storageNodeBearerToken = flagutil.NewArrayString("storageNode.bearerToken", "Optional bearer auth token to use for the corresponding -storageNode")
storageNodeBearerTokenFile = flagutil.NewArrayString("storageNode.bearerTokenFile", "Optional path to bearer token file to use for the corresponding -storageNode. "+
	"The token is re-read from the file every second")
storageNodeTLS = flagutil.NewArrayBool("storageNode.tls", "Whether to use TLS (HTTPS) protocol for communicating with the corresponding -storageNode. "+
	"By default communication is performed via HTTP")
storageNodeTLSCAFile = flagutil.NewArrayString("storageNode.tlsCAFile", "Optional path to TLS CA file to use for verifying connections to the corresponding -storageNode. "+
	"By default, system CA is used")
storageNodeTLSCertFile = flagutil.NewArrayString("storageNode.tlsCertFile", "Optional path to client-side TLS certificate file to use when connecting "+
	"to the corresponding -storageNode")
storageNodeTLSKeyFile = flagutil.NewArrayString("storageNode.tlsKeyFile", "Optional path to client-side TLS certificate key to use when connecting to the corresponding -storageNode")
storageNodeTLSServerName = flagutil.NewArrayString("storageNode.tlsServerName", "Optional TLS server name to use for connections to the corresponding -storageNode. "+
	"By default, the server name from -storageNode is used")
storageNodeTLSInsecureSkipVerify = flagutil.NewArrayBool("storageNode.tlsInsecureSkipVerify", "Whether to skip tls verification when connecting to the corresponding -storageNode")
)
var localStorage *logstorage.Storage
var localStorageMetrics *metrics.Set
var netstorageInsert *netinsert.Storage
var netstorageSelect *netselect.Storage
// Init initializes vlstorage.
//
// Stop must be called when vlstorage is no longer needed
func Init() {
	if strg != nil {
		logger.Panicf("BUG: Init() has been already called")
	}
	if len(*storageNodeAddrs) == 0 {
		initLocalStorage()
	} else {
		initNetworkStorage()
	}
}
func initLocalStorage() {
	if localStorage != nil {
		logger.Panicf("BUG: initLocalStorage() has been already called")
	}
	if retentionPeriod.Duration() < 24*time.Hour {
@@ -63,60 +110,157 @@ func Init() {
}
logger.Infof("opening storage at -storageDataPath=%s", *storageDataPath)
logger.Infof("forced merge for partition_prefix=%q has been started", partitionNamePrefix)
startTime := time.Now()
localStorage.MustForceMerge(partitionNamePrefix)
logger.Infof("forced merge for partition_prefix=%q has been successfully finished in %.3f seconds", partitionNamePrefix, time.Since(startTime).Seconds())
logger.Warnf("skipping too long log entry, since its length exceeds %d bytes; the actual log entry length is %d bytes; log entry contents: %s", maxInsertBlockSize, len(b), b)
fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/victoriametrics/vmagent/'>https://docs.victoriametrics.com/victoriametrics/vmagent/</a></br>")
fmt.Fprintf(w, "Useful endpoints:</br>")
httpserver.WriteAPIHelp(w, [][2]string{
	{"targets", "status for discovered active targets"},
forcePromProto = flagutil.NewArrayBool("remoteWrite.forcePromProto", "Whether to force Prometheus remote write protocol for sending data "+
	"to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol")
forceVMProto = flagutil.NewArrayBool("remoteWrite.forceVMProto", "Whether to force VictoriaMetrics remote write protocol for sending data "+
	"to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol")
rateLimit = flagutil.NewArrayInt("remoteWrite.rateLimit", 0, "Optional rate limit in bytes per second for data sent to the corresponding -remoteWrite.url. "+
	"By default, the rate limit is disabled. It can be useful for limiting load on remote storage when big amounts of buffered data "+
@@ -87,7 +90,8 @@ type client struct {
remoteWriteURL string
// Whether to use VictoriaMetrics remote write protocol for sending the data to remoteWriteURL
// Just drop block on 409 and 400 status codes like Prometheus does.
if statusCode == 409 {
	logBlockRejected(block, c.sanitizedURL, resp)
	// Just drop block on 409 status code like Prometheus does.
	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/873
	// and https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1149
	_ = resp.Body.Close()
	c.packetsDropped.Inc()
	return true
	// - Remote Write v1 specification implicitly expects a `400 Bad Request` when the encoding is not supported.
	// - Remote Write v2 specification explicitly specifies a `415 Unsupported Media Type` for unsupported encodings.
	// - Real-world implementations of v1 use both 400 and 415 status codes.
	// See more in research: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786918054
} else if statusCode == 415 || statusCode == 400 {
	if c.canDowngradeVMProto.Swap(false) {
		logger.Infof("received unsupported media type or bad request from remote storage at %q. Downgrading protocol from VictoriaMetrics to Prometheus remote write for all future requests. "+
			"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
		c.useVMProto.Store(false)
	}
	if encoding.IsZstd(block) {
		logger.Infof("received unsupported media type or bad request from remote storage at %q. Re-packing the block to Prometheus remote write and retrying. "+
			"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
		block = mustRepackBlockFromZstdToSnappy(block)
		c.retriesCount.Inc()
		_ = resp.Body.Close()
		goto again
	}
	// Just drop snappy blocks on 400 or 415 status codes like Prometheus does.
	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/873
	// and https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1149
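The branching above can be summarized as a small decision function. The `action` type and `classifyStatus` are illustrative names for this sketch, not part of the actual client code:

```go
package main

import "fmt"

// action describes how the client reacts to a remote write response:
// 409 drops the block, 415/400 trigger a protocol downgrade (and a retry
// if the block is zstd-compressed, since it can be re-packed to snappy),
// and other statuses are retried.
type action int

const (
	actionRetry action = iota
	actionDrop
	actionDowngradeAndRepack
)

func classifyStatus(statusCode int, isZstd bool) action {
	switch {
	case statusCode == 409:
		return actionDrop
	case statusCode == 415 || statusCode == 400:
		if isZstd {
			return actionDowngradeAndRepack
		}
		// Snappy blocks are dropped like Prometheus does.
		return actionDrop
	default:
		return actionRetry
	}
}

func main() {
	fmt.Println(classifyStatus(415, true) == actionDowngradeAndRepack) // prints "true"
}
```

Separating the decision from the side effects (closing the body, bumping counters, logging) makes the status-code policy easy to test in isolation.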
maxRowsPerBlock = flag.Int("remoteWrite.maxRowsPerBlock", 10000, "The maximum number of samples to send in each block to remote storage. Higher number may improve performance at the cost of the increased memory usage. See also -remoteWrite.maxBlockSize")
vmProtoCompressLevel = flag.Int("remoteWrite.vmProtoCompressLevel", 0, "The compression level for VictoriaMetrics remote write protocol. "+
	"Higher values reduce network traffic at the cost of higher CPU usage. Negative values reduce CPU usage at the cost of increased network traffic. "+
	"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol")
relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabeling configs, which are applied "+
	"to all the metrics before sending them to -remoteWrite.url. See also -remoteWrite.urlRelabelConfig. "+
	"The path can point either to local file or to http url. "+
	"See https://docs.victoriametrics.com/victoriametrics/vmagent/#relabeling")
relabelConfigPaths = flagutil.NewArrayString("remoteWrite.urlRelabelConfig", "Optional path to relabel configs for the corresponding -remoteWrite.url. "+
	"See also -remoteWrite.relabelConfig. The path can point either to local file or to http url. "+
	"See https://docs.victoriametrics.com/victoriametrics/vmagent/#relabeling")
usePromCompatibleNaming = flag.Bool("usePromCompatibleNaming", false, "Whether to replace characters unsupported by Prometheus with underscores "+
	"in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. "+
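The foo.bar{a.b='c'} to foo_bar{a_b='c'} transformation described for -usePromCompatibleNaming can be sketched as follows. `promCompatible` is an illustrative helper, and the exact set of allowed characters here ([a-zA-Z0-9_:]) is an assumption about the Prometheus naming rules, not taken from the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// promCompatible replaces characters unsupported in Prometheus metric
// and label names with underscores, keeping letters, digits, '_' and ':'.
func promCompatible(name string) string {
	return strings.Map(func(r rune) rune {
		switch {
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '_', r == ':':
			return r
		default:
			return '_'
		}
	}, name)
}

func main() {
	fmt.Println(promCompatible("foo.bar")) // prints "foo_bar"
}
```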