Compare commits

359 commits

Author SHA1 Message Date
Aliaksandr Valialkin
58a605317f
use new canonical urls to articles docs: https://docs.victoriametrics.com/victoriametrics/articles/
This avoids a redirect from the old link https://docs.victoriametrics.com/articles/ to https://docs.victoriametrics.com/victoriametrics/articles/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 17:43:14 +02:00
Aliaksandr Valialkin
5e76845b5d
use new canonical urls to sd_configs docs: https://docs.victoriametrics.com/victoriametrics/sd_configs/
This avoids a redirect from the old link https://docs.victoriametrics.com/sd_configs/ to https://docs.victoriametrics.com/victoriametrics/sd_configs/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 17:40:33 +02:00
Aliaksandr Valialkin
f587771c4a
use new canonical urls to troubleshooting docs: https://docs.victoriametrics.com/victoriametrics/troubleshooting/
This avoids a redirect from the old link https://docs.victoriametrics.com/troubleshooting/ to https://docs.victoriametrics.com/victoriametrics/troubleshooting/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 17:32:16 +02:00
Aliaksandr Valialkin
6392c0a0cb
use new canonical urls to metricsql docs: https://docs.victoriametrics.com/victoriametrics/metricsql/
This avoids a redirect from the old link https://docs.victoriametrics.com/metricsql/ to https://docs.victoriametrics.com/victoriametrics/metricsql/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 17:24:39 +02:00
Aliaksandr Valialkin
5188436b14
all: use new canonical urls to vmrestore docs: https://docs.victoriametrics.com/victoriametrics/vmrestore/
This avoids a redirect from the old link https://docs.victoriametrics.com/vmrestore/ to https://docs.victoriametrics.com/victoriametrics/vmrestore/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 16:55:29 +02:00
Aliaksandr Valialkin
acd06e37a8
all: use new canonical urls to vmbackup docs: https://docs.victoriametrics.com/victoriametrics/vmbackup/
This avoids a redirect from the old link https://docs.victoriametrics.com/vmbackup/ to https://docs.victoriametrics.com/victoriametrics/vmbackup/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 16:51:30 +02:00
Aliaksandr Valialkin
e50ccee5ac
all: use new canonical urls to vmbackupmanager docs: https://docs.victoriametrics.com/victoriametrics/vmbackupmanager/
This avoids a redirect from the old link https://docs.victoriametrics.com/vmbackupmanager/ to https://docs.victoriametrics.com/victoriametrics/vmbackupmanager/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 16:47:21 +02:00
Aliaksandr Valialkin
92bf019c3d
all: use new canonical urls to vmctl docs: https://docs.victoriametrics.com/victoriametrics/vmctl/
This avoids a redirect from the old link https://docs.victoriametrics.com/vmctl/ to https://docs.victoriametrics.com/victoriametrics/vmctl/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 16:43:48 +02:00
Aliaksandr Valialkin
4261b28c86
all: use new canonical urls to vmauth docs: https://docs.victoriametrics.com/victoriametrics/vmauth/
This avoids a redirect from the old link https://docs.victoriametrics.com/vmauth/ to https://docs.victoriametrics.com/victoriametrics/vmauth/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 16:40:01 +02:00
Aliaksandr Valialkin
589ee65c83
lib/atomicutil: rename Slice.GetSlice to Slice.All for the sake of better readability 2025-04-30 16:13:57 +02:00
Aliaksandr Valialkin
46ec5e1d11
all: use new canonical urls to vmalert docs: https://docs.victoriametrics.com/victoriametrics/vmalert/
This avoids a redirect from the old link https://docs.victoriametrics.com/vmalert/ to https://docs.victoriametrics.com/victoriametrics/vmalert/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 16:02:45 +02:00
Aliaksandr Valialkin
41356d4e3e
all: use new canonical urls to vmagent docs: https://docs.victoriametrics.com/victoriametrics/vmagent/
This avoids a redirect from the old link https://docs.victoriametrics.com/vmagent/ to https://docs.victoriametrics.com/victoriametrics/vmagent/ ,
and fixes `backwards` navigation for these links across VictoriaMetrics docs.

This is a follow-up for f152021521
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595#issuecomment-2831598274
2025-04-30 13:24:39 +02:00
Aliaksandr Valialkin
7d1294e080
docs/victorialogs/logql-to-logsql.md: typo fixes and small clarifications 2025-04-30 13:02:51 +02:00
Aliaksandr Valialkin
045f33bcc3
docs/victorialogs/data-ingestion/README.md: clarify what is the vl_http_errors_total{path="/insert/jsonline"} counter 2025-04-30 11:51:32 +02:00
Aliaksandr Valialkin
ee81975392
docs/victorialogs/data-ingestion/README.md: document that VictoriaLogs skips invalid JSON lines and continues parsing the remaining lines at /insert/jsonline API
This clarifies the behaviour for /insert/jsonline for invalid JSON lines.
2025-04-30 11:47:21 +02:00
Aliaksandr Valialkin
039411e012
docs/victorialogs: add a guide for converting Loki query to VictoriaLogs query - https://docs.victoriametrics.com/victorialogs/logql-to-logsql/ 2025-04-29 22:50:50 +02:00
Aliaksandr Valialkin
cc274cf079
docs/victorialogs/FAQ.md: add a link to the article explaining why VictoriaLogs is better than Loki under the "What is the difference between Loki and VictoriaLogs?" question 2025-04-29 18:13:18 +02:00
Aliaksandr Valialkin
28a7840ad2
docs/victorialogs/LogsQL.md: clarify that by default LogsQL filters return matching logs with all the fields, not only the _msg field 2025-04-29 18:03:35 +02:00
Aliaksandr Valialkin
a1ade47591
docs/victorialogs/LogsQL.md: update misleading outdated docs about parse() function - it is named extract pipe now 2025-04-29 17:58:26 +02:00
Yury Molodov
492def6d57
vmui/logs: fix sorting issues ()
### Describe Your Changes

1. Fixed log entry sorting in the "Group" view: newest entries are now
displayed at the top again. In v1.18.0, log entries were incorrectly
sorted (oldest first) due to changes related to nanosecond precision
support.
    Related issue: 
2. Added alphabetical sorting of fields by key in the "Group" view for
easier navigation.
    Related issue: 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-29 11:58:53 +02:00
Andrii Chubatiuk
7b675bfc13
app/vmalert: upgraded bootstrap, removed jquery, fixed collapse issues ()
### Describe Your Changes

upgraded bootstrap, removed jquery, fixed UI collapse issues in vmalert
UI

<img width="967" alt="image"
src="https://github.com/user-attachments/assets/adcd0a74-fd33-4ec6-b2ee-9e642bcf8425"
/>
<img width="528" alt="image"
src="https://github.com/user-attachments/assets/ecf4f1f7-ec05-4d01-a423-516076cb3d30"
/>


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-04-29 11:46:18 +02:00
hagen1778
75a970fb5e
docs: tidy up QuickStart doc
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-29 11:41:27 +02:00
hagen1778
c1b9b2bd8b
docs: add link to benchmark results to vmctl migration between VMs
While there, remove version requirements for VM migration as it
is almost 4y old by now.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-29 11:30:37 +02:00
Alexander Marshalov
5d12ba702e
vmcloud: small fixes in access tokens docs ()
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-29 09:19:18 +02:00
Artur Minchukou
d6efa6365d
app/vmui: rename flag retentionFilters to retentionFilter in retention-filter-debug example
### Describe Your Changes

Related issue:  

Renamed `retentionFilters` to `retentionFilter`.

🛑 Before merging this pull request, the API changes in the
enterprise version need to be merged.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-29 10:47:48 +04:00
Artur Minchukou
767c0cc996
app/vmselect: rename parameter retentionFilters to retentionFilter 2025-04-29 10:46:19 +04:00
Amin Cheloh
f0f548231d
docs: fix typos in victorialogs FAQ.md ()
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-28 15:47:49 +02:00
hagen1778
5147846d18
dashboards: update Query Stats
* set Y-min to 0. This helps to see real spikes in resource usage;
* add text panel explaining how to use query hash;
* support filtering by tenant;
* use the `\tvm_slow_query_stats` prefix filter to filter out logs
 that merely mention failed queries with the `vm_slow_query_stats` word;
* add panel to show the minimum queried timestamp - see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8828

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-28 15:46:02 +02:00
f41gh7
dab3ed4056
docs: mention LTS releases at changelog
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-04-28 14:28:08 +03:00
f41gh7
e0254b996a
docs: update vm apps to v1.116.0
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-04-28 14:27:58 +03:00
f41gh7
1d30efa48f
make docs-update-version 2025-04-28 14:25:05 +03:00
f41gh7
bf88327121
CHANGELOG.md: cut v1.116.0 release 2025-04-28 14:20:12 +03:00
hagen1778
a6d68f859f
docs: improve wording of query stats
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-28 13:02:48 +02:00
Artem Fetishev
b911f721b3
docs: Fix the definition of active time series ()
Based on the code, an active time series is one that has received at
least one sample during the last hour. It does not include querying.

The first time the docs were updated with this was in
d06aae9454. But even at that time, the
code shows that querying a metric did not make it active.



- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-04-28 12:11:47 +02:00
hagen1778
3f39bd08cf
docs: tidy up last changelog
* put important changes on top;
* replace `this issue` with the corresponding issue number so it makes more sense;
* consistently format component names.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-28 09:02:06 +02:00
Alexander Marshalov
cddf36af43
vmcloud docs: add information about access tokens ()
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Jose Gómez-Sellés <14234281+jgomezselles@users.noreply.github.com>
2025-04-27 21:40:40 +02:00
hagen1778
0e555b421d
dashboards: update query stats
* display max values instead of cumulative values
to outline anomalies;
* remove unnecessary overrides on column width;
* add corresponding formatting units to displayed values.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-27 17:34:26 +02:00
Aliaksandr Valialkin
ec6f33f526
lib/logstorage: prevent from slow memory leak at datadb.rb
datadb.rb contains logRows shards, which weren't freed up after the data ingestion
for the given per-day datadb is stopped. This leads to slow memory leak when VictoriaLogs runs
for multiple days without restarts. Avoid this memory leak by freeing up the logRows shards
after converting them to in-memory parts. Re-use the freed up logRows shards via a pool in order
to reduce the pressure on GC.
2025-04-26 22:40:32 +02:00
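The fix above frees the per-shard buffers and reuses them via a pool. A minimal sketch of that pattern, assuming a hypothetical `logRows` type (the actual datadb code differs):

```go
package main

import (
	"fmt"
	"sync"
)

// logRows is a hypothetical stand-in for a per-shard buffer of pending log entries.
type logRows struct {
	rows []string
}

// logRowsPool lets freed shards be reused, reducing pressure on GC.
var logRowsPool = sync.Pool{
	New: func() any { return &logRows{} },
}

// getLogRows obtains a (possibly reused) shard from the pool.
func getLogRows() *logRows {
	return logRowsPool.Get().(*logRows)
}

// putLogRows clears the shard and returns it to the pool instead of keeping it
// referenced by the per-day datadb until the process restarts.
func putLogRows(lr *logRows) {
	lr.rows = lr.rows[:0]
	logRowsPool.Put(lr)
}

func main() {
	lr := getLogRows()
	lr.rows = append(lr.rows, "log line 1", "log line 2")
	fmt.Println("buffered rows:", len(lr.rows))
	// After converting the buffered rows into an in-memory part,
	// free the shard so it doesn't live for as long as the per-day datadb.
	putLogRows(lr)
}
```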
Aliaksandr Valialkin
e316ab878c
all: fix broken links to *.md files inside docs package after the commit f152021521
This commit moved all the docs/*.md files to docs/victoriametrics/ folder. This broke direct links to these files
such as https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/vmagent.md .
Add the missing `/victoriametrics/` part here, so the link becomes https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/victoriametrics/vmagent.md .

The move of all the docs/*.md files into docs/victoriametrics/ folder is dubious because of the following reasons:

- It breaks direct links to *.md files, as mentioned above. It is impossible to fix such links all over the Internet,
  so they will remain broken :(
  The best thing we can do is to fix them on the resources we control such as VictoriaMetrics repository.

- It breaks Google indexing of VictoriaMetrics docs. Google index contains old links such as https://docs.victoriametrics.com/vmagent/#features .
  Now such links are automatically redirected to https://docs.victoriametrics.com/victoriametrics/vmagent/#features by a javascript on the https://docs.victoriametrics.com/vmagent/ page.
  Google doesn't like javascript redirects, since they are frequently used for black hat SEO purposes. This leads to pessimization of VictoriaMetrics docs
  in Google search results :( We cannot update the old links all over the Internet in order to avoid the redirect by javascript :(
  The best thing we can do is to add a <link rel="canonical"> tag with the new location of the page at the old url,
  and hope Google won't remove VictoriaMetrics docs from its search results. @AndrewChubatiuk , please do this.

- It breaks backwards navigation. When you click the https://docs.victoriametrics.com/vmagent/ link on some page and then press `back` button
  in the web browser, it won't return you back to the original page because of the intermediate redirect :( The broken navigation cannot be fixed
  for old links located all over the Internet.

- It increases chances of breaking old links left on the Internet in the future, which will lead to 404 Not Found pages
  and angry users :(

The sad thing is that we hit the same wall with harmful redirects again :( In the beginning the VictoriaMetrics docs links had .html suffix,
and they were case-sensitive. For example, http://docs.victoriametrics.com/MetricsQL.html#rate . It has been decided for some unknown reason
that it is a good idea to remove the .html suffix and to make all the links lowercase. So now such links are automatically redirected by javascript
to https://docs.victoriametrics.com/metricsql/#rate and then redirected again by another javascript to https://docs.victoriametrics.com/victoriametrics/metricsql/#rate :(
There are many old links all over the Internet (for example, at Reddit, StackOverflow, some internal Wiki pages, etc.). We cannot fix all of them,
so we need to pray these links won't break in the future.

@hagen1778, @tenmozes, @makasim , please make sure we won't hit the same wall a third time.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8595
2025-04-26 00:52:24 +02:00
Aliaksandr Valialkin
4aa0f93085
docs/victoriametrics/Articles.md: add a link to https://blog.ogenki.io/post/series/observability/alerts/ 2025-04-26 00:26:42 +02:00
Aliaksandr Valialkin
764fc6010d
vendor: run make vendor-update 2025-04-25 23:25:32 +02:00
Aliaksandr Valialkin
3ec70a4b6d
Makefile: run go mod tidy with -compat=1.24 instead of -compat=1.23
VictoriaMetrics source code depends on Go 1.24, so it is better to run `go mod tidy` in Go 1.24 compatibility mode.
2025-04-25 23:16:57 +02:00
Aliaksandr Valialkin
3b7039679f
lib/logstorage: make golangci-lint happy by substituting unused function arg with _ 2025-04-25 23:14:36 +02:00
Aliaksandr Valialkin
f3197dc5d7
deployment: update VictoriaLogs Docker image tags from v1.20.0-victorialogs to v1.21.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.21.0-victorialogs
2025-04-25 19:48:48 +02:00
Aliaksandr Valialkin
9d82de9385
docs/victorialogs: cut v1.21.0-victorialogs 2025-04-25 19:41:56 +02:00
Aliaksandr Valialkin
c34f6db5c0
app/vlselect/vmui: run make vmui-logs-update after 202fc0652c 2025-04-25 19:40:42 +02:00
Aliaksandr Valialkin
8ad81220d3
lib/logstorage: increase scalability of datadb.mustAddRows() on hosts with many CPU cores
Use multiple independent logRows shards for storing the pending log entries before converting them to searchable parts.
Every shard is protected by its own mutex, so multiple CPU cores may add multiple log rows into datadb at the same time.

This increases the performance of BenchmarkStorageMustAddRows/rowsPerInsert-1, which ingests log rows one-by-one
from concurrently running goroutines, by 2x.
2025-04-25 19:35:33 +02:00
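A small illustration of the sharding idea described above, with hypothetical `shard` and `shardedBuffer` types rather than the actual datadb structures: each writer locks only one shard's mutex, so concurrent inserts from many CPU cores don't serialize on a single lock.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// shard holds pending log rows protected by its own mutex.
type shard struct {
	mu   sync.Mutex
	rows []string
}

// shardedBuffer spreads concurrent writers across multiple shards,
// so CPU cores don't contend on a single mutex.
type shardedBuffer struct {
	shards []shard
	n      atomic.Uint64
}

func newShardedBuffer() *shardedBuffer {
	return &shardedBuffer{
		shards: make([]shard, runtime.GOMAXPROCS(0)),
	}
}

// add appends rows to one of the shards chosen in round-robin fashion.
func (b *shardedBuffer) add(rows ...string) {
	idx := b.n.Add(1) % uint64(len(b.shards))
	s := &b.shards[idx]
	s.mu.Lock()
	s.rows = append(s.rows, rows...)
	s.mu.Unlock()
}

func main() {
	b := newShardedBuffer()
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			b.add(fmt.Sprintf("row from goroutine %d", i))
		}(i)
	}
	wg.Wait()
	total := 0
	for i := range b.shards {
		total += len(b.shards[i].rows)
	}
	fmt.Println("total buffered rows:", total)
}
```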
Aliaksandr Valialkin
7455e6c0a5
lib/logstorage: re-use newTestLogRows() for creating LogRows inside BenchmarkStorageMustAddRows 2025-04-25 19:35:32 +02:00
Alexander Marshalov
22c85a149e
vmcloud docs: add integrations section ()
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-25 19:28:47 +02:00
f41gh7
1e4238ede7
make vmui-update 2025-04-25 15:51:11 +03:00
Jose Gómez-Sellés
1d218f6aeb
docs/cloud: add organizations documents ()
Here a description of important terms related to Organizations is
provided. The intention is to leverage a good UI design rather than
explaining all the steps needed to perform certain actions.

However, an effort has been made to explain the meaning of internal
states and terms.

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-25 14:32:21 +02:00
f41gh7
ba4c9133ee
lib/handshake: log client network errors during handshake as warnings
This commit modifies the logging behavior for client network errors
(e.g., EOFs, timeouts) during the handshake process. They are now logged
as warnings instead of errors, as they are not actionable from the
server’s perspective. Here are some examples of such errors.

Timeouts during the initial read phase:

2025-04-09T07:08:59.323Z	error	VictoriaMetrics/lib/vmselectapi/server.go:204	cannot perform vmselect handshake with client "<REDACTED>": cannot read hello: cannot read message with size 11: read tcp4 <REDACTED>-><REDACTED>: i/o timeout; read only 0 bytes

EOFs occurring later in the handshake process:

2025-04-08T18:01:30.783Z	error	VictoriaMetrics/lib/vmselectapi/server.go:204	cannot perform vmselect handshake with client "<REDACTED>": cannot read isCompressed flag: cannot read message with size 1: EOF; read only 0 bytes

By logging these as warnings, we reduce noise in error logs while
preserving valuable information for debugging.
2025-04-25 12:03:03 +03:00
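A rough sketch of the warning-vs-error classification described in the commit above, using a hypothetical `logHandshakeError` helper (the real vmselectapi server code is organized differently): EOFs and network timeouts coming from the client are logged as warnings, everything else stays at error level.

```go
package main

import (
	"errors"
	"io"
	"log"
	"net"
)

// logHandshakeError logs client-side network failures (EOFs, timeouts) as warnings,
// since they aren't actionable from the server's perspective, and everything else as errors.
func logHandshakeError(err error) {
	var netErr net.Error
	switch {
	case errors.Is(err, io.EOF), errors.Is(err, io.ErrUnexpectedEOF):
		log.Printf("warn: cannot perform vmselect handshake with client: %s", err)
	case errors.As(err, &netErr) && netErr.Timeout():
		log.Printf("warn: cannot perform vmselect handshake with client: %s", err)
	default:
		log.Printf("error: cannot perform vmselect handshake with client: %s", err)
	}
}

func main() {
	logHandshakeError(io.EOF)                                   // client closed the connection -> warning
	logHandshakeError(errors.New("cannot read hello: corrupt")) // anything else -> error
}
```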
Nikolay
dcadc65681
lib/cgroup: properly parse cpu limit
Previously, if the `cpu.max` file had only the `max` resource defined without
a `period`, it was parsed incorrectly and the error was silently dropped, even
though this syntax is valid and is actually used by some container runtimes.
If the period is not defined, the default value of 100_000 must be used for it.

 This commit fixes the parsing function by using the default value for the period.
In addition, it adds a zero value check, which fixes a possible panic if
the period has a 0 value.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8808
2025-04-25 11:48:05 +03:00
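A minimal sketch of the parsing behavior described above, assuming a hypothetical `parseCPUMax` helper rather than the actual lib/cgroup function: a missing period falls back to the 100_000 default, and a non-positive period is rejected instead of causing a division-by-zero panic.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const defaultCPUPeriod = 100_000 // cgroup v2 default period in microseconds

// parseCPUMax returns the CPU limit in cores from the contents of the cgroup v2 cpu.max file.
// It returns -1 when no limit is configured ("max").
func parseCPUMax(data string) (float64, error) {
	fields := strings.Fields(strings.TrimSpace(data))
	if len(fields) == 0 {
		return 0, fmt.Errorf("unexpected empty cpu.max contents")
	}
	if fields[0] == "max" {
		return -1, nil // no quota configured
	}
	quota, err := strconv.ParseInt(fields[0], 10, 64)
	if err != nil {
		return 0, fmt.Errorf("cannot parse quota %q: %w", fields[0], err)
	}
	period := int64(defaultCPUPeriod)
	if len(fields) > 1 {
		period, err = strconv.ParseInt(fields[1], 10, 64)
		if err != nil {
			return 0, fmt.Errorf("cannot parse period %q: %w", fields[1], err)
		}
	}
	if period <= 0 {
		// Guard against division by zero when the period is explicitly set to 0.
		return 0, fmt.Errorf("period must be positive; got %d", period)
	}
	return float64(quota) / float64(period), nil
}

func main() {
	for _, s := range []string{"max", "max 100000", "200000 100000", "150000"} {
		limit, err := parseCPUMax(s)
		fmt.Printf("%q -> limit=%v err=%v\n", s, limit, err)
	}
}
```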
Zhu Jiekun
ae9126fae7
changelog: fix update note & sort upcoming changelog
sort the upcoming changelog in alphabetical order.
2025-04-25 11:38:09 +03:00
hagen1778
bf3b98954e
docs: tidy up changelog before release
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-25 10:03:30 +02:00
Andrii Chubatiuk
814a37e57e
docs: fixed docs-debug target ()
### Describe Your Changes

related https://github.com/VictoriaMetrics/vmdocs/issues/129

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-25 09:39:18 +02:00
f41gh7
0afa51a156
fixes test after 1572e1e5c3 2025-04-24 23:32:37 +03:00
Max Kotliar
b89cce6bd6
docs/stream-aggregation: describe dropping unneeded labels in more detail
Follow-up on
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8715 and
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8681#issuecomment-2796127921
2025-04-24 20:26:37 +03:00
Artur Minchukou
fdfa68bc2d
app/vmui/logs: fix zoom behavior on VictoriaLogs chart
Fix zoom behavior on VictoriaLogs chart. The problem was that after
zooming, the chart was recreated and the cursor position on the graph
was lost, so further zooming occurred relative to the zero position. So
to fix this problem:
- removed unnecessary dependencies from useEffect to avoid recreating
the chart;
- moved the chart with the legend to a separate component to optimize
the code when the graph is hidden;
- also this PR contains changes from [this
PR](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8718), which
preserve zoom selection on autorefresh

Related issues:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8557
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8558
2025-04-24 20:25:48 +03:00
Zhu Jiekun
1572e1e5c3
app/vmagent: add consistent hashing for the remote write sharding
This commit adds the following changes:
* use consistent hashing for the remote write sharding.
* properly count the metric of remote write samples drop rate when `shardByURL` is
enabled.

Related issues:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8546
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8702
2025-04-24 20:22:59 +03:00
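The commit above doesn't spell out the exact hashing scheme, so the sketch below shows one common form of consistent hashing (rendezvous, i.e. highest-random-weight hashing) for picking a shard per series; the `pickShard` function and URLs are illustrative, not vmagent's actual implementation.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickShard returns the index of the remote-write URL that should receive the given
// series key, using rendezvous (highest-random-weight) hashing. Removing one URL only
// remaps the series that were assigned to it, instead of reshuffling almost all series
// as a plain hash(key) % N scheme would.
func pickShard(seriesKey string, urls []string) int {
	best, bestScore := 0, uint64(0)
	for i, u := range urls {
		h := fnv.New64a()
		h.Write([]byte(seriesKey))
		h.Write([]byte(u))
		if score := h.Sum64(); score >= bestScore {
			best, bestScore = i, score
		}
	}
	return best
}

func main() {
	urls := []string{"http://vm-1:8480/api/v1/write", "http://vm-2:8480/api/v1/write", "http://vm-3:8480/api/v1/write"}
	for _, key := range []string{`http_requests_total{job="api"}`, `node_cpu_seconds_total{cpu="0"}`} {
		fmt.Printf("%s -> shard %d\n", key, pickShard(key, urls))
	}
}
```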
Aliaksandr Valialkin
55d75b911c
app/vlstorage: add /internal/force_flush endpoint for flushing all the recently ingested logs to searchable parts
This endpoint is needed for integration tests, which need to query the recently ingested logs.
2025-04-24 18:17:30 +02:00
Aliaksandr Valialkin
f9e312f35a
app/vlselect/logsql: allow passing multiple extra_filters and extra_stream_filters query args to all the querying APIs
This should help the proper implementation of extra filters in VictoriaLogs Plugin for Grafana and
in VictoriaLogs web UI. See https://github.com/VictoriaMetrics/victorialogs-datasource/issues/293#issuecomment-2827285583
2025-04-24 17:55:33 +02:00
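A small sketch of what sending several `extra_filters` values from a Go client could look like; the endpoint path and filter syntax here are assumptions for illustration, not a definitive reference, so see the VictoriaLogs querying docs for the exact API.

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Build a query request with two extra_filters values.
	// url.Values encodes repeated keys as multiple query args, which is what
	// "multiple extra_filters and extra_stream_filters query args" refers to.
	params := url.Values{}
	params.Set("query", "error")
	params.Add("extra_filters", `{"app":"nginx"}`)
	params.Add("extra_filters", `{"env":"prod"}`)

	u := url.URL{
		Scheme:   "http",
		Host:     "localhost:9428",
		Path:     "/select/logsql/query",
		RawQuery: params.Encode(),
	}
	fmt.Println(u.String())
}
```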
Aliaksandr Valialkin
0cfe28c2fc
Revert "ci: temporary disable vlogs tests for i386 "
This reverts commit fa6a32a39d.

Reason for revert: the broken tests were fixed on GOARCH=386 by skipping the check for the state size
after importing the state of the stats function, since the state size depends on the hardware architecture.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8710
2025-04-24 17:31:40 +02:00
Yury Molodov
1c39853164
vmui: show read usage in Cardinality Explorer ()
### Describe Your Changes

Two new columns were added to the **Cardinality Explorer** table:

`Requests count`
- Displays how many times a metric was queried since stats collection
began
- Highlighted **red** when the value is `0` or missing

`Last request`
- Shows when the metric was last queried
- Highlighted:
  - **Red** if older than **30 days**
  - **Yellow** if between **7 and 30 days**
- Shows exact timestamp on hover in `YYYY-MM-DD HH:mm:ss` format

Related issue: 


![image](https://github.com/user-attachments/assets/fcf6f33c-e2d0-45c0-9f20-177cf6fa9dde)

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-24 14:07:29 +02:00
Andrii Chubatiuk
7390f99897
app/vmalert: fixed rule group ID in UI ()
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
2025-04-24 13:46:42 +02:00
Hui Wang
2d279750a9
lib/storage/downsampling: allow using -downsampling.period=filter:0s:0s to skip downsampling for time series that match the specified filter
* downsampling: allow using `-downsampling.period=filter:0s:0s` to skip downsampling for time series that match the specified `filter`

* doc minor fix

* lib/storage/downsampling: fix a typo in error message

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-04-24 14:41:37 +04:00
Artem Navoiev
e9c577132d
docs: tag all documentation pages ()
### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-24 02:58:02 -07:00
Yury Molodov
899d70a93e
vmui: fix /flags API requests to respect custom URL prefixes ()
### Describe Your Changes

Fixes incorrect /flags request in vmui when using URL prefixes.
 
 Related issue:  

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-24 11:33:43 +02:00
Yury Molodov
1d13718fcf
vmui/logs: fix oversized Query History button on mobile ()
### Describe Your Changes

Fixes oversized `Query History` button on mobile in logs UI.
Related issue:  

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-24 11:25:17 +02:00
Yury Molodov
fa4708178f
vmui: hide duplicate legend items in Raw Query ()
### Describe Your Changes

Hide identical series in the legend on the Raw Query page when
deduplication is disabled.

Related issue: 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-24 11:20:41 +02:00
Harilou
e0ff58ce5e
app/vmlogs: Replace elasticsearch plugin with opensearch plugin of logstash
### Describe your changes

1. When I used the opensearch plugin in the reference example, I found a
bug that caused memory exhaustion and stopped working. There is a
similar issue in the opensearch community:
https://github.com/opensearch-project/logstash-output-opensearch/issues/186,
@chadmyers raised this issue, but it has not been fixed yet. It has been
verified that the elasticsearch plugin does not have this problem.

![image](https://github.com/user-attachments/assets/cc2cbbfc-6dde-43af-8aca-3c1e371e1890)


2. The vmlogs logstash integration example is now consistent with the
official documentation's example of using the elasticsearch plugin:
https://docs.victoriametrics.com/victorialogs/data-ingestion/logstash/

### Checklist

The following checks are **required**:

- [x] My changes follow the [VictoriaMetrics Contribution
Guide](https://docs.victoriametrics.com/contributing/).
2025-04-24 12:07:56 +04:00
Yury Molodov
a9f03605d3
vmui/logs: handle ANSI escape sequences ()
### Describe Your Changes

1. Added a toggle to enable parsing of ANSI escape sequences (e.g.,
color codes).
    Related issue:  
2. Fixed the display of raw fields when the `_msg` field is missing. 
    Related issue: 
3. Fixed log display in Group view when Markdown parsing is enabled. 
    Related issue: 
4. Moved the Markdown parsing toggle to **Group View Settings**.
5. Configured testing setup.

<details>
<summary>Demo UI:</summary>

<br/>

**Disabled ANSI parsing:**

![image](https://github.com/user-attachments/assets/7f466051-ea6c-4ff7-b386-a3df4bb503c9)

**Enabled ANSI parsing:**

![image](https://github.com/user-attachments/assets/04074b6e-b1c2-4d5e-a8c2-5f58ca88494b)


</details>


### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-23 15:25:07 +02:00
hagen1778
19025d5a19
docs: update metric names tracker
* update wording;
* add more details about returned response;
* cover details on counter increments;

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-23 15:02:57 +02:00
hagen1778
1f8c115428
docs: move metric usage doc closer to similar APIs
Before, this doc section was mistakenly placed under vmui.
Moved it closer to the TSDB stats section, where it makes more sense.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-23 14:37:02 +02:00
hagen1778
1863c5b95b
docs: remove badges from the main page
Badges are more of a decorative element than a source of useful information.
Some badges break from time to time, giving a misleading impression to users.
It is better to remove them to reduce visual load and confusion.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-23 14:35:03 +02:00
Hui Wang
7756046865
vmalert: correctly update the debug param for recording rule when updating the rule group ()

Refactor the `TestUpdateWith` a bit in preparation for
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8658
2025-04-23 14:32:07 +02:00
Zakhar Bessarab
7f2d194948
lib/backup/s3remote: enable HTTP/2 for S3 connections
### Describe Your Changes

HTTP/2 support is used by some S3-compatible storage providers, so
disabling it default leads to unexpected errors when trying to connect
to S3 endpoint.
For example, using MinIO as S3 storage backend: `net/http: HTTP/1.x
transport connection broken: malformed HTTP response`.

HTTP/2 was enabled by default previously, but while fixing an inconsistency,
commit e5f4826 disabled this by default.
cc: @valyala 

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-04-23 09:45:07 +04:00
Zakhar Bessarab
f224b60410
fix: compatibility for FIPS builds
### Describe Your Changes

Fixes which are required in order to build FIPS-compliant binaries.
These changes were originally added for enterprise version and synced to
opensource for consistency and easier maintenance.

- consistently use `hash/fnv` at `app/vmalert` when calculating
checksums. Usage of md5 is not allowed in FIPS mode.
- increase encryption keys size used in testing in order to allow tests
to successfully run in FIPS mode

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-04-23 09:43:07 +04:00
Aliaksandr Valialkin
46d32af89a
lib/logstorage: add sample N for returning a random 1/Nth sample of matching logs 2025-04-22 16:37:07 +02:00
Aliaksandr Valialkin
3a7ea02d41
deployment: update VictoriaLogs Docker image from v1.19.0-victorialogs to v1.20.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.20.0-victorialogs
2025-04-22 14:30:17 +02:00
Aliaksandr Valialkin
84d2922f5e
all: consistently use Grafana dashboard links ending with dashboard ID
The suffix after the ID may change in dashboard settings.
If it changes, the link becomes broken (dubious decision at grafana.com/grafana/dashboards/ ).
That's why it is better to drop all the suffixes and use links for Grafana dashboards ending with IDs.
In this case they are automatically redirected to the url with the proper suffix.

This is a follow-up for 3e4c38c56c

See also the previous commits 9c0863babc and 0a5ffb3bc1
2025-04-22 14:20:15 +02:00
hagen1778
39b27cb397
docs: fix typo
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-22 14:13:01 +02:00
Zakhar Bessarab
b1523f650d
lib/promrelabel/debug: use stricter format for labels ()
### Describe Your Changes

Previously, it was possible to use any UTF-8 string to specify the list of
labels. While this makes it easier to use, it also leads to unexpected
parsing results in some cases (see
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8584 as an
example).

Enforce specifying the metric in the format {label="value"...} in order to avoid
issues with unexpected parsing results.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-22 14:10:25 +02:00
Aliaksandr Valialkin
297bc28e3d
docs/victorialogs/CHANGELOG.md: cut v1.20.0-victorialogs 2025-04-22 13:53:58 +02:00
Aliaksandr Valialkin
54bb834c16
app/vlselect/vmui: run make vmui-logs-update after faf95b4a58 2025-04-22 13:53:40 +02:00
Aliaksandr Valialkin
edee80f215
docs/victorialogs/CHANGELOG.md: move the description of the bugfix for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8647 into the appropriate place
The bugfix for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8647 isn't released yet,
so it must be placed under the `tip` section.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8648
2025-04-22 13:49:19 +02:00
Aliaksandr Valialkin
5491d54c11
lib/logstorage: buffer the ingested log entries before converting them into searchable parts
This reduces the overhead needed for converting the ingested log entries to searchable in-memory parts
when a small number of log entries is passed to Storage.MustAddRows().

The BenchmarkStorageMustAddRows shows up to 10x performance increase for rowsPerInsert=1,
up to 5x performance increase for rowsPerInsert=10 and up to 2x performance increase for rowsPerInsert=100.

This should reduce CPU usage during data ingestion when every request contains a small number of rows.
2025-04-22 13:49:17 +02:00
Aliaksandr Valialkin
14561a7ed3
lib/logstorage: add a benchmark for different number of rows added to the storage via Storage.MustAddRows() 2025-04-22 13:49:15 +02:00
Artem Navoiev
cd4ec0e739
dashboards: update statistic by tenant dashboard add instant churn rate panel ()

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-22 13:46:46 +02:00
hagen1778
8e76b41081
docs: fix broken links after docker-compose updates
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-22 13:39:33 +02:00
f41gh7
9751ea1098
fixes tests after 795d3fe722
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-04-22 11:59:22 +03:00
Roman Khavronenko
787e6a9b24
app/vmalert/remotewrite: rm noisy shutdown logs
On shutdown, the rw client printed two log messages: about shutting down and
how many series it flushed during shutdown.

This commit removes these log messages for the following reasons:
1. These messages were printed for each `client.cc`, which is set to x2
of available CPUs. This behavior generated a lot of useless logs on
shutdown.
2. The number of flushed series on shutdown doesn't matter to the user.

Instead, it now prints only two log messages: when shutdown starts and
when it ends:
```
^C2025-04-22T07:37:11.932Z      info    VictoriaMetrics/app/vmalert/main.go:189 service received signal interrupt
2025-04-22T07:37:11.932Z        info    VictoriaMetrics/app/vmalert/remotewrite/client.go:152   shutting down remote write client: flushing remained series
2025-04-22T07:37:11.933Z        info    VictoriaMetrics/app/vmalert/remotewrite/client.go:155   shutting down remote write client: finished in 210.917µs
```

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-22 11:50:23 +03:00
f41gh7
73a69bd655
app/vmselect/netstorage: properly set max read size for metric name
Previously, the metric names stats API had a false assumption that the max
size of a metric name is 256 bytes. But this is a configurable parameter with
a 4096-byte max size. It triggered errors during API requests.

 This commit replaces the hard-coded 256-byte limit with the common constant
maxLabelValueSize, which has a 16 MB limit.

 In addition, this commit adds a check to the metric name stats tracker:
if the metric name size exceeds the default buffer limit, it is allocated
directly on the heap. This must be a rare case, since most metric names have
a 16-64 byte size.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8759
2025-04-22 11:36:16 +03:00
f41gh7
ec6ed3cb48
lib/storage/metricnamestats: allow regex for match_pattern
This commit allows regex syntax for match_pattern query param.
It improves API usability.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6145
2025-04-22 11:35:23 +03:00
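A sketch of regex-based filtering like the `match_pattern` change describes, with an illustrative `filterMetricNames` helper (not the actual lib/storage/metricnamestats code):

```go
package main

import (
	"fmt"
	"regexp"
)

// filterMetricNames returns the metric names matching the given regex pattern,
// mirroring the idea of accepting regex syntax in the match_pattern query param.
func filterMetricNames(names []string, pattern string) ([]string, error) {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return nil, fmt.Errorf("invalid match_pattern %q: %w", pattern, err)
	}
	var matched []string
	for _, name := range names {
		if re.MatchString(name) {
			matched = append(matched, name)
		}
	}
	return matched, nil
}

func main() {
	names := []string{"http_requests_total", "http_request_duration_seconds", "go_goroutines"}
	matched, err := filterMetricNames(names, `^http_.*_total$`)
	fmt.Println(matched, err)
}
```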
Yury Molodov
faf95b4a58
vmui/logs: update button styles to improve hover performance
Refactors button styles to fix high CPU usage on hover in the logs UI.

 Related issue: #
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8135
2025-04-22 11:28:19 +03:00
Artur Minchukou
a7106040de
app/vmui/logs: add 0 label to the Y axis
Added 0 label to the Y axis on VictoriaLogs chart.


Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8409
2025-04-22 11:26:34 +03:00
Phuong Le
792d0f8bb8
vmagent: reduce log noise from remote write retries
Throttle remote write retry warning logs to one message per 5 seconds.
This reduces log noise during connectivity issues.

Users can monitor `vmagent_remotewrite_retries_count_total` and
`vmagent_remotewrite_errors_total` metrics for detailed retry rates per
destination if needed. This change prioritizes reducing log volume over
immediate destination-specific error logging in the retry warnings.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8723
2025-04-22 11:20:33 +03:00
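A minimal sketch of throttling a repeated warning to at most one message per 5 seconds; the `throttledLogger` type below is illustrative, the real vmagent code uses its own logging package.

```go
package main

import (
	"log"
	"sync"
	"time"
)

// throttledLogger drops messages that arrive less than `period` after the previous one,
// so repeated retry warnings don't flood the log during connectivity issues.
type throttledLogger struct {
	mu     sync.Mutex
	period time.Duration
	last   time.Time
}

func (tl *throttledLogger) Warnf(format string, args ...any) {
	tl.mu.Lock()
	defer tl.mu.Unlock()
	if time.Since(tl.last) < tl.period {
		return // suppressed; rely on vmagent_remotewrite_retries_count_total for exact rates
	}
	tl.last = time.Now()
	log.Printf("warn: "+format, args...)
}

func main() {
	tl := &throttledLogger{period: 5 * time.Second}
	for i := 0; i < 3; i++ {
		tl.Warnf("retrying remote write request, attempt %d", i+1)
	}
	// Only the first message is printed; the next ones within 5s are dropped.
}
```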
Phuong Le
8628f90e6b
lib/storage: put the unused in-memory part back into the pool 2025-04-22 11:19:50 +03:00
Georgy Torquemada
89237a04b9
fix: add missing severity levels (warn) for protobuf parser
Closes
[8647](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8647)

### Describe Your Changes

Added missing OTEL severity levels, added a test for severity, fixed some
severities in the given tests

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-21 16:39:10 +04:00
Aliaksandr Valialkin
d6c156727b
docs: remove misleading requirements for the minimum supported Go version needed for building VictoriaMetrics executables
VictoriaMetrics executables need the latest available stable release of Go (1.24 currently),
since they use its features. See, for example, GOEXPERIMENT=synctest from the commit 06c26315df .

There is no need to specify the minimum supported Go version for building VictoriaMetrics products,
since Go automatically downloads and uses the version specified at `go ...` directive inside go.mod
starting from Go1.21. See https://go.dev/doc/toolchain for details.

So the actual minimum needed Go version is Go1.21, which was released 1.5 years ago. It should automatically
install newer Go versions specified at `go ...` directive inside go.mod.
2025-04-21 13:06:09 +02:00
Aliaksandr Valialkin
f1fdc8610d
lib/promscrape: prevent from excess memory allocation during scrapes when sample_limit is exceeded
Do not reset wc.labels in order to properly keep track of the number of used labels for the scrape,
and properly re-use the same number of wc.labels on subsequent scrapes.

See 12f26668a6 (r155481168)
2025-04-21 13:06:08 +02:00
Artem Navoiev
611450a188
docs: kuma sd actualize link
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-19 20:37:44 +02:00
Artem Navoiev
3b1a41c014
docs: victorialogs fix link to external collectors
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-19 10:53:17 +02:00
hagen1778
cd9bb6b315
docs: rm links to swiftstack
The Swiftstack S3 service and docs seem to be unavailable now.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-19 07:35:27 +02:00
Artem Navoiev
e58d3f03c7
docs: victoriametrics home page use relative anchor
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-18 22:22:44 +02:00
Artem Navoiev
3e4c38c56c
docs: changelog fix link to grafana dashboard since we renamed it. Point to the 2.53 (LTS) release of prometheus in vmalert since the link to 2.49 doesn't exist anymore
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-18 22:21:00 +02:00
Artem Navoiev
c3becae96b
docs: actualize link to timescale compression in faq
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-18 22:12:33 +02:00
Artem Navoiev
84ee713dc6
docs: home victoriametrics fix link to readme.md file
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-18 22:06:01 +02:00
Artem Navoiev
7dc58894d1
docs: changelog remove dead link to mesosphere marathon since the project has been archived since October 2024, see https://github.com/d2iq-archive/marathon
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-18 21:55:47 +02:00
Artem Navoiev
30376722c1
docs: fix wrong link in cluster documentation, point to the new era5 page in data examples
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-18 21:44:11 +02:00
Artem Navoiev
6674691a58
docs: victoriametrics list current urls for redirect to simplify future changes. More urls
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-18 21:10:46 +02:00
Artem Navoiev
55e3ccb5f0
docs: victoriametrics list current urls for redirect to simplify future changes
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-18 20:44:01 +02:00
hagen1778
2c97c8841c
deployment/docker: fix routing for vlogs vmui
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-18 13:55:25 +02:00
Andrii Chubatiuk
f38736343d
deployment/docker: added victorialogs cluster docker compose setup ()
### Describe Your Changes

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8694

additionally removed container_name and the docker network, and renamed all
compose and config files for consistency

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-18 13:47:53 +02:00
Aliaksandr Valialkin
33315f1ece
deployment/docker: update VictoriaLogs from v1.18.0-victorialogs to v1.19.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.19.0-victorialogs
2025-04-17 20:25:47 +02:00
Aliaksandr Valialkin
680cbef0cb
docs/victorialogs/CHANGELOG.md: cut v1.19.0-victorialogs release 2025-04-17 20:15:20 +02:00
Aliaksandr Valialkin
2d3e048f59
docs/victorialogs/CHANGELOG.md: mention @arturminchukov as the author of the recent Web UI changes 2025-04-17 20:13:18 +02:00
Aliaksandr Valialkin
024a40a799
app/{vmselect,vlselect} run make vmui-update vmui-logs-update after 5fa40ee631 2025-04-17 20:08:28 +02:00
Aliaksandr Valialkin
225c6b6f52
docs/victorialogs/CHANGELOG.md: clarify the description of the bugfix in the commit 0fee22e91a
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8743
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8707
2025-04-17 20:05:39 +02:00
Andrii Chubatiuk
0fee22e91a
lib/logstorage: expect message in a field with empty and _msg name ()
### Describe Your Changes

fixes 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2025-04-17 19:55:37 +02:00
Max Kotliar
96200a9ec4
vmagent: use tmp dir in integrations tests ()
Before the change, the vmagent integration tests created their directory
and files inside apptest/tests.
After the change, vmagent is instructed to store all files in a real
temporary directory, which is automatically deleted after the tests
complete.
2025-04-17 18:23:42 +02:00
Artem Fetishev
06c26315df
lib/storage: test wasMetricIDsMissingBefore with "testing/synctest" ()
Using this package makes it possible to manipulate time. In this particular case, it
lets the test advance the time 61 seconds forward instantly.

A few side changes were necessary:

- Do not use fasttime in unit tests. The fasttime package starts a
goroutine outside the test bubble which causes the clock to be real, not
fake.
- Stop the time.Ticker explicitly and also stop idbNext. These two
create goroutines with infinite loops, which causes the unit tests that
use synctest to hang forever. All goroutines created inside the bubble
must exit in order for the synctest to finish.
- synctest is an experimental package and requires an environment
variable to be set. The Makefile was changed to set it.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-04-17 16:58:47 +02:00
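A toy test showing the `testing/synctest` pattern the commit above relies on (not the actual wasMetricIDsMissingBefore test); it assumes Go 1.24 with GOEXPERIMENT=synctest enabled, as mentioned in the commit.

```go
// Package example demonstrates advancing fake time instantly inside a synctest bubble.
package example

import (
	"testing"
	"testing/synctest"
	"time"
)

func TestExpiresAfterOneMinute(t *testing.T) {
	synctest.Run(func() {
		start := time.Now() // fake clock inside the bubble
		deadline := start.Add(61 * time.Second)

		// Sleeping inside the bubble advances the fake clock instantly
		// once all goroutines in the bubble are blocked.
		time.Sleep(61 * time.Second)

		if time.Now().Before(deadline) {
			t.Fatalf("expected at least 61s to pass on the fake clock")
		}
	})
}
```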
Max Kotliar
231810fe49
lib/protoparser/protoparserutil: restore write concurrency limiter in ReadUncompressedData due to performance regressions ()
### Describe Your Changes

The write concurrency limiter in ReadUncompressedData was previously
removed in

22d1b916bf
to avoid suboptimal behavior in certain scenarios. However, follow-up
reports—including issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8674 and
production feedback from VictoriaMetrics Cloud—indicated a noticeable
degradation in performance after its removal.

To mitigate these regressions, this commit reintroduces the concurrency
limiter. A long-term, more optimal solution will be explored separately
in issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8728.

TODO:

* [x] Changelog


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-17 14:05:41 +02:00
hagen1778
60ef715c79
docs: fix newline typo in query-stats.md
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-17 12:25:59 +02:00
hagen1778
8071dabe58
docs: update query-stats.md
* fix typos
* improve wording
* link grafana dashboard

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-17 12:23:15 +02:00
hagen1778
3d9b160fce
docs: make query-stats.md available in docs
The doc was incorrectly ported into the wrong directory after
a113516588.
This change moves it to the victoriametrics dir.

While there, the order of some pages was updated to group them by meaning.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-17 12:15:17 +02:00
Yury Molodov
5fa40ee631
vmui/logs: fix incorrect table sorting for numeric ()
### Describe Your Changes

Fixed table sorting logic and added unit tests for descendingComparator.
Values are now correctly sorted by type: number, date, or string.

Related issue: 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-17 09:40:46 +02:00
Artur Minchukou
53814b1ea6
app/vmui: add query history to VictoriaLogs ()
### Describe Your Changes
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8500

Added query history to VictoriaLogs

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-17 09:34:15 +02:00
Artur Minchukou
9ca2a246a9
app/vmui: fix mobile layout on the VictoriaLogs page, fix width of groups and table settings modal ()
### Describe Your Changes
 - Fix mobile layout on the VictoriaLogs page
<img width="361" alt="image"
src="https://github.com/user-attachments/assets/2e09b999-34d5-4059-ba09-95a21b3e8ab3"
/>
<img width="353" alt="image"
src="https://github.com/user-attachments/assets/b9048d41-5972-4290-988f-e9b0a0472b99"
/>
 
- Fix width of groups settings modal
<img width="372" alt="image"
src="https://github.com/user-attachments/assets/e1cb1902-dc93-4bfd-8b32-eaf0d8e54253"
/>
<img width="352" alt="image"
src="https://github.com/user-attachments/assets/a7c7000f-2c4a-41d9-a3c1-a515fd077b9b"
/>

- Fix width of table settings modal
<img width="361" alt="image"
src="https://github.com/user-attachments/assets/12921936-6824-47e9-aff8-d0bb4de2e7f7"
/>
<img width="352" alt="image"
src="https://github.com/user-attachments/assets/77c857ee-85f4-4992-8113-4e252b40f1a6"
/>

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-17 09:27:10 +02:00
Artur Minchukou
4517425da8
app/vmui: add export logs button to VictoriaLogs ()
### Describe Your Changes
Related issue:
[](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8604)

Added a download logs button to VictoriaLogs, which allows you to export
logs in the following formats: csv, json.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-04-17 09:14:41 +02:00
Yury Molodov
953ed46680
vmui: update package.json
### Describe Your Changes

Bumped project dependencies in `package.json` to the latest stable
versions.
2025-04-16 20:19:17 +02:00
f41gh7
795d3fe722
lib/storage: enhance TSDB status response
This commit adds new fields - `requestsCount` and `lastRequestTimestamp`
to the series count by metric names stats.
It allows displaying additional stats on the explore cardinality page.
Stats will only be added if the `storage.trackMetricNameStats` flag is set.

 This change requires an update to RPC protocol in order to properly
marshal data.

 In addition, this commit adds integration tests to TSDB stats API.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6145
2025-04-16 20:09:35 +02:00
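A sketch of what the extra per-metric-name fields could look like in a response entry; the field names follow the commit message above (`requestsCount`, `lastRequestTimestamp`), but the exact TSDB status schema is an assumption here, not the real API definition.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// metricNamesStatsEntry sketches one entry of the series-count-by-metric-names stats,
// extended with the request-tracking fields mentioned in the commit above.
type metricNamesStatsEntry struct {
	MetricName           string `json:"metricName"`
	SeriesCount          uint64 `json:"seriesCount"`
	RequestsCount        uint64 `json:"requestsCount"`        // populated only when the storage.trackMetricNameStats flag is set
	LastRequestTimestamp int64  `json:"lastRequestTimestamp"` // unix seconds of the last read request
}

func main() {
	entry := metricNamesStatsEntry{
		MetricName:           "http_requests_total",
		SeriesCount:          1234,
		RequestsCount:        42,
		LastRequestTimestamp: time.Now().Unix(),
	}
	data, _ := json.MarshalIndent(entry, "", "  ")
	fmt.Println(string(data))
}
```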
f41gh7
536b40c06c
make linter happy after a113516588 2025-04-16 19:55:54 +02:00
Roman Khavronenko
a113516588
app/vmselect: log search query request stats
This commit adds `search.logSlowQueryStats=<duration>` cmd-line flag on vmselect.
It reads stats from the eval function and doesn't slow down the query execution.

 Log line has the following structure:

 vm_slow_query_stats type=%s query=%q query_hash=%d start_ms=%d end_ms=%d step_ms=%d range_ms=%d tenant=%q execution_duration_ms=%v series_fetched=%d samples_fetched=%d bytes=%d memory_estimated_bytes=%d

 This feature is only available for enterprise version.
2025-04-16 19:39:13 +02:00
Fred Navruzov
b68f9ea9e3
docs/vmanomaly - release 1.22.0 - experimental ()
### Describe Your Changes

Changelog note about the experimental v1.22.0 release that solves a deadlock
issue on multicore systems by turning parallelization off completely until a
proper refactor is made to bring it back w/o reintroducing the risk of
the deadlock in child processes.

P.s. other references and guides were not updated to the experimental tag,
since it has the downside of reduced speed. The links will be updated
once we have parallelization properly turned on.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-14 15:56:10 +03:00
Roman Khavronenko
fa6a32a39d
ci: temporary disable vlogs tests for i386
This change unblocks testing pipelines in CI for other contributions.
The tests are commented out because I don't have a full understanding of
how to fix them.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8710

---------
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-04-14 11:11:43 +02:00
Max Kotliar
477635e00f
Follow up to "vmagent/client: Use VictoriaMetrics remote write protocol by default, downgrade to Prometheus if needed" ()
### Describe Your Changes

Follow-up to
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462

Addressed review comments:
- Log panic with FATAL prefix to indicate possible on-disk data
corruption
- Moved version bump line to the tip block (v1.114.0 has already been
released) in changelog
- Removed duplicate vmagent entry from targets list from Makefile


### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-04-14 11:55:01 +03:00
Max Kotliar
8e92cd3b2d
lib/writeconcurrencylimiter: add some hints to unexpected EOF error message. ()
### Describe Your Changes

Under heavy load, vmagent's write concurrency limiter

(2ab53acce4/lib/writeconcurrencylimiter/concurrencylimiter.go (L111))
queues incoming requests. If a client's timeout is shorter than the wait
time in the
queue, the client may close the connection before vmagent starts
processing it. When vmagent then tries to read the request body, it
encounters an ambiguous `unexpected EOF` error
(https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8675).

This commit adds more context to such errors to help users diagnose and
resolve
the issue when it's related to vmagent's own load and queuing behavior.

Possible user actions include:
- Lowering `-insert.maxQueueDuration` below the client's timeout.
- Increasing the client-side timeout, if applicable.
- Scaling up vmagent (e.g., adding more CPU resources).
- Increasing `-maxConcurrentInserts` if CPU capacity allows.

Steps to reproduce:
https://gist.github.com/makasim/6984e20f57bfd944411f56a7ebe5b6bf

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-13 11:22:12 +03:00
Max Kotliar
2ab53acce4
vmagent/client: Use VictoriaMetrics remote write protocol by default, downgrade to Prometheus if needed ()
### Describe Your Changes

This commit improves how vmagent selects the remote write protocol.
Previously, vmagent [performed a handshake
probe](0ff1a3b154/lib/protoparser/protoparserutil/vmproto_handshake.go (L11))
at
[startup](0ff1a3b154/app/vmagent/remotewrite/client.go (L173)):

- If the probe succeeded, it used the VictoriaMetrics (VM) protocol.

- If the probe failed, it downgraded to the Prometheus protocol.

- No protocol changes occurred after the initial probe at runtime.

However, this approach had limitations:

- If vmstorage was unavailable during vmagent startup, vmagent would
immediately downgrade to the Prometheus protocol, leading to higher
network usage until vmagent was restarted. This case has been reported in
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7615.

- If the remote write server was updated or downgraded (e.g., during a
fallback or migration), vmagent would not detect the protocol change. It
would continue retrying failed requests and eventually drop them. A
restart of vmagent was required to pick up the new protocol.

This commit introduces a more adaptive mechanism.
vmagent always starts with the VM protocol and downgrades to the
Prometheus protocol only if an unsupported media type or bad request
response is received.
When this happens, the protocol is downgraded for all future requests.
In-flight requests are re-packed from Zstd to Snappy and retried
immediately.
Snappy-encoded requests are dropped if an unsupported media type or bad
request is received (no retrying).
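A rough sketch of the downgrade rule described above (illustrative names, not the actual vmagent code): start with the VM protocol and switch permanently to the Prometheus protocol once the server rejects it with an unsupported media type or bad request response.

```go
import "net/http"

// shouldDowngradeToPrometheus reports whether the remote write server rejected
// the VictoriaMetrics protocol, in which case all future requests should be
// sent using the Prometheus protocol instead.
func shouldDowngradeToPrometheus(statusCode int) bool {
	return statusCode == http.StatusUnsupportedMediaType || statusCode == http.StatusBadRequest
}
```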

Additionally, the in-memory and persisted queues could mix snappy and
zstd encoded blocks. The proper encoding is decided before sending by
encoding.IsZstd function.

TODO:
* [x] Add tests
* [x] Update documentation
* [x] Changelog
* [x] Research on
[content-type](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786918054),
[accept-encoding](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786923382)

Fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7615#top
issue.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-12 10:39:47 +03:00
Aliaksandr Valialkin
2f4680d8f3
docs/victoriametrics/Articles.md: add a link to https://chronicles.mad-scientist.club/tales/grepping-logs-remains-terrible/ 2025-04-10 23:10:24 +02:00
Aliaksandr Valialkin
1f1afeb06e
lib/logstorage: add support for <duration_seconds:field> formatting option for format pipe
This option formats duration values as floating-point seconds.
2025-04-10 22:55:08 +02:00
Aliaksandr Valialkin
b2c075191e
docs/victorialogs/cluster.md: add an example on how to query every vmstorage node as a single-node VictoriaLogs 2025-04-10 22:16:31 +02:00
f41gh7
1191c6453b
app/vmagent: properly init kafka consumer
Previously, if vmagent was built with CGO_ENABLED=0, vmagent could not start and reported a runtime error:

 `Kafka client is not supported at systems without CGO`

This error was triggered even if `-kafka.consumer.topic` was not
provided. CGO_ENABLED=0 is the default build option for linux/arm and some other archs.

This commit properly inits the kafka consumer by checking whether `-kafka.consumer.topic` is set.
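A minimal sketch of that check (function names and error text are illustrative, not the actual vmagent code):

```go
import "fmt"

// mustInitKafkaConsumers skips Kafka initialization entirely when no topics
// are configured, so CGO_ENABLED=0 builds can still start as long as
// -kafka.consumer.topic is not set.
func mustInitKafkaConsumers(topics []string) error {
	if len(topics) == 0 {
		return nil // Kafka is not configured: nothing to initialize
	}
	return initCGOKafkaConsumers(topics)
}

// initCGOKafkaConsumers stands in for the CGO-backed consumer setup; on a
// CGO_ENABLED=0 build it would return the "not supported" error.
func initCGOKafkaConsumers(topics []string) error {
	return fmt.Errorf("kafka client is not supported on systems without CGO; topics: %v", topics)
}
```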

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6019
2025-04-10 18:15:36 +02:00
Aliaksandr Valialkin
5d61122fd5
docs/victorialogs/cluster.md: add a link to the changelog for the latest available release 2025-04-10 17:09:23 +02:00
Aliaksandr Valialkin
e9c04879ce
docs/victorialogs/CHANGELOG.md: add release date for v1.18.0-victorialogs 2025-04-10 17:07:58 +02:00
Aliaksandr Valialkin
5f4205a050
deployment: update VictoriaLogs Docker image tag from v1.17.0-victorialogs to v1.18.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.18.0-victorialogs
2025-04-10 17:04:30 +02:00
Aliaksandr Valialkin
7a46af3920
victorialogs: add cluster mode
Cluster mode is enabled when -storageNode command-line flag is passed to VictoriaLogs.
In this mode it spreads the ingested logs among storage nodes specified in the -storageNode flag.
It also queries storage nodes during `select` queries.

Cluster mode allows building multi-level cluster setup when top-level select node can query multiple lower-level clusters
and get global querying view.

See https://docs.victoriametrics.com/victorialogs/cluster/

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5077
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7950
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8223
2025-04-10 16:55:23 +02:00
Aliaksandr Valialkin
ff967a8e65
lib/protoparser: support for identity encoding in a generic way inside protoparserutil.GetUncompressedReader
This should help avoid future issues when the `identity` encoding isn't replaced with the empty `` encoding
by the caller of protoparserutil.GetUncompressedReader().
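The idea can be sketched like this (an illustrative function, not the actual protoparserutil code): `identity` means "no compression" and is handled exactly like the empty encoding.

```go
import (
	"compress/gzip"
	"fmt"
	"io"
)

// getUncompressedReader treats "identity" the same as "" (no compression)
// instead of failing on it as an unknown compression algorithm.
func getUncompressedReader(r io.Reader, encoding string) (io.Reader, error) {
	switch encoding {
	case "", "identity":
		return r, nil
	case "gzip":
		return gzip.NewReader(r)
	default:
		return nil, fmt.Errorf("unsupported Content-Encoding: %q", encoding)
	}
}
```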

This is a follow-up for 303b425fa3

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8652
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8649
2025-04-10 13:52:45 +02:00
Artem Navoiev
3108376d95
docs: changelog fix the link to cluster version in 114 release.2
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-10 09:39:15 +02:00
Artem Navoiev
494fe4403a
docs: changelog fix the link to cluster version in 114 release
Signed-off-by: Artem Navoiev <tenmozes@gmail.com>
2025-04-09 21:28:40 +02:00
Andrii Chubatiuk
303b425fa3
lib/protoparser/datadog*: support Content-Encoding: identity value
The introduction of common decompression logic in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8416 removed the
ability to treat unsupported compression algorithms as uncompressed data
for the datadog v1 endpoint. This PR adds support for the `identity`
Content-Encoding header value, though according to RFC 2616 this value
is only expected in the `Accept-Encoding` header.

related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8649
2025-04-08 16:17:19 +02:00
Nikolay
8f3efde55d
lib/httpserver: mask authKey at PostFrom
'authKey' is a well-known URL and form parameter used for authorization in
VictoriaMetrics components. Previously, it could be printed to stdout
via the httpserver error logger, which made the authKey insecure and hard to
use.

This commit prevents logging the authKey defined in PostForm or as part
of url.Query.
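A minimal sketch of the masking idea (an illustrative helper, not the actual lib/httpserver code): replace the authKey value before the query or form values reach the error log.

```go
import "net/url"

// hideAuthKey masks the authKey value in query args or PostForm values
// before they are written to the error log.
func hideAuthKey(q url.Values) url.Values {
	if q.Has("authKey") {
		q.Set("authKey", "secret")
	}
	return q
}
```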

It's recommended to transfer the authKey via PostForm; this should be
implemented in separate PRs.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5973

---------
Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-04-08 16:15:48 +02:00
Nikolay
f16938bba9
lib/backup/s3: properly set ProfileName
Previously, if ProfileName was set to an empty value (the default), the AWS s3
lib ignored any profile config defined with `-configProfilePath`.

This commit correctly configures client options and sets the profile name only
if it's set to a non-empty value.
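A sketch of the fix, assuming the aws-sdk-go-v2 config loader is used (the function name and wiring are illustrative, not the exact lib/backup/s3 code):

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
)

// loadS3Config passes WithSharedConfigProfile only when the profile name is
// non-empty, so an empty default no longer makes the SDK ignore the profile
// config loaded from the -configProfilePath file.
func loadS3Config(ctx context.Context, configFilePath, profileName string) (aws.Config, error) {
	opts := []func(*config.LoadOptions) error{
		config.WithSharedConfigFiles([]string{configFilePath}),
	}
	if profileName != "" {
		opts = append(opts, config.WithSharedConfigProfile(profileName))
	}
	return config.LoadDefaultConfig(ctx, opts...)
}
```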

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8668
2025-04-08 16:15:07 +02:00
nemobis
65ff04bc09
docs: Fix typo in changelog for v113
Fix a typo `scrapped` for `scraped`.
2025-04-08 16:14:50 +02:00
Zakhar Bessarab
e2715f94af
docs/guides/vmgateway-grafana-oidc: update guide for recent versions of components
- update grafana & keycloak to latest versions
- update UI images with the latest screenshots
- update wording to reflect UI changes
2025-04-08 16:13:58 +02:00
Zakhar Bessarab
582160f566
make: fix make package for vmalert-tool
`make package` relies on the presence of `APP_NAME/deployment/Dockerfile`,
which was missing for vmalert-tool.
2025-04-08 16:13:28 +02:00
nemobis
638f9839d5
docs: fix typo in pull request template
The verb is _adhere to_, see https://en.wiktionary.org/wiki/adhere .
2025-04-08 16:12:51 +02:00
Max Kotliar
3f5bf4bd03
vmagent/remotewrite: set content encoding header based on actual body
Improve remote write handling in vmagent by setting the
`Content-Encoding` header based on the actual request body, rather than
relying on configuration.

- Detects Zstd compression via the Zstd magic number.
- Falls back to Snappy if Zstd is not detected.
- Persistent queue may now contain mixed-encoding content.
- Add basic vmagent integration tests
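The Zstd detection mentioned in the list above boils down to checking the Zstd frame magic number (0x28 B5 2F FD); a minimal sketch with illustrative helper names:

```go
import "bytes"

var zstdMagic = []byte{0x28, 0xB5, 0x2F, 0xFD}

// contentEncodingForBlock picks the Content-Encoding header value from the
// actual block contents instead of from configuration.
func contentEncodingForBlock(block []byte) string {
	if bytes.HasPrefix(block, zstdMagic) {
		return "zstd"
	}
	return "snappy"
}
```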

Follow up on
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5344 and
12cd32fd75.

Extracted from
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5301
2025-04-08 16:12:06 +02:00
f41gh7
038419663b
docs: release follow-up
* mention lts release changes
* update vm apps versions at docs and deployment examples

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-04-07 12:59:53 +02:00
f41gh7
123f373537
CHANGELOG.md: cut v1.115.0 release 2025-04-04 14:30:16 +02:00
f41gh7
57121c828f
make docs-update-version 2025-04-04 14:23:08 +02:00
f41gh7
aa5edbc706
make vmui-update 2025-04-04 14:20:37 +02:00
Andrii Chubatiuk
f9d8c86b0a
lib/streamaggr: fix panic in rate output
This commit properly resets the aggregator state. Previously, it was not checked for `nil`, which led to a panic on access.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8634
2025-04-04 14:14:52 +02:00
hansemschnokeloch
b733fc5b83
docs/vlogs: fix typo in README 2025-04-04 14:10:59 +02:00
Zakhar Bessarab
f2eaad62dc
docs/changelog: correct entry location after 298f862f
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-04-04 12:24:16 +04:00
Aliaksandr Valialkin
adae788b18
lib/logstorage: pad pipeStatsProcessorShard.groupMapShards in order to avoid false sharing when merging these shards in parallel on many CPU cores 2025-04-03 22:21:18 +02:00
Aliaksandr Valialkin
a65d10fcce
lib/logstorage: add padding between hitsMap items at hitsMapAdaptive.shards in order to avoid false sharing when processing the hitsMapAdaptive.shards on multiple CPU cores 2025-04-03 20:14:20 +02:00
Zakhar Bessarab
298f862fc0
deps: downgrade AWS dependencies
Pin AWS libraries to version before 2025-01-15 (see
https://github.com/aws/aws-sdk-go-v2/releases/tag/release-2025-01-15).

This version enabled request and response checksum verification by
default which breaks compatibility with non-AWS S3-compatible storage
providers.

See: https://github.com/victoriaMetrics/victoriaMetrics/issues/8622

Supersedes https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8630

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-04-03 18:05:07 +04:00
Zakhar Bessarab
aff1580a1d
app/vmauth: return non-OK response for timeouts and request cancellation
Currently, requests failing due to network timeout would receive "200
OK" while producing a warning log message about the timeout. This
behaviour is confusing and might produce unexpected issues as it is not
possible to retry errors properly.

Change this to return "502 Bad Gateway" response so that error can be
handled by the client.

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8621

Config for testing:
```
unauthorized_user:
  url_prefix: "http://example.com:9800"
```

Before the change:
```
*   Trying 127.0.0.1:8427...
* Connected to 127.0.0.1 (127.0.0.1) port 8427
* using HTTP/1.x
> HEAD /api/v1/query HTTP/1.1
> Host: 127.0.0.1:8427
> User-Agent: curl/8.12.1
> Accept: */*
>
* Request completely sent off
/* NOTE: 30 seconds timeout passes */
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Vary: Accept-Encoding
Vary: Accept-Encoding
< X-Server-Hostname: pc
X-Server-Hostname: pc
< Date: Tue, 01 Apr 2025 08:54:05 GMT
Date: Tue, 01 Apr 2025 08:54:05 GMT
<

* Connection  to host 127.0.0.1 left intact
```

After:
```
*   Trying 127.0.0.1:8427...
* Connected to 127.0.0.1 (127.0.0.1) port 8427
* using HTTP/1.x
> HEAD /api/v1/query HTTP/1.1
> Host: 127.0.0.1:8427
> User-Agent: curl/8.12.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 502 Bad Gateway
HTTP/1.1 502 Bad Gateway
< Content-Type: text/plain; charset=utf-8
Content-Type: text/plain; charset=utf-8
< Vary: Accept-Encoding
Vary: Accept-Encoding
< X-Content-Type-Options: nosniff
X-Content-Type-Options: nosniff
< X-Server-Hostname: pc
X-Server-Hostname: pc
< Date: Tue, 01 Apr 2025 09:13:57 GMT
Date: Tue, 01 Apr 2025 09:13:57 GMT
< Content-Length: 109
Content-Length: 109
<

* Connection  to host 127.0.0.1 left intact
```

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-04-03 13:44:51 +04:00
hagen1778
d4c0a42c1b
docs: improve wording for recent vmalert changes
follow-up for https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8522

Signed-off-by: hagen1778 <roman@victoriametrics.com>
(cherry picked from commit 3cbc3eb19f)
2025-04-03 09:47:34 +01:00
Emre Yazıcı
a9736a5bfb
app/vmalert: show partial responses in debug logs ()
### Describe Your Changes

Log when the data response from vmselect is partial during
rule (recording, alerting rule) evaluations.

vmselect returns `isPartial: true` when data is not fully fetched
from the scattered vmstorages. At the time of rule evaluation, results may drift
apart from the real values due to missing points. This is an important event
that should be logged so users can see how often it happens, as
it may lead to false positive alerts.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: emreya <emre.yazici@adyen.com>
Signed-off-by: emreya <e.yazici1990@gmail.com>
Signed-off-by: Emre Yazici <e.yazici1990@gmail.com>
(cherry picked from commit 56f60e8be9)
2025-04-03 09:47:34 +01:00
Artem Fetishev
2e4beeefb1
Update series count docs ()
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-03 10:37:35 +02:00
Aliaksandr Valialkin
635c5c9feb
app/vlselect: run /select/logsql/tail queries without concurrency limit
The concurrency limit is intended for short-running queries. If it is applied to tail queries,
then this can affect short-running queries.
2025-04-02 20:22:27 +02:00
Aliaksandr Valialkin
4e1260e189
app/vlselect: do not log canceled requests, since they are expected and legal 2025-04-02 19:14:44 +02:00
Aliaksandr Valialkin
ca3910748f
deployment: update Go builder from Go1.24.1 to Go1.24.2
See https://github.com/golang/go/issues?q=milestone%3AGo1.24.2+label%3ACherryPickApproved
2025-04-02 18:01:21 +02:00
Artem Fetishev
2f0796ff40
lib/storage: When creating and listing snapshots, panic instead of returning an error ()
When creating and listing snapshots, panic instead of returning an error,
since errors are not recoverable anyway.
Also do not clean up the filesystem on panic. Leave it as-is for further
manual inspection.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-04-02 15:47:23 +02:00
Artem Fetishev
cdba6dbc0e
lib/storage: Pass the partition time range during the partition creation and opening ()
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-04-02 14:57:59 +02:00
Aliaksandr Valialkin
f18daaeac5
app/vmui: replace old-style links to https://docs.victoriametrics.com/MetricsQL.html with https://docs.victoriametrics.com/metricsql/
Replace also https://docs.victoriametrics.com/keyConcepts.html with https://docs.victoriametrics.com/keyconcepts/

This is the follow-up for the commit ee1da35071
2025-04-02 13:22:58 +02:00
Artem Fetishev
a9f124388f
lib/storage: mergeBlockStreams(): replace the dependency on Storage with dependency on the set of deleted metricIDs ()
This should narrow down the function dependencies and simplify testing.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-04-02 13:16:26 +02:00
Aliaksandr Valialkin
4b2276608b
docs/victoriametrics/vmagent.md: mention that increasing scrape_interval can reduce CPU usage 2025-04-02 12:41:57 +02:00
Aliaksandr Valialkin
b352470ae1
docs/victoriametrics/vmagent.md: mention that -promscrape.disableKeepAlive option can reduce RAM usage when scraping thousands of targets 2025-04-01 23:22:13 +02:00
Aliaksandr Valialkin
b3261a1b87
lib/promscrape: do not clutter logs with cannot scrape target ...: context canceled errors when vmagent is stopped 2025-04-01 23:20:43 +02:00
Aliaksandr Valialkin
6d5973dcb0
docs/victoriametrics/vmagent.md: change GOGC from 50 to 100 in the example of optimized config for vmagent
This is a follow-up after bf024d3dce,
2025-04-01 21:36:04 +02:00
Aliaksandr Valialkin
74f17bb67e
docs/victoriametrics/vmagent.md: remove the recommendation to set GOGC to 50 at vmagent in order to reduce CPU usage
The default GOGC is set to 50 at vmagent after bf024d3dce,
so this recommendation makes no sense. Leave the recommendation to increase GOGC to 100.
2025-04-01 21:14:33 +02:00
Aliaksandr Valialkin
bf024d3dce
app/vmagent: increase the default GOGC from 30 to 50
This reduces CPU usage by up to 30% in exchange for a 10% increase in RAM usage
when scraping thousands of targets, which expose millions of metrics in total.

This looks like a good tradeoff after the commit edac875179 ,
which reduced RAM usage by more than 10%, so the final RAM usage for vmagent
is still lower than the RAM usage at v1.114.0 by ~15%, while CPU usage drops by 30%.
2025-04-01 21:04:28 +02:00
Aliaksandr Valialkin
5b87aff830
lib/promscrape: use chunkedbuffer.Buffer instead of bytesutil.ByteBuffer for reading response body from scrape targets
This reduces memory usage when reading large response bodies because the underlying buffer
doesn't need to be re-allocated while a large response body is read into it.

Also decompress response body under the processScrapedDataConcurrencyLimitCh .
This reduces CPU usage and RAM usage a bit when scraping thousands of targets.
2025-04-01 20:30:39 +02:00
Aliaksandr Valialkin
34d35869fa
docs/victoriametrics/vmagent.md: add Performance optimizations chapter
Enumerate the most commonly used options for reducing CPU usage and RAM usage
for vmagent, which scrapes thousands of targets.

See https://docs.victoriametrics.com/vmagent/#performance-optimizations
2025-04-01 18:35:43 +02:00
Max Kotliar
b1d1f1f461
vmagent/remotewrite: fix golangci-lint code style issue
### Describe Your Changes

Fixes golangci-lint issues introduced in
98f1e32e39

```
--- a/app/vmagent/remotewrite/pendingseries.go
+++ b/app/vmagent/remotewrite/pendingseries.go
@@ -202,7 +202,7 @@ func (wr *writeRequest) copyTimeSeries(dst, src *prompbmarshal.TimeSeries) {

 	// Pre-allocate memory for labels.
 	labelsLen := len(wr.labels)
-	wr.labels = slicesutil.SetLength(wr.labels, labelsLen + len(labelsSrc))
+	wr.labels = slicesutil.SetLength(wr.labels, labelsLen+len(labelsSrc))
 	labelsDst := wr.labels[labelsLen:]

 	// Pre-allocate memory for byte slice needed for storing label names and values.
@@ -212,7 +212,7 @@ func (wr *writeRequest) copyTimeSeries(dst, src *prompbmarshal.TimeSeries) {
 		neededBufLen += len(label.Name) + len(label.Value)
 	}
 	bufLen := len(wr.buf)
-	wr.buf = slicesutil.SetLength(wr.buf, bufLen + neededBufLen)
+	wr.buf = slicesutil.SetLength(wr.buf, bufLen+neededBufLen)
 	buf := wr.buf[:bufLen]

 	// Copy labels

```

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-04-01 18:37:02 +04:00
Aliaksandr Valialkin
98f1e32e39
app/vmagent/remotewrite: optimize writeRequest.copyTimeSeries a bit
Pre-allocate memory for labels and for the needed byte buffer used
for holding the copied label names and values.
2025-04-01 15:58:04 +02:00
Aliaksandr Valialkin
edac875179
lib/promscrape: always store the last response per every scrape target in compressed form
This reduces memory usage for vmagent when scraping a big number of targets, at the cost of slightly higher CPU usage.

The increased CPU usage can be decreased by disabling tracking of stale markers either via -promscrape.noStaleMarkers
command-line flag or via `no_stale_markers: true` option at the scrape config pointed by -promscrape.config command-line flag.
See https://docs.victoriametrics.com/vmagent/#prometheus-staleness-markers
2025-04-01 15:27:11 +02:00
Aliaksandr Valialkin
0ff1a3b154
lib/leveledbytebufferpool: start with the pools[0] for byte slices up to 256 bytes
The pool is used mostly for obtaining byte buffers for responses from scrape targets.
There are no responses smaller than 256 bytes in practice, so there is no sense in maintaining
pools for byte slices up to 64 and 128 bytes.
2025-04-01 12:01:21 +02:00
Aliaksandr Valialkin
bbe58cc37b
lib/promscrape: make sure that the maxLabelsLen contains really the maximum len(wc.labels) among concurrently running callbacks at stream.Parse
Previously maxLabelsLen could be updated with a smaller value after it was updated to a bigger value by concurrently running goroutines.
Prevent this by loading the latest maxLabelsLen value and updating it only if it is smaller than the current len(wc.labels)
before exiting the callback passed to stream.Parse.
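The fix boils down to an "atomic max" update, sketched below (an illustrative shape, not the exact promscrape code):

```go
import "sync/atomic"

// updateMaxLabelsLen raises maxLabelsLen to n only if n is bigger, so a
// concurrently running callback can never overwrite a bigger value with a
// smaller one.
func updateMaxLabelsLen(maxLabelsLen *atomic.Int64, n int64) {
	for {
		cur := maxLabelsLen.Load()
		if n <= cur || maxLabelsLen.CompareAndSwap(cur, n) {
			return
		}
	}
}
```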

While at it, return early from the callback on the sample_limit exceeding error,
since the rest of the code in the callback becomes no-op after wc.reset().
This simplifies following the logic in the code a bit.

Also remove outdated misleading comment in front of sw.pushData() call inside callbacks passed to stream.Parse.
This comment has no sense after every callback start working with its own goroutine-local wc.
2025-04-01 11:49:35 +02:00
Aliaksandr Valialkin
78dca6ee6e
lib/promscrape: tune leveledWriteRequestCtxPool a bit
Start with writeRequestCtx containing up to 256 labels instead of 8 labels,
since a typical response from scrape target contains much more than 8 labels across all the exposed metrics.

Do not pre-allocate labels at writeRequestCtx, since they are pre-allocated inside writeRequestCtx.addRows(),
together with the pre-allocation of samples and writeRequest.Timeseries.
2025-04-01 02:11:14 +02:00
Aliaksandr Valialkin
12f26668a6
lib/promscrape: make sure that the writeRequestCtxPool is efficiently used when sending automatically generated metrics to remote storage 2025-04-01 01:56:07 +02:00
Aliaksandr Valialkin
6c90843aab
lib/protoparser/prometheus: use clear() instead of for { ... } loops for clearing Rows.Rows and Rows.tagsPool at Rows.Reset()
This simplifies the code a bit.
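For illustration, the Go 1.21 `clear()` builtin zeroes every slice element in one call, so a reset helper can look like this (the element type is illustrative):

```go
type row struct {
	Metric string
	Value  float64
}

// resetRows zeroes the elements (so referenced memory can be collected) and
// truncates the slice for reuse, replacing an explicit for-loop.
func resetRows(rows []row) []row {
	clear(rows)
	return rows[:0]
}
```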
2025-04-01 01:39:19 +02:00
Aliaksandr Valialkin
4aad4c64bb
lib/promscrape: attach applySeriesLimit to writeRequestCtx instead of scrapeWork
The applySeriesLimit applies the limit to samples stored at writeRequestCtx,
while the scrapeWork is used as read-only configuration source.
That's why it is better from maintainability PoV to attach the applySeriesLimit
method to writeRequestCtx.

While at it, clarify docs for the applySeriesLimit function.
2025-04-01 01:11:06 +02:00
Aliaksandr Valialkin
5630c0108e
lib/promscrape: remove writeRequestCtx.resetNoRows() function
This function can be safely replaced with writeRequestCtx.reset() after the commit 188325f0fc,
which makes sure that all the rows inside writeRequestCtx.rows are pushed to the remote storage before returning
from stream parsing callback.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/753
2025-04-01 01:11:06 +02:00
Aliaksandr Valialkin
732a549cff
lib/promscrape: clarify the comment for scrapeWork.pushData() 2025-04-01 01:11:05 +02:00
Aliaksandr Valialkin
af637bc2a2
lib/promscrape: remove the remaining writeRequestCtx.reset() calls before writeRequestCtxPool.Put() calls
These calls aren't needed, since they are performed by the writeRequestCtxPool.Put()
2025-04-01 01:11:05 +02:00
Aliaksandr Valialkin
6600916344
lib/promscrape: pass cfg *ScrapeWork as an arg to areIdenticalSeries instead of attaching it to the ScrapeWork struct
This makes the code more consistent with other functions, which accept `cfg *ScrapeWork` as the first arg.
2025-04-01 01:11:04 +02:00
Aliaksandr Valialkin
e9cb95c5d4
lib/promscrape: replace scrapeWork.addRowToTimeseries with writeRequestCtx.addRows
The rows are added to writeRequestCtx, while the scrapeWork is used only as a read-only configuration source.
So it is better from maintainability PoV to attach addRows function to writeRequestCtx instead of scrapeWork.

Also attach addAutoMetrics to writeRequestCtx instead of scrapeWork due to the same reason:
addAutoMetrics adds metrics to the writeRequestCtx, while using scrapeWork as a read-only configuration source.

While at it, remove tmpRow from scrapeWork struct in order to reduce the complexity of this struct.
2025-04-01 01:11:04 +02:00
Aliaksandr Valialkin
8617faa160
lib/promscrape: remove wc.resetNoRows() call before returning wc to the pool, since this function is called inside writeRequestCtxPool.Put() 2025-04-01 01:11:03 +02:00
Aliaksandr Valialkin
fe888be58c
lib/promscrape: remove at *auth.Token arg from scrapeWork.pushData(), since it always equals to sw.Config.AuthToken
This simplifies the code a bit.

While at it, mention that scrapeWork.PushData callback must be safe for calling from concurrently running goroutines.
2025-04-01 01:11:03 +02:00
Aliaksandr Valialkin
a38de1c242
lib/promscrape: attach areIdenticalSeries method to ScrapeWork instead of scrapeWork
areIdenticalSeries doesn't access scrapeWork members except for sw.Config of *ScrapeWork type.
It is therefore better from a maintainability PoV to attach this method to ScrapeWork.

While at it, replace sw.Config with cfg shortcut at scrapeWork.processDataOneShot()
and scrapeWork.processDataInStreamMode().
2025-04-01 01:11:03 +02:00
Aliaksandr Valialkin
8dc0f6cfab
lib/promscrape: remove unused scrapeWork arg from getSeriesAdded 2025-04-01 01:11:02 +02:00
Phuong Le
346db8a606
vmui: fix auto-suggestion doesn't work inside functions ()
Fixes 

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-31 16:22:25 +02:00
hagen1778
b277a62e94
docs: mention pull request checklist in doc guides
Checklist is a more practical list of actions than a full Contributing doc.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-31 15:48:10 +02:00
hagen1778
e2535fcb28
app/vmui: fix path to metricsql doc
This is follow-up after f152021521

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-31 14:47:38 +02:00
Roman Khavronenko
66e7b908ec
dashboards: drop all dashboards tags except victoriametrics or victorialogs tags for consistency ()
Having `victoriametrics` or `victorialogs` tags should be enough for
filtering dashboards related to VictoriaMetrics components.

Related ticket
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8618

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-31 14:27:05 +02:00
Roman Khavronenko
a2ba37be68
lib/promscrape: support filtering targets via scrapePool GET param in /api/v1/targets API ()
This improves compatibility with Prometheus `/api/v1/targets` API.

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5343

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-31 14:26:25 +02:00
Aliaksandr Valialkin
004e5ff38b
lib/promscrape: hide sw.seriesLimiter behind sw.getSeriesLimiter()
This guarantees that the sw.seriesLimiter is always read after the initialization.
2025-03-29 02:05:20 +01:00
Aliaksandr Valialkin
4160fbecc0
lib/promscrape: pass a string instead of a byte slice to scrapeWork.storeLastScrape
This removes superfluous references to the "body" variable.

While at it, remove obsolete misleading comment.
2025-03-29 01:59:20 +01:00
Aliaksandr Valialkin
44844b7fbe
lib/promscrape: use "time.Time.UnixMilli()" instead of "time.Time.UnixNano() / 1e6"
This improves readability a bit
2025-03-29 01:43:31 +01:00
Aliaksandr Valialkin
f60458d5fa
lib/protoparser/prometheus: add a fast path to AreIdenticalSeriesFast when two identical strings are passed to it
This may be the case when repeated scrapes return the same set of metrics with the same values
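The fast path itself is just a string comparison, sketched here in isolation (illustrative names; the existing per-series comparison is omitted):

```go
// identicalBodiesFastPath reports (identical, ok): when two scrape bodies are
// byte-for-byte equal the series are trivially identical (ok=true), otherwise
// the caller falls back to the existing per-series comparison (ok=false).
func identicalBodiesFastPath(prevData, currData string) (identical, ok bool) {
	if prevData == currData {
		return true, true // fast path: repeated scrapes often return identical bodies
	}
	return false, false
}
```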
2025-03-29 01:43:31 +01:00
Zakhar Bessarab
e242dd5bf2
make vendor-update
Support for the latest prometheus/common is not released yet, so pin to the previous version.
Related commit at prometheus/prometheus: 95f49dd84b

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-28 18:31:49 +04:00
Zakhar Bessarab
f863b331c1
app/vmalert/rule: follow-up for d8fe739aba
Remove tenancy-related part of the commit as it is not relevant to OS vmalert version.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-28 18:29:41 +04:00
Aliaksandr Valialkin
a98779770c
lib/promscrape: run BenchmarkScrapeWorkScrapeInternalStreamBigData on all the available CPU cores
This allows verifying how the benchmark performance scales with the number of available CPU cores
and makes the results of the benchmark consistent with other BenchmarkScrapeWorkScrapeInternal* benchmarks.

Also reduce the amounts of memory allocations inside generateScrape() function in order to reduce
measurement noise during the BenchmarkScrapeWorkScrapeInternalStreamBigData run.

This is a follow-up after c05ffa906d
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8515
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8159
2025-03-28 13:38:13 +01:00
Aliaksandr Valialkin
bcbcae2309
lib/promscrape: improve the performance of getLabelsHash() after c05ffa906d
Before the commit:

BenchmarkScrapeWorkGetLabelsHash-16    	23226468	       249.5 ns/op	   4.01 MB/s	       0 B/op	       0 allocs/op

After the commit:

BenchmarkScrapeWorkGetLabelsHash-16    	39100964	       154.7 ns/op	   6.46 MB/s	       0 B/op	       0 allocs/op

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8515
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8159
2025-03-28 13:38:12 +01:00
Aliaksandr Valialkin
5b8c10d08e
lib/promscrape: run the BenchmarkScrapeWorkGetLabelsHash benchmark in parallel on all the available CPU cores
It is always better to run benchmarks in parallel on all the available CPU cores
in order to see how their performance scales with the number of CPU cores (GOMAXPROCS).

The commit also performs the following modifications:

- Removes the dependency on scrapeWork from the getLabelsHash() function.

- Makes sure that the benchmark cannot be optimized out by the compiler, by introducing a dependency
  on a global Sink variable. Previously the getLabelsHash() function call could be optimized out
  by the compiler, since this call has no side effects, and the returned result is ignored.

- Reduces the amounts of memory allocations inside the BenchmarkScrapeWorkGetLabelsHash
  when preparing the labels for the benchmark. This should reduce measurements' noise during the benchmark.

This is a follow-up for c05ffa906d

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8515
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8159
2025-03-28 13:38:12 +01:00
Aliaksandr Valialkin
588fa4d90d
lib/promscrape: consistently use io.LimitReader across all the VictoriaMetrics repository 2025-03-28 13:38:11 +01:00
Hui Wang
b9c777a578
vmgateway: properly set the Host header when routing requests to `-… ()
* vmgateway: properly set the `Host` header when routing requests to `-write.url` and `-read.url`

* Update docs/victoriametrics/changelog/CHANGELOG.md

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-28 12:27:19 +01:00
Hui Wang
d8fe739aba
vmalert: properly attach tenant labels vm_account_id and `vm_projec… ()
* vmalert: properly attach tenant labels `vm_account_id` and `vm_project_id` to alerting rules when enabling `-clusterMode`

Previously, these labels were lost in alert messages to Alertmanager. Bug was introduced in [v1.112.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.112.0).
2025-03-28 12:26:55 +01:00
Aliaksandr Valialkin
e5ddf475b8
lib/promscrape/scrapework.go: typo fix in the comment: replace 'parsing parsing' with 'parsing' 2025-03-27 15:03:53 +01:00
Aliaksandr Valialkin
3c3c8668d6
lib/bytesutil: grow the buffer at ByteBuffer.ReadFrom more smoothly
Previously the buffer was increased by 30% after it became 50% full.
For example, if more than 5MB of data is read into a 10MB buffer, then its size
was increased to 13MB, leading to 13MB-5MB = 8MB of waste.
This translates to 8MB/5MB = 160% waste in the worst case.

The updated algorithm increases the buffer by 30% after it becomes ~94% full.
This means that if more than 9.4MB of data is read into a 10MB buffer,
then its size is increased to 13MB, leading to 13MB-9.4MB = 3.6MB of waste.
This translates to 3.6MB / 9.4MB = ~38% waste in the worst case.
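The updated rule can be sketched as follows (an assumed helper, not the actual bytesutil code): grow by ~30% only once the buffer is about 15/16 (~94%) full.

```go
// growIfAlmostFull returns buf backed by ~30% more capacity once buf is about
// 94% full; otherwise it returns buf unchanged.
func growIfAlmostFull(buf []byte) []byte {
	if len(buf) < cap(buf)-cap(buf)/16 { // less than ~94% full
		return buf
	}
	grown := make([]byte, len(buf), cap(buf)+cap(buf)*3/10) // +30% capacity
	copy(grown, buf)
	return grown
}
```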

This should reduce memory usage when vmagent reads big responses from scrape targets.

While at it, properly append the data to the buffer if it already holds more than 4KiB of data.
Previously the data over 4KiB in the buffer was lost after the ReadFrom call.

This is a follow-up for f28f496a9d
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6761
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6759
2025-03-27 15:03:53 +01:00
Aliaksandr Valialkin
22d1b916bf
lib/protoparser/protoparserutil: optimize ReadUncompressedData for zstd and snappy
It is faster to read the whole data and then decompress it in one go for zstd and snappy encodings.
This reduces the number of potential read() syscalls and decompress CGO calls needed
for reading and decompressing the data.
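For snappy the idea looks roughly like this (assuming the github.com/golang/snappy block format; the real code also handles zstd and enforces size limits):

```go
import (
	"io"

	"github.com/golang/snappy"
)

// readUncompressedSnappy reads the whole compressed payload first and then
// decompresses it in a single call, minimizing read() syscalls and
// decompression calls.
func readUncompressedSnappy(r io.Reader) ([]byte, error) {
	compressed, err := io.ReadAll(r)
	if err != nil {
		return nil, err
	}
	return snappy.Decode(nil, compressed)
}
```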
2025-03-27 15:03:52 +01:00
Aliaksandr Valialkin
35b31f904d
lib/httputil: automatically initialize data transfer metrics for the created HTTP transports via NewTransport() 2025-03-27 15:03:52 +01:00
Dmytro Kozlov
7c05ec42fe
app/vmctl: fix show logs for prometheus migration mode ()
### Describe Your Changes
Fixed an issue with showing stats at the end of the process. Please
check the images below.
Before the fix

![image](https://github.com/user-attachments/assets/d549c327-ed2b-46c5-965c-4f3581f54d83)


After the fix


![image](https://github.com/user-attachments/assets/c3200aff-dd50-40cf-92a9-b09800a25834)

I fixed it by moving the logic into a function. Now it works correctly.

Added the tests for the Prometheus migration mode (make tests great
again).

The main discussion was introduced in this
[PR](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7863).

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 14:12:10 +01:00
Andrii Chubatiuk
0e142e4e11
docs: update /VictoriaLogs path in hugo to /victorialogs ()
### Describe Your Changes

replace /VictoriaLogs aliases with /victorialogs, as the generated directories
are renamed to victorialogs before deployment anyway

need to merge this PR immediately after
https://github.com/VictoriaMetrics/vmdocs/pull/116

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 11:29:36 +01:00
hagen1778
ef16681dbf
dashboards: rm ETA panel from single and cluster dashboards
The panel was producing wrong predictions, as it is almost impossible
to make precise predictions without running overly expensive queries.

More details on the reasoning why it is better to remove it than to fix it
are here https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8492.

This change also removes ETA panels from alerting rules annotations.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-27 10:36:58 +01:00
Andrii Chubatiuk
45df1e1142
docs: delete old content ()
### Describe Your Changes

remove unused files from docs

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 10:30:43 +01:00
Dan Dascalescu
0a49d8c930
chore: minor grammar fix in error messages ()
### Describe Your Changes

`its'` -> `its`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 10:21:52 +01:00
Max Kotliar
c896664b7a
apptest: clarify comment on extractRE function return behavior ()
### Describe Your Changes

clarify comment on extractRE function return behavior

extracted from
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#discussion_r2003515686

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 10:21:09 +01:00
Denys Holius
c1663f2175
docs/Articles.md: add a link to https://helgeklein.com/blog/victoriametrics-long-term-storage-of-home-assistant-data/ ()
### Describe Your Changes

This PR will add a link to
https://helgeklein.com/blog/victoriametrics-long-term-storage-of-home-assistant-data/

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 10:20:20 +01:00
Max Kotliar
75995fc4db
vmagent: fix stream parse flaky test ()
### Describe Your Changes

It was spotted that the test introduced in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8515#issuecomment-2741063155
was flaky. This PR fixes it.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 10:18:57 +01:00
Yury Molodov
53a1c6162d
vmui: update dependencies to latest versions ()
### Describe Your Changes

Update dependencies to latest versions

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 10:18:32 +01:00
hagen1778
2bce56b348
docs: add cmd-line flag example to vlogs quickstart
Add example of using command-line flags when running vlogs
docker image or binary.
This might be helpful for users - see https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8599

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-27 10:15:39 +01:00
Andrii Chubatiuk
63222a512e
docs: support new docs site version updates ()
### Describe Your Changes

- add smaller search weights for changelog content
- remove replace `<details>` tag with collapse shortcode

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 10:12:13 +01:00
Andrii Chubatiuk
f152021521
docs: move victorialogs, guides, anomaly-detection and victoriametrics-cloud to separate folders ()
related PR https://github.com/VictoriaMetrics/vmdocs/pull/115

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-27 10:05:14 +01:00
hagen1778
35319a414b
docs: update quickstart guide
* add more detailed instructions to docker sections
* fix typos
* re-word for simplicity

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-27 09:58:45 +01:00
Aliaksandr Valialkin
2083144058
vendor: run make vendor-update 2025-03-26 20:39:16 +01:00
Aliaksandr Valialkin
2f86ef95a1
lib/{httputil,promauth}: move functions, which create TLS config and TLS-based HTTP transport, from lib/httputil to lib/promauth
- Move lib/httputil.Transport to lib/promauth.NewTLSTransport. Remove the first arg to this function (URL),
  since it has zero relation to the created transport.

- Move lib/httputil.TLSConfig to lib/promauth.NewTLSConfig. Re-use the existing functionality
  from lib/promauth.Config for creating TLS config. This enables the following features:
  - Ability to load key, cert and CA files from http urls.
  - Ability to change the key, cert and CA files without the need to restart the service.
    It automatically re-loads the new files after they change.
2025-03-26 20:14:17 +01:00
Aliaksandr Valialkin
e5f4826964
lib/httputil: add NewTransport() function for creating pre-initialized net/http.Transport 2025-03-26 18:57:17 +01:00
Aliaksandr Valialkin
cd94593383
app/vmctl: rename app/vmctl/utils to app/vmctl/vmctutil for the sake of consistency naming of *util packages 2025-03-26 18:14:30 +01:00
Aliaksandr Valialkin
5d7f68726d
app/vmalert: rename app/vmalert/utils to app/vmalert/vmalertutil for the sake of consistency of *util package naming 2025-03-26 18:07:45 +01:00
Aliaksandr Valialkin
a4c01c9c23
lib/promscrape: rename lib/promscrape/discoveryutils to lib/promscrape/discoveryutil for the sake of consistency of *util package naming 2025-03-26 18:01:30 +01:00
Aliaksandr Valialkin
e3349be39e
lib: rename lib/influxutils to lib/influxutil for the sake of consistency naming of *util packages 2025-03-26 17:38:25 +01:00
Aliaksandr Valialkin
0e0432db6c
lib: rename lib/promutils to lib/promutil for the sake of consistency for *util package naming 2025-03-26 17:32:06 +01:00
Aliaksandr Valialkin
b43c28ccdf
lib/protoparser: rename lib/protoparser/datadogutils to lib/protoparser/datadogutil for the sake of consistency for *util package naming 2025-03-26 17:13:28 +01:00
Aliaksandr Valialkin
0f3f49f9b9
app/vlinsert: rename app/vlinsert/insertutils to app/vlinsert/insertutil for the sake of consistency for *util package naming 2025-03-26 17:09:01 +01:00
Aliaksandr Valialkin
2880a290fc
app/vmselect: rename app/vmselect/searchutils to app/vmselect/searchutil for the sake of consistency for *util package naming 2025-03-26 17:05:00 +01:00
Aliaksandr Valialkin
e9c4769baf
lib: rename lib/httputils to lib/httputil for the sake of consistency for *util package naming 2025-03-26 16:44:21 +01:00
Aliaksandr Valialkin
e178f3cce5
lib/promauth: follow-up for the commit eefae85450
- Avoid a data race when multiple goroutines access and update roundTripper.trBase inside roundTripper.getTransport().
  The way to go is to make sure the roundTripper.trBase is updated only during roundTripper creation,
  and then can be only read without updating.

- Use the http.DefaultTransport for http2 client connections at Kubernetes service discovery.
  Previously golang.org/x/net/http2.Transport was used there. This had the following issues:

  - An additional dependency on golang.org/x/net/http2.
  - Missing initialization of Transport.DialContext with netutil.Dialer.DialContext for http2 client.
  - Missing initialization of Transport.TLSHandshakeTimeout for http2 client.
  - Introduction of the lib/promauth.Config.NewRoundTripperFromGetter() method, which is hard to use properly.
  - Unnecessary complications of the lib/promauth.roundTripper, which led to the data race described above.

- Avoid a data race when multiple goroutines access and update tls config shared between multiple
  net/http.Transport instances at the TLSClientConfig field. The way to go is to always make a copy of the tls config
  before assigning it to the net/http.Transport.TLSClientConfig field.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5971
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7114
2025-03-26 16:38:12 +01:00
hagen1778
e09c3f7938
docs: update guide for capacity planning
* update wording
* rm extra verbosity
* typo fixes
* add extra info about cluster sizing

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-26 15:55:57 +01:00
Aliaksandr Valialkin
30ed5d15b9
lib/promauth: panic when programming error is detected at Config.GetTLSConfig()
It is much better to panic instead of returning an error on programming error (aka BUG),
since this significantly increases chances that the bug will be noticed, reported and fixed ASAP.

The returned error can be ignored, even if it is logged, while panic is much harder to ignore.

The code must always panic instead of returning errors when any programming error (aka unexpected state) is detected.

This is a follow-up for the commit 9feee15493

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6783
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6771
2025-03-26 15:39:37 +01:00
hagen1778
28c9f617c2
docs: update guide for migrating from influx
* update wording
* rm extra verbosity
* typo fixes

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-26 14:47:04 +01:00
Artem Fetishev
9537594971
lib/{mergeset,storage}: Update MustClose() method comments with the condition under which the method must be called ()
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-25 14:29:04 +01:00
Zakhar Bessarab
099b2fdba7
docs/changelog: restore tip
Restore tip after e9508465.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-24 20:17:21 +04:00
Zakhar Bessarab
9a3b5114db
docs/changelog: document bugfix
The bugfix was introduced in 682e8d8af5 and included as part of the package update in b1fab92d, but lacked a changelog entry.

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-24 19:50:27 +04:00
Zakhar Bessarab
34e1e18bcc
{docs,deployment}: post-release update
- update references to the latest version
- port LTS changelog
- revert changes from 47201ace96 and overwrite by an actual latest enterprise release version instead of latest LTS release

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-24 17:38:05 +04:00
Aliaksandr Valialkin
47201ace96
docs/enterprise.md: refer the real latest enterprise release in the examples
This is the follow-up for 44b0466281
2025-03-22 13:13:06 +01:00
Zakhar Bessarab
e950846534
docs/CHANGELOG.md: cut v1.114.0
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-21 16:42:46 +04:00
Zakhar Bessarab
7195862a51
app/{vmselect,vlselect}: run make vmui-update vmui-logs-update
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-21 16:37:34 +04:00
Zhu Jiekun
ca9bb48f3f
doc: [guide] update k8s vmsingle vmcluster helm and operator guide
This commit updates some wording for the k8s guide here:
- [Kubernetes monitoring via VictoriaMetrics
Single](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-single)
- [Kubernetes monitoring with VictoriaMetrics
Cluster](https://docs.victoriametrics.com/guides/k8s-monitoring-via-vm-cluster)
- [Getting started with VM Operator,
](https://docs.victoriametrics.com/guides/getting-started-with-vm-operator)
2025-03-21 11:07:55 +01:00
Fred Navruzov
8239c119ca
docs/vmanomaly: release v1.21.0 + HC/HA docs ()
### Describe Your Changes

PR updates docs to release v1.21.0, in particular, adjust docs and its
structure to High Availability (HA) and horizontal scalability (HS)
capabilities.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-20 18:21:50 +02:00
Max Kotliar
c05ffa906d
lib/promscrape: improve streamParse performance
Previously, the performance of stream.Parse could be limited by the mutex.Lock around the callback function, which used a shared writeContext. With complicated relabeling rules and any slowness in the pushData function, this could significantly decrease the parsed rows processing performance.

This commit removes the locks and makes parsed rows processing lock-free, in the same manner as the `stream.Parse` processing implemented for push ingestion.

 Implementation details:
- Removing global lock around stream.Parse callback.
- Using atomic operations for counters
- Creating write contexts per callback instead of sharing
- Improving series limit checking with sync.Once
- Optimizing labels hash calculation with buffer pooling
- Adding comprehensive tests for concurrency correctness
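A condensed sketch of the per-callback context idea from the list above (all names are illustrative): each callback takes its own context from a pool instead of locking a shared one.

```go
import "sync"

type writeContext struct {
	labels []string // illustrative per-callback scratch state
}

var wcPool = sync.Pool{New: func() any { return &writeContext{} }}

// processRowsLockFree runs without a global lock: the write context is
// goroutine-local for the duration of the callback.
func processRowsLockFree(process func(wc *writeContext)) {
	wc := wcPool.Get().(*writeContext)
	process(wc)
	wc.labels = wc.labels[:0] // reset before returning the context to the pool
	wcPool.Put(wc)
}
```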

 Benchmark performance:
```
# before
BenchmarkScrapeWorkScrapeInternalStreamBigData-10             13          81973945 ns/op          37.68 MB/s    18947868 B/op        197 allocs/op

# after
goos: darwin
goarch: arm64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape
cpu: Apple M1 Pro
BenchmarkScrapeWorkScrapeInternalStreamBigData-10             74          15761331 ns/op         195.98 MB/s    15487399 B/op        148 allocs/op
PASS
ok      github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape       1.806s
```

Related issue:
 https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8159
---------
Signed-off-by: Maksim Kotlyar <kotlyar.maksim@gmail.com>
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-03-20 16:49:24 +01:00
Zakhar Bessarab
30625e83b6
app/vmbackupmanager: properly set vm_backup_last_run_failed metric
Previously, `getBackupsList` was appending the `latest` backup in all cases without checking whether it actually exists.
This led to the `vm_backup_last_run_failed` metric being set to `1`, since the folder did not contain a successful completion marker.

This commit adds a check to handle the case when the remote storage does not contain any backups yet.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8490

---
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-20 15:50:13 +01:00
Roman Khavronenko
4c9c2501f3
docs: fix a few typos 2025-03-20 15:37:16 +01:00
Zakhar Bessarab
3447cb0bd3
lib/backup/s3remote: add retries for "IncompleteBody" errors
These errors could be caused by intermittent network issues, especially
when using proxies to access S3 storage. Previously, such an
error would abort the backup/restore process and require manual intervention
to ensure backup consistency.

This commit adds automatic retries to handle this, improving backup
reliability and resilience to network issues.
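A simple sketch of such a retry loop (illustrative only; the actual retry count and backoff in lib/backup/s3remote may differ):

```go
import (
	"strings"
	"time"
)

// retryOnIncompleteBody retries the given operation a few times when the
// returned error looks like the transient S3 "IncompleteBody" error.
func retryOnIncompleteBody(op func() error) error {
	var err error
	for attempt := 1; attempt <= 3; attempt++ {
		if err = op(); err == nil || !strings.Contains(err.Error(), "IncompleteBody") {
			return err
		}
		time.Sleep(time.Duration(attempt) * time.Second) // simple linear backoff
	}
	return err
}
```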
2025-03-20 12:43:18 +01:00
f41gh7
985f7faad8
app/vmgateway: remove vmalert dependency
1. remove "vmalert" word from vmgateway doc and exposed metrics;
2. remove unrelated flags like -datasource.roundDigits, remoteRead.disablePathAppend, -datasource.disableStepParam
2025-03-20 12:39:24 +01:00
Zakhar Bessarab
3155542168
app/vmbackupmanager: properly close run channel when stopping
vmbackupmanager uses the `runC` channel for inter-goroutine communication between the `scheduler` and `execute` goroutines.

Previously, `runC` wasn't closed during graceful shutdown, so the vmbackupmanager process couldn't stop gracefully and could only be killed within the configured timeout.

This commit makes the `scheduler` properly close `runC` when stopping vmbackupmanager in order to avoid the shutdown delay.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8554
2025-03-20 12:39:01 +01:00
f41gh7
756c93c66a
app/vmbackupmanager: prevent backups being scheduled one after another
Previously, a backup was first scheduled at 00:00:00 and `getSleepDuration` was immediately executed to get the sleep duration for the next backup. Since it returned `1 * time.Second`, the next backup was attempted and failed to be scheduled.

The logic is updated to wait for the full backup interval in this case, so that there is no attempt to schedule an unneeded backup.

 It also adds the following changes:
* fix error log entry reference to type of policy
* add a message about retention completion similar to existing message for backups to make it more consistent

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8499

This is a follow-up for fb6d2e92e3a1cf412d1f7dee64a4852941a8aa1b
2025-03-20 12:38:50 +01:00
Andrii Chubatiuk
511517f491
lib/streamaggr: fix threshold update, when deduplication and windows are enabled ()
### Describe Your Changes

During the initial flush with deduplication and windows enabled, the lower
timestamps threshold is set to the upper bound of the next deduplication
interval, which leads to ignoring all samples on subsequent intervals.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-20 09:54:35 +01:00
Yury Molodov
db66ab1852
vmui: show hidden common labels in group name ()
### Describe Your Changes

Changes:
1. When `hide common labels` is enabled, they will now be displayed in
the group name.
2. Legend settings toggles have been moved below the graph for better
accessibility.


![image](https://github.com/user-attachments/assets/fc8c7f4c-c155-4056-8862-301ad375d7ae)

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-20 09:41:15 +01:00
Yury Molodov
31f662a0f7
vmui/logs: implement nanosecond log sorting ()
### Describe Your Changes

This PR adds nanosecond precision to log sorting, ensuring accurate
ordering of entries with sub-millisecond differences.

Related issue:  

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-20 09:30:57 +01:00
hagen1778
7f69553230
docs: fix typos in release versions
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-20 09:23:34 +01:00
hagen1778
90d547dec0
docs: mark LTS releases explicitly in changelog
The mark is important for readers to understand the type of release.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-20 09:12:31 +01:00
Hui Wang
4587d092fd
vmgateway: fix vmgateway_ratelimit_refresh_duration_seconds
* vmgateway: fix `vmgateway_ratelimit_refresh_duration_seconds`

* revert reload ep change

---------

Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-20 09:41:42 +04:00
Hui Wang
bd11e00a59
app/vmalert: properly register group and rules metrics
Commit 9ca74d1fff introduced an issue with metrics registration. Because the metrics.Summary type was always registered in the global state of the metrics package, vmalert had increased memory and CPU usage after multiple configuration reloads.

This commit addresses the issue and properly registers the metrics.Summary metric. Now metrics for groups and rules must be explicitly registered with the group.Init method before group.Start. This simplifies metrics usage and ensures that all needed metrics are registered and the group is ready to start.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8532
2025-03-19 13:57:34 +01:00
Aliaksandr Valialkin
348991d1b3
app/vlselect: move the code responsible for limiting the number of concurrently executed requests, into separate functions
This improves code readability a bit.
2025-03-19 13:28:03 +01:00
Aliaksandr Valialkin
c35dfd4585
lib/chunkedbuffer: add Buffer.Len() method, which returns the byte length of the data stored in the buffer 2025-03-19 13:24:39 +01:00
Aliaksandr Valialkin
baf701889a
lib/logstorage: typo fix in the comment to Storage.GetStreamFieldValues() function 2025-03-19 13:22:16 +01:00
Hui Wang
b97cacad45
app/vmalert: fix possible data race on group checksum
1. fix a possible data race on the group checksum when reload is called
concurrently. Before, it didn't affect much but might update the group
one more time.
2. remove the unnecessary g.mu.RLock() and compute group.id at newGroup creation. Changes to group.ID()
indicate that the type and interval have changed, and the group is new.

Related PR:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8540
2025-03-19 12:58:51 +01:00
Aliaksandr Valialkin
acdd6faecf
docs/VictoriaLogs/querying/README.md: fix the docs for /select/logsql/stream_field_values when the limit arg is set
It returns up to `limit` values for the given log stream field with the biggest number of hits.
2025-03-19 12:41:27 +01:00
Aliaksandr Valialkin
ea4534d154
lib/logstorage: support for {field in (*)} and {field not_in (*)} syntax in LogsQL
This is needed for https://github.com/VictoriaMetrics/victorialogs-datasource/issues/238
to be consistent with `in(*)` feature, which has been added in the commit 84d5771b41
2025-03-19 12:09:55 +01:00
Hui Wang
dec237b7d6
app/vmalert: fix memory leak with -notifier.blackhole
The previous commit 9ca74d1fff added a regression for the notifier metrics exposed by vmalert. vmalert returned new notifier instances for the blackhole notifier type and registered new metrics each time the get-notifiers function was called. This registered duplicate metrics and led to an OOM crash.

 This commit properly initializes blackhole notifier instances and adds metrics for them only once, during application start.

 Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8532
2025-03-19 10:43:57 +01:00
Nikolay
27d7d0b25c
lib/promscrape: properly send staleness markers
Previously, vmagent could incorrectly store a partial scrape response
in case of a scraping error. This could happen if the `sw.ReadData` call fetched
some chunked response and stored it in the buffer, and a context deadline
exceeded error happened later.
 As a result, at the next scrape iteration this partial response could
 be forwarded to the `sw.sendStaleSeries(lastScrape...)` function call
 and lead to a `Prometheus line` parsing error.

 This commit properly sets the response body to an empty value in case of
a scraping error. This prevents storing a partial scrape response body, so
partial staleness markers are no longer sent to the remote storage.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8528
2025-03-19 10:37:54 +01:00
Aliaksandr Valialkin
f645479b5e
lib/protoparser: rename lib/protoparser/common to lib/protoparser/protoparserutil
This improves the readability of the code that uses this package.
2025-03-18 16:24:51 +01:00
Aliaksandr Valialkin
1aa14f3b6d
app/vlinsert: do not start background flusher in the LogMessageProcessor used for synchronous processing of a single data block
This should reduce CPU usage a bit for data ingestion protocols
that process a single message per request without streaming.
2025-03-18 11:17:08 +01:00
Aliaksandr Valialkin
9f5ff82532
lib/protoparser/common: limit the maximum memory, which could be occupied by snappy-compressed message at ReadUncompressedData 2025-03-18 11:17:08 +01:00
Roman Khavronenko
3511e2e6af
dashboards: add Memory allocations rate to ResourceUsage tab ()
This panel should have helped us identify
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8501 during
release checks. While we already track GC pressure, its value is
relative and the change wasn't noticeable for the workloads we
observed.

The absolute values of the allocations rate could have helped to spot the
anomaly.

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-17 16:42:53 +01:00
Alexander Frolov
127d4f37b8
lib/promrelabel: comment typo ()
### Describe Your Changes

`prasedRelabelConfig` -> `parsedRelabelConfig`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-17 16:40:03 +01:00
Yury Molodov
44a54e4590
vmui/logs: fix endless group expansion loop bug ()
### Describe Your Changes

Fix endless group expansion loop.
Related issue: 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-17 16:39:41 +01:00
hagen1778
d4560ee015
docs: add update note for renamed metric vm_mmapped_files
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-17 16:34:36 +01:00
Guillem Jover
76d205feae
spelling and grammar fixes via codespell ()
### Describe Your Changes

Fix many spelling errors and some grammar, including misspellings in
filenames. 

The change also fixes a typo in a metric name, renaming `vm_mmaped_files` to `vm_mmapped_files`.
While this is a breaking change, this metric isn't used in alerts or dashboards,
so it should have low impact on users.

The change also deprecates `cspell` as it is much heavier and less usable. 
---------

Co-authored-by: Andrii Chubatiuk <achubatiuk@victoriametrics.com>
Co-authored-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
2025-03-17 16:32:10 +01:00
Jose Gómez-Sellés
d852e5e0b4
docs/cloud: add FAQ for VM Cloud ()
### Describe Your Changes

This PR adds the dedicated FAQ for VictoriaMetrics Cloud

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-17 15:54:52 +01:00
Aliaksandr Valialkin
336f954056
lib/logstorage: switch the type of LogRows.streamTagCanonicals from [][]byte to []string
This reduces the size of LogRows.streamTagCanonicals by 1/3 because of the eliminated `cap` field
in the slice header (reflect.SliceHeader) compared to the string header (reflect.StringHeader).
2025-03-17 15:02:51 +01:00
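For illustration, a minimal Go snippet (an assumed example, not repository code) showing the header-size difference that makes `[]string` smaller per element than `[][]byte`:

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	// On 64-bit platforms a slice header is pointer+len+cap (24 bytes),
	// while a string header is pointer+len (16 bytes), i.e. 1/3 smaller.
	fmt.Println(unsafe.Sizeof([]byte(nil))) // 24
	fmt.Println(unsafe.Sizeof(""))          // 16
}
```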
Aliaksandr Valialkin
6837e0c7d3
lib/prompb: use clear() function instead of loops for clearing WriteRequest fields inside WriteRequest.Reset
This makes the code shorter without losing clarity.
2025-03-17 14:31:05 +01:00
Fred Navruzov
ee3ed8ab86
docs/vmanomaly: update to patch release v1.20.1 ()
### Describe Your Changes

Doc updates to a patch release v1.20.1, fixing a bug in
`PeriodicScheduler` that may affect some of the customers' deployments

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-16 21:34:19 +02:00
Aliaksandr Valialkin
c005cb89fc
deployment: update VictoriaLogs Docker image tag from v1.16.0-victorialogs to v1.17.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.17.0-victorialogs
2025-03-16 01:21:59 +01:00
Aliaksandr Valialkin
771233ebcd
docs/VictoriaLogs/CHANGELOG.md: cut v1.17.0-victorialogs 2025-03-16 01:15:54 +01:00
Aliaksandr Valialkin
39082103a6
app/vlinsert/opentelemetry: follow-up for a884949aba
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8502
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8511
2025-03-16 01:09:07 +01:00
Devops
a884949aba
fix:Fixed an issue where and were incorrectly displayed ()
### Describe Your Changes

Fixed an issue where and were incorrectly displayed when sent from
OpenTelemetry Collector to Victoria Logs

Fixes 
2025-03-16 00:33:52 +01:00
Aliaksandr Valialkin
d2f4698e3f
docs/VictoriaLogs/querying/README.md: mention that /select/logsql/query endpoint may return an arbitrary number of logs matching the given query filter, and this is OK
This is needed for https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8507
and https://github.com/VictoriaMetrics/victorialogs-datasource/issues/261
2025-03-16 00:01:52 +01:00
Aliaksandr Valialkin
23c4e4cdb2
app/vlinsert: send 204 No Content response code at /insert/loki/api/v1/push endpoint
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8505
2025-03-15 23:34:49 +01:00
Aliaksandr Valialkin
13ff9a8ebd
lib/{mergeset,storage,logstorage}: use chunked buffer instead of bytesutil.ByteBuffer as a storage for in-memory parts
This commit adds lib/chunkedbuffer.Buffer - an in-memory chunked buffer
optimized for random access via MustReadAt() function.
It is better than bytesutil.ByteBuffer for storing large volumes of data,
since it stores the data in chunks of a fixed size (4KiB at the moment)
instead of using a contiguous memory region. This has the following benefits over bytesutil.ByteBuffer:

- reduced memory fragmentation
- reduced memory re-allocations when new data is written to the buffer
- reduced memory usage, since the allocated chunks can be re-used
  by other Buffer instances after Buffer.Reset() call

Performance tests show up to 2x memory reduction for VictoriaLogs
when ingesting logs with a big number of fields (aka wide events) at a high rate.
2025-03-15 20:58:33 +01:00
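As an illustration only, a rough sketch of the chunked-buffer idea described above; the real lib/chunkedbuffer.Buffer implementation differs:

```go
package chunkedsketch

// chunkSize mirrors the fixed 4KiB chunk size mentioned above.
const chunkSize = 4 * 1024

// Buffer stores data in fixed-size chunks instead of one contiguous slice,
// which reduces re-allocations and memory fragmentation for large payloads.
type Buffer struct {
	chunks [][]byte
	size   int
}

// Write appends p to the buffer, allocating 4KiB chunks as needed.
func (b *Buffer) Write(p []byte) (int, error) {
	n := len(p)
	for len(p) > 0 {
		if b.size%chunkSize == 0 {
			b.chunks = append(b.chunks, make([]byte, 0, chunkSize))
		}
		last := len(b.chunks) - 1
		free := chunkSize - len(b.chunks[last])
		m := min(free, len(p))
		b.chunks[last] = append(b.chunks[last], p[:m]...)
		b.size += m
		p = p[m:]
	}
	return n, nil
}

// MustReadAt copies len(p) bytes starting at offset off into p,
// allowing random access without a contiguous backing array.
func (b *Buffer) MustReadAt(p []byte, off int) {
	for len(p) > 0 {
		chunk := b.chunks[off/chunkSize]
		m := copy(p, chunk[off%chunkSize:])
		p = p[m:]
		off += m
	}
}
```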
Aliaksandr Valialkin
73aae546e0
lib/logstorage: pre-allocate buffers for fields and rows inside block.appendRowsTo()
This reduces the number of memory re-allocations inside the loop, which copies the rows.
2025-03-15 17:18:45 +01:00
Aliaksandr Valialkin
174a6db19f
lib/logstorage: pre-allocated buffers for fields and rows inside rows.appendRows()
This should reduce the number of memory re-allocations inside the loop, which copies the rows.
2025-03-15 16:39:19 +01:00
Aliaksandr Valialkin
0e413a7efb
lib/logstorage: pre-allocate the buffer needed for marshaling a block of strings inside marshalStringsBlock
This reduces the number of memory re-allocations when appending the strings to the buffer in the loop.
2025-03-15 15:56:33 +01:00
Aliaksandr Valialkin
9769ad3a24
lib/logstorage: optimize copying dict values inside valuesDict.copyFrom a bit
Pre-allocate the needed slice of strings and then assign items to it by index
instead of appending them. This reduces the number of memory allocations
and improves performance a bit.
2025-03-15 15:32:21 +01:00
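A tiny assumed example of the pre-allocate-and-assign-by-index pattern this commit applies (not the actual valuesDict code):

```go
package preallocsketch

// copyAppend grows the destination slice repeatedly via append,
// which may re-allocate the backing array several times.
func copyAppend(src []string) []string {
	var dst []string
	for _, s := range src {
		dst = append(dst, s)
	}
	return dst
}

// copyPrealloc allocates the exact size once and assigns by index,
// performing a single allocation.
func copyPrealloc(src []string) []string {
	dst := make([]string, len(src))
	for i, s := range src {
		dst[i] = s
	}
	return dst
}
```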
Aliaksandr Valialkin
8e773564b1
lib/logstorage: intern column names instead of cloning them during data ingestion
This reduces the number of memory allocations when ingesting logs with a big number of fields (aka wide events).
2025-03-15 15:29:54 +01:00
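A minimal sketch of string interning, under the assumption that identical column names repeat across ingested rows; the actual lib/logstorage implementation may differ:

```go
package internsketch

import (
	"strings"
	"sync"
)

// internedStrings maps a column name to its canonical copy.
var internedStrings sync.Map

// intern returns a canonical copy of s, so identical column names share
// a single allocation instead of being cloned for every ingested row.
func intern(s string) string {
	if v, ok := internedStrings.Load(s); ok {
		return v.(string)
	}
	c := strings.Clone(s) // detach from the caller's reusable buffer
	internedStrings.Store(c, c)
	return c
}
```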
Aliaksandr Valialkin
fbcfb6d72e
lib/protoparser/common: properly decode snappy-encoded requests
Snappy-encoded requests are encoded in block mode instead of stream mode.
Stream mode is incompatible with block mode. See https://pkg.go.dev/github.com/golang/snappy
That's why Snappy-encoded requests must be read in block mode.

Also add a protection against passing invalid readers to PutUncompressedReader().

This is a follow-up for 0451a1c9e0

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8416
2025-03-15 14:44:10 +01:00
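For reference, a small assumed example contrasting snappy block mode with stream mode via github.com/golang/snappy, matching the distinction described above:

```go
package snappysketch

import (
	"bytes"
	"io"

	"github.com/golang/snappy"
)

// decodeBlock decompresses a payload encoded as a single snappy block,
// which is how the snappy-encoded requests described above are encoded.
func decodeBlock(body []byte) ([]byte, error) {
	return snappy.Decode(nil, body)
}

// decodeStream decompresses the framed (stream) snappy format.
// Feeding a block-mode payload to this reader fails, since the two
// formats are incompatible.
func decodeStream(body []byte) ([]byte, error) {
	return io.ReadAll(snappy.NewReader(bytes.NewReader(body)))
}
```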
Roman Khavronenko
3d9f2e3937
lib/bytesutil: don't drop ByteBuffer.B when its capacity is bigger than 64KB at Reset ()

This commit reverts
b58e2ab214
as it has negative impacts when ByteBuffer is used for workloads that
always exceed 64KiB size. This significantly slows down affected
components because:
* buffers aren't being reused;
* growing new buffers to >64KiB is very slow.

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8501

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-15 01:38:36 +01:00
Aliaksandr Valialkin
2c7dd2b991
lib/logstorage: support for {label in (v1,...,vN)} and {label not_in (v1, ..., vN)} syntax 2025-03-15 01:35:13 +01:00
Aliaksandr Valialkin
0451a1c9e0
app/vlinsert: follow-up for 37ed1842ab
- Properly decode protobuf-encoded Loki request if it has no Content-Encoding header.
  Protobuf Loki message is snappy-encoded by default, so snappy decoding must be used
  when Content-Encoding header is missing.

- Return back the previous signatures of parseJSONRequest and parseProtobufRequest functions.
  This eliminates the churn in tests for these functions. This also fixes broken
  benchmarks BenchmarkParseJSONRequest and BenchmarkParseProtobufRequest, which consume
  the whole request body on the first iteration and do nothing on subsequent iterations.

- Put the CHANGELOG entries into correct places, since they were incorrectly put into already released
  versions of VictoriaMetrics and VictoriaLogs.

- Add support for reading zstd-compressed data ingestion requests into the remaining protocols
  at VictoriaLogs and VictoriaMetrics.

- Remove the `encoding` arg from PutUncompressedReader() - it has enough information about
  the passed reader arg in order to properly deal with it.

- Add ReadUncompressedData to lib/protoparser/common for reading uncompressed data from the reader until EOF.
  This allows removing repeated code across request-based protocol parsers without streaming mode.

- Consistently limit data ingestion request sizes, which can be read by ReadUncompressedData function.
  Previously this wasn't the case for all the supported protocols.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8416
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8380
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8300
2025-03-15 00:03:03 +01:00
Aliaksandr Valialkin
c60b4175bb
app/vlinsert: add an ability to ignore log fields starting with the given prefixes
The `ignore_fields` HTTP query arg can contain prefixes ending with '*'.
For example, `ignore_fields=foo.*,bar` skips all the fields starting with `foo.`
during data ingestion.
2025-03-15 00:03:02 +01:00
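A hypothetical sketch of the prefix matching behind `ignore_fields=foo.*,bar`; the helper name and exact logic are assumptions:

```go
package ignoresketch

import "strings"

// shouldIgnore reports whether the field name matches one of the
// ignore_fields entries; entries ending with '*' act as prefixes.
func shouldIgnore(name string, ignoreFields []string) bool {
	for _, f := range ignoreFields {
		if strings.HasSuffix(f, "*") {
			if strings.HasPrefix(name, strings.TrimSuffix(f, "*")) {
				return true
			}
		} else if name == f {
			return true
		}
	}
	return false
}
```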
Aliaksandr Valialkin
f874a3aa7b
lib/logstorage: show a link to query options docs in the error message emitted during failure to parse query options
This should help the user figure out and fix the error.
2025-03-15 00:03:02 +01:00
hagen1778
972f14d540
docs: add vmsingle to affected components
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 19:12:05 +01:00
hagen1778
f62b690599
changelog: mention in update notes
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8501
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 19:04:00 +01:00
Roman Khavronenko
dc1f7ef0d0
app/vmselect/promql: optimize binary operator or for common cases ()
The optimization touches two things:
1. It reduces the amount of allocations when comparing canonical metric names
between the left and right parts of expressions.
2. It adds a fast path for cases when the right part of the expression returns a
scalar: `series_selector or on() vector(1)`, which is a typical
expression.

```
benchcmp old.txt new.txt
benchcmp is deprecated in favor of benchstat: https://pkg.go.dev/golang.org/x/perf/cmd/benchstat
benchmark                                             old ns/op     new ns/op     delta
BenchmarkBinaryOpOr/tss:1_or_tss:1-14                 291           272           -6.56%
BenchmarkBinaryOpOr/tss:1_or_tss:1000-14              44590         28592         -35.88%
BenchmarkBinaryOpOr/tss:1000_or_tss:1-14              103124        39563         -61.64%
BenchmarkBinaryOpOr/tss:1000_or_tss:1000-14           20386150      1859335       -90.88%
BenchmarkBinaryOpOr/tss:1000_or_on()_vector(0)-14     91382         36805         -59.72%
```

https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8382

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 12:07:22 +01:00
hagen1778
8b0129f29b
docs: re-organize order of items in vmagent docs
* tie relevant functionality together
* change the hierarchy of related options to visually group them

No breaking changes to links.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 12:02:23 +01:00
Zhu Jiekun
9548b7e442
docs: revert doc change for on-disk persistence and move new content to another section ()
### Describe Your Changes

revert doc change in
815bad3687
and move new content to another section.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 11:50:17 +01:00
Roman Khavronenko
85f1bd172b
docs: re-organize docs ()
* move related sections closer to each other
* group related sections within the same section

The intention of the change is to tie related documentation together.

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-14 11:47:00 +01:00
Aliaksandr Valialkin
b6e4abb31e
app/vlogsgenerator: increase write buffer size in order to reduce the number of send() syscalls
This increases the data ingestion performance that can be achieved with the vlogsgenerator.
2025-03-14 03:14:01 +01:00
Aliaksandr Valialkin
8c079602c1
lib/logstorage: optimize handling long constant fields
Long constant fields cannot be stored in columnsHeader as a const column,
because their size exceeds maxConstColumnValueSize, so they are stored as regular values.
This commit optimizes storing such fields by storing only a single value
across the field values in a block instead of storing multiple values.
This should improve data ingestion performance a bit. It should also improve query
performance when the query accesses such fields because of better cache locality.

Also improves persisting of constant string lengths by storing them only once.
2025-03-14 03:14:01 +01:00
Aliaksandr Valialkin
974d504043
lib/logstorage: add a test for marshalUint64Block / unmarshalUint64Block 2025-03-14 03:14:00 +01:00
Aliaksandr Valialkin
c62ccf11ae
lib/logstorage: newTestLogRows: create a const column, which cannot be stored in the column header because its length exceeds maxConstColumnValueSize 2025-03-14 03:14:00 +01:00
f41gh7
5301af33c0
app/vmselect: properly cancel multitenant query request
Previously, vmselect didn't stop multitenant query execution if it
received an error from vmstorage, such as a limit error or any other. It
continued to execute queries for all tenants, which leads
to a potential waste of resources.
 In addition, the callback error was incorrectly referenced and could be updated by
a subsequent callback call.

This commit returns the error earlier, cancels subsequent requests for
tenants, and properly returns the storageNode request error.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8461
2025-03-14 00:52:01 +01:00
Andrii Chubatiuk
37ed1842ab
lib/protoparser: support zstd in all logs http ingestion, datadog and otel metrics protocols ()
This commit introduces common readers for multiple compression encoding algorithms.

Currently, supported encodings are:
* zstd
* gzip
* deflate
* snappy

 It adds the new common reader to all VictoriaLogs ingestion protocols
and updates OpenTelemetry metrics parsing for VictoriaMetrics components.

Also, it ports the zstd stream parser from the cluster branch.

Related issues:
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8380
fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8300

---------
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: f41gh7 <nik@victoriametrics.com>
2025-03-14 00:39:52 +01:00
Zhu Jiekun
815bad3687
app/vmagent: prevent dropping persistent queue if -remoteWrite.showURL changed
Previously, if the command-line flag value `-remoteWrite.showURL` changed, vmagent dropped the content of its persistent queues. This is not expected behavior and may lead to data loss in the queue.
 Furthermore, if the command-line flag `-remoteWrite.showURL` is set to `true`, any change to the URL query arguments leads to a persistent queue drop. The most common cases are the Kafka and GCP Pub/Sub integrations, which use URL query arguments for client configuration.
 It also complicates copying the content of a persistent queue between vmagents, since that requires properly changing the name inside metainfo.json.

 This commit removes the persistent queue name equality check from `lib/persistentqueue`. This check was added as additional protection from on-disk data corruption.
 It's safe to skip this check for vmagent, because vmagent encodes remoteWrite.url as part of the path to the queue. This guarantees that there will be no collision.

related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8477.


### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: f41gh7 <nik@victoriametrics.com>
2025-03-13 23:50:01 +01:00
Andrii Chubatiuk
6e34ea62c7
lib/awsapi: add EKS Pod Identity auth method
AWS introduced a new secure way for Kubernetes Pod authorization against the AWS API.
The feature is called Pod Identity.
 It adds the following env variables to the Pod:
* AWS_CONTAINER_CREDENTIALS_FULL_URI - endpoint URI served by the EKS Pod Identity Agent running on the worker node.
* AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE - path to a projected JWT token that is exchanged for IAM credentials.

See related blog post https://aws.amazon.com/blogs/containers/amazon-eks-pod-identity-a-new-way-for-applications-on-eks-to-obtain-iam-credentials/

related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5780
2025-03-13 23:39:19 +01:00
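An assumed sketch of how a client could pick up the Pod Identity environment variables listed above; the actual lib/awsapi credential exchange is more involved:

```go
package podidentitysketch

import (
	"fmt"
	"os"
)

// podIdentityConfig returns the credentials endpoint URI and the projected
// JWT token injected by the EKS Pod Identity Agent, if both are present.
func podIdentityConfig() (uri string, token []byte, err error) {
	uri = os.Getenv("AWS_CONTAINER_CREDENTIALS_FULL_URI")
	tokenFile := os.Getenv("AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE")
	if uri == "" || tokenFile == "" {
		return "", nil, fmt.Errorf("EKS Pod Identity env vars are not set")
	}
	token, err = os.ReadFile(tokenFile)
	return uri, token, err
}
```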
Zakhar Bessarab
7dfdee5709
lib/httputils: always set up TLS config
Previously, the TLS config was only created for URLs with the `https` scheme.
This could lead to unexpected errors when the original URL redirected
to an `https` one, as the TLS config was not applied.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8494
2025-03-13 23:27:49 +01:00
Artem Fetishev
d48b70a5d3
apptest: Add the support of forced merge to vmsingle and vmstorage
This support is already present in enterprise.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-13 18:05:54 +01:00
Artem Fetishev
aad79e574a
lib/storage: Rewrite deduplication integration test with forced merge and retries
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-13 17:59:10 +01:00
Artem Fetishev
b2d2315c39
lib/storage: Deduplication integration test ()
Add an integration test to confirm that deduplication works for the
current month. See .

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-13 17:07:02 +01:00
Artem Fetishev
c218fa7b29
lib/storage: increment indexdb refcount during data ingestion and retrieval ()
Almost all storage API operations, both ingestion and retrieval, involve
writing and/or reading the indexdb. However, during these operations,
the indexdb refcount is not incremented. This may lead to panics if
indexdb is rotated more than once during these operations.

This commit increments the refcount before using indexdb and decrements it
after use.

Note that rotating indexdb more than once during some operation is an
impossible case under normal circumstances as the min retention period
is 1 day (i.e. the indexdb will be rotated once per day). However, we
want the storage to behave correctly in all cases.

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-13 11:15:43 +01:00
Aliaksandr Valialkin
dfc22950bd
deployment: update VictoriaLogs Docker image from v1.15.0-victorialogs to v1.16.0-victorialogs
See https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.16.0-victorialogs
2025-03-12 23:48:26 +01:00
Aliaksandr Valialkin
4b52f7973d
docs/VictoriaLogs/CHANGELOG.md: cut v1.16.0-victorialogs 2025-03-12 23:41:16 +01:00
Aliaksandr Valialkin
b1fab92d1f
vendor: run make vendor-update 2025-03-12 22:40:42 +01:00
Aliaksandr Valialkin
19165c436f
app/vlinsert/loki: automatically parse JSON-encoded log fields from the plaintext log message
Loki doesn't support high-cardinality log fields well (e.g. fields with a big number of unique values).
That's why Promtail, Grafana Agent and Grafana Alloy encode such fields into JSON and push them
as a plaintext log message to the remote storage. This isn't an efficient way to store high-cardinality
log fields in VictoriaLogs, since it is optimized for storing and querying such fields when they are stored
distinctly as regular log fields according to the VictoriaLogs data model ( https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model ).

This commit enables automatic parsing of JSON-encoded log fields in plaintext log messages received over the Loki protocol
and storing them as separate log fields. This should improve the data compression ratio and reduce disk space usage.
It should also improve query performance when the parsed log fields are used in queries for filtering and aggregation.
The old behaviour can be restored by passing the -loki.disableMessageParsing command-line flag to VictoriaLogs.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8486
2025-03-12 21:23:28 +01:00
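An illustrative sketch (assumed helper, not the actual vlinsert code) of turning a JSON-encoded plaintext message into separate log fields:

```go
package lokisketch

import (
	"encoding/json"
	"fmt"
)

// tryParseJSONMessage returns the fields encoded in msg, or nil if msg
// isn't a JSON object and should be kept as a plain _msg value.
func tryParseJSONMessage(msg string) map[string]string {
	var m map[string]any
	if err := json.Unmarshal([]byte(msg), &m); err != nil {
		return nil
	}
	fields := make(map[string]string, len(m))
	for k, v := range m {
		fields[k] = fmt.Sprint(v)
	}
	return fields
}
```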
Artem Fetishev
8d177f06da
lib/storage: a followup for ee66d601b4: enable cluster integration tests
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-12 18:13:24 +01:00
Artem Fetishev
ee66d601b4
lib/storage: fix active timeseries collection when per-day index is disabled ()
Fix the metric that shows the number of active time series when the per-day index is disabled. Previously, once the per-day index was disabled, the active time series metric would stop being populated and the `Active time series` chart would show 0.

See: https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8411.
2025-03-12 17:12:09 +01:00
Aliaksandr Valialkin
5e231fe07b
Makefile: update golangci-lint from v1.64.5 to v1.64.7
See https://github.com/golangci/golangci-lint/releases/tag/v1.64.7
2025-03-12 16:35:11 +01:00
Aliaksandr Valialkin
cb240eda70
app/vlinsert: follow-up for 67f8fa66ed
- Properly handle negative timestamps (e.g. timestamps before 1970-01-01)

- Optimize parsing floating-point timestamps by eliminating the memory allocation
  needed for returning an error from strconv.ParseInt. Instead, check whether the string contains a dot,
  and then parse it as a floating-point number.

- Add tests for ParseUnixTimestamp function.

- Make the code easier to understand and maintain by removing unneeded generic function toNano().

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8470
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8472
2025-03-12 16:28:40 +01:00
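A simplified sketch of the dot-check approach described above (assumed, not the exact ParseUnixTimestamp code):

```go
package timestampsketch

import (
	"strconv"
	"strings"
)

// parseTimestampValue checks for a dot first, so integer timestamps never
// reach strconv.ParseFloat and floating-point timestamps never trigger the
// error allocation inside strconv.ParseInt.
func parseTimestampValue(s string) (float64, error) {
	if strings.Contains(s, ".") {
		return strconv.ParseFloat(s, 64)
	}
	n, err := strconv.ParseInt(s, 10, 64)
	return float64(n), err
}
```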
Aliaksandr Valialkin
02bef62a66
docs/VictoriaLogs/CHANGELOG.md: move the description of the fix for the proper OpenTelemetry attributes conversion into JSON into the correct place
The bugfix isn't released yet, so move it from v1.15.0-victorialogs release to the tip.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8384
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8387

This is a follow-up for 26fba57cfa
2025-03-12 15:57:10 +01:00
Aliaksandr Valialkin
4d44c3e154
lib/logstorage: properly parse floating-point numbers with leading zeroes in fractional part
Parsing of floating-point numbers with leading zeroes in the fractional part, such as 1.023 and 1.00234, has been broken
since commit ae5e28524e .

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8464
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8361
2025-03-12 15:25:58 +01:00
Emre Yazıcı
cfd2c6e5e7
app/vmalert: add vmalert_alerts_send_duration_seconds metric ()
### Describe Your Changes

Add the `vmalert_alerts_send_latency_seconds` metric for
alertmanager.notifier.

It measures the time alertmanager calls take to send alerts, per notifier.
This is needed to see the latency of each notifier from vmalert calls.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: emreya <e.yazici1990@gmail.com>
Co-authored-by: Hui Wang <haley@victoriametrics.com>
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-03-12 14:30:54 +01:00
alicja-karasiewicz
d47d329ce7
feat: make topN limit configurable from CLI
Implement changes mentioned in 

Allow the administrator to specify the limit of returned TSDB series in
`/api/v1/status/tsdb` by making a TopN limit configurable from CLI.

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: alicja-karasiewicz <alicja.karasiewicz@allegro.com>
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
Co-authored-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2025-03-12 11:33:30 +04:00
Nikolay
11436d5f00
docs: update metric names stats description ()
* add the version since which the feature is available
* add cluster endpoint paths

### Describe Your Changes

Please provide a brief description of the changes you made. Be as
specific as possible to help others understand the purpose and impact of
your modifications.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-03-12 11:01:24 +04:00
Evgeny
486b9e1c64
lib/promscrape: use original job name as scrapePool value in targets api ()
### Fix scrapePool name

If, in the scrape file, I do some magic and manipulate the job name, then
Prometheus will show scrapePool as the original job name in the targets
API, but vmagent will set it to the final value, which is wrong.
Example:
```
job: consul-targets
...

- source_labels: [ __meta_consul_service ]
      regex: (\w+)[_-]exporter
      target_label: job
      replacement: $1
```

curl to prom API will show
`"scrapePool": "consul-targets",`
vmagent:
`""scrapePool": "node",`

before changes:
```
curl -s 'http://localhost:8429/api/v1/targets' | jq -r '.data.activeTargets[].scrapePool'| sort|uniq
blackbox
pgbackrest
postgres
```
after changes
```
curl -s 'http://localhost:8429/api/v1/targets' | jq -r '.data.activeTargets[].scrapePool'| sort|uniq
blackbox
consul-targets
```

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-11 13:11:35 +01:00
Roman Khavronenko
18d6c715ac
vendor: bump go-control-plane/envoy to v1.32.4
Solves the following error:
verifying github.com/envoyproxy/go-control-plane/envoy@v1.32.3/go.mod:
checksum mismatch

See https://github.com/envoyproxy/go-control-plane/issues/1083
2025-03-11 11:11:43 +01:00
hagen1778
6a1c70115a
docs: order releases in changelog by their version
Ordering changes by release versions enhances the searchability
of the documentation. For example, tracking which release got
the bugfix becomes easier if releases are already sorted.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-11 10:11:58 +01:00
Alexander Marshalov
9007a4803c
vmcloud docs: information about new APIs in Cloud Public API: cloud providers, regions, tiers, deployments and access tokens. ()
2025-03-10 20:33:42 +01:00
Jose Gómez-Sellés
0873d1d8ab
docs/cloud: add account management section ()
### Describe Your Changes

This PR updates the documentation by removing old assets and adding the
user management chapter, divided into different sections.

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-10 20:28:55 +01:00
Aliaksandr Valialkin
edecc433ff
deployment: update Go builder from Go1.24.0 to Go1.24.1
See https://github.com/golang/go/issues?q=milestone%3AGo1.24.1+label%3ACherryPickApproved
2025-03-10 18:59:59 +01:00
Artem Fetishev
77b08cdbb0
docs: update vm apps versions to the v1.113.0 release
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-10 15:17:14 +01:00
Artem Fetishev
44b0466281
docs/changelog: mention LTS releases
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-10 14:45:55 +01:00
Naveen
179c530095
docs: update vlogs README.md ()
### Describe Your Changes

Fixed the typo in the documentation. Updated `Ir provides` to `It
provides`

### Checklist

The following checks are **mandatory**:

- [x] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-10 13:52:43 +01:00
Andrii Chubatiuk
c174a046e2
lib/streamaggr: fixed streamaggr panic ()
### Describe Your Changes

fixes 

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).
2025-03-10 13:50:55 +01:00
Andrii Chubatiuk
67f8fa66ed
app/vlinsert: support floats for elasticseach timestamps ()
### Describe Your Changes

fixes https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8470

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres [VictoriaMetrics contributing
guidelines](https://docs.victoriametrics.com/contributing/).

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-03-10 13:48:48 +01:00
2015 changed files with 105839 additions and 27215 deletions

View file

@ -8,7 +8,7 @@ body:
Before filling a bug report it would be great to [upgrade](https://docs.victoriametrics.com/#how-to-upgrade)
to [the latest available release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest)
and verify whether the bug is reproducible there.
It's also recommended to read the [troubleshooting docs](https://docs.victoriametrics.com/troubleshooting/) first.
It's also recommended to read the [troubleshooting docs](https://docs.victoriametrics.com/victoriametrics/troubleshooting/) first.
- type: textarea
id: describe-the-bug
attributes:

View file

@ -24,9 +24,9 @@ body:
label: Troubleshooting docs
description: I am familiar with the following troubleshooting docs
options:
- label: General - https://docs.victoriametrics.com/troubleshooting/
- label: General - https://docs.victoriametrics.com/victoriametrics/troubleshooting/
required: false
- label: vmagent - https://docs.victoriametrics.com/vmagent/#troubleshooting
- label: vmagent - https://docs.victoriametrics.com/victoriametrics/vmagent/#troubleshooting
required: false
- label: vmalert - https://docs.victoriametrics.com/vmalert/#troubleshooting
- label: vmalert - https://docs.victoriametrics.com/victoriametrics/vmalert/#troubleshooting
required: false

View file

@ -6,4 +6,4 @@ Please provide a brief description of the changes you made. Be as specific as po
The following checks are **mandatory**:
- [ ] My change adheres [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/contributing/).
- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/contributing/).

View file

@ -40,9 +40,8 @@ jobs:
- name: Copy docs
id: update
run: |
rsync -zarv \
--exclude="Makefile" \
docs/ ../__vm-docs/content/victoriametrics
find docs -type d -maxdepth 1 -mindepth 1 -exec \
sh -c 'rsync -zarvh --delete {}/ ../__vm-docs/content/$(basename {})/' \;
echo "SHORT_SHA=$(git rev-parse --short $GITHUB_SHA)" >> $GITHUB_OUTPUT
working-directory: __vm

1
.gitignore vendored
View file

@ -27,3 +27,4 @@ _site
coverage.txt
cspell.json
*~
deployment/docker/provisioning/plugins/

View file

@ -18,7 +18,7 @@ TAR_OWNERSHIP ?= --owner=1000 --group=1000
.PHONY: $(MAKECMDGOALS)
include app/*/Makefile
include cspell/Makefile
include codespell/Makefile
include docs/Makefile
include deployment/*/Makefile
include dashboards/Makefile
@ -504,7 +504,7 @@ fmt:
gofmt -l -w -s ./apptest
vet:
go vet ./lib/...
GOEXPERIMENT=synctest go vet ./lib/...
go vet ./app/...
go vet ./apptest/...
@ -513,35 +513,35 @@ check-all: fmt vet golangci-lint govulncheck
clean-checkers: remove-golangci-lint remove-govulncheck
test:
go test ./lib/... ./app/...
GOEXPERIMENT=synctest go test ./lib/... ./app/...
test-race:
go test -race ./lib/... ./app/...
GOEXPERIMENT=synctest go test -race ./lib/... ./app/...
test-pure:
CGO_ENABLED=0 go test ./lib/... ./app/...
GOEXPERIMENT=synctest CGO_ENABLED=0 go test ./lib/... ./app/...
test-full:
go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
GOEXPERIMENT=synctest go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
test-full-386:
GOARCH=386 go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
GOEXPERIMENT=synctest GOARCH=386 go test -coverprofile=coverage.txt -covermode=atomic ./lib/... ./app/...
integration-test: victoria-metrics vmagent vmalert vmauth
go test ./apptest/... -skip="^TestCluster.*"
benchmark:
go test -bench=. ./lib/...
GOEXPERIMENT=synctest go test -bench=. ./lib/...
go test -bench=. ./app/...
benchmark-pure:
CGO_ENABLED=0 go test -bench=. ./lib/...
GOEXPERIMENT=synctest CGO_ENABLED=0 go test -bench=. ./lib/...
CGO_ENABLED=0 go test -bench=. ./app/...
vendor-update:
go get -u ./lib/...
go get -u ./app/...
go mod tidy -compat=1.23
go mod tidy -compat=1.24
go mod vendor
app-local:
@ -564,10 +564,10 @@ install-qtc:
golangci-lint: install-golangci-lint
golangci-lint run
GOEXPERIMENT=synctest golangci-lint run
install-golangci-lint:
which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.64.5
which golangci-lint || curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(shell go env GOPATH)/bin v1.64.7
remove-golangci-lint:
rm -rf `which golangci-lint`

View file

@ -3,7 +3,6 @@ package datadog
import (
"bytes"
"fmt"
"io"
"net/http"
"strconv"
"time"
@ -11,20 +10,22 @@ import (
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var (
datadogStreamFields = flagutil.NewArrayString("datadog.streamFields", "Datadog tags to be used as stream fields.")
datadogIgnoreFields = flagutil.NewArrayString("datadog.ignoreFields", "Datadog tags to ignore.")
datadogStreamFields = flagutil.NewArrayString("datadog.streamFields", "Comma-separated list of fields to use as log stream fields for logs ingested via DataDog protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#stream-fields")
datadogIgnoreFields = flagutil.NewArrayString("datadog.ignoreFields", "Comma-separated list of fields to ignore for logs ingested via DataDog protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/datadog-agent/#dropping-fields")
maxRequestSize = flagutil.NewBytes("datadog.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single DataDog request")
)
var parserPool fastjson.ParserPool
@ -46,7 +47,6 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
w.Header().Add("Content-Type", "application/json")
startTime := time.Now()
v2LogsRequestsTotal.Inc()
reader := r.Body
var ts int64
if tsValue := r.Header.Get("dd-message-timestamp"); tsValue != "" && tsValue != "0" {
@ -61,25 +61,7 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
ts = startTime.UnixNano()
}
if r.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(reader)
if err != nil {
httpserver.Errorf(w, r, "cannot read gzipped logs request: %s", err)
return true
}
defer common.PutGzipReader(zr)
reader = zr
}
wcr := writeconcurrencylimiter.GetReader(reader)
data, err := io.ReadAll(wcr)
writeconcurrencylimiter.PutReader(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return true
}
cp, err := insertutils.GetCommonParams(r)
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
@ -97,11 +79,15 @@ func datadogLogsIngestion(w http.ResponseWriter, r *http.Request) bool {
return true
}
lmp := cp.NewLogMessageProcessor("datadog")
err = readLogsRequest(ts, data, lmp)
lmp.MustClose()
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("datadog", false)
err := readLogsRequest(ts, data, lmp)
lmp.MustClose()
return err
})
if err != nil {
logger.Warnf("cannot decode log message in /api/v2/logs request: %s, stream fields: %s", err, cp.StreamFields)
httpserver.Errorf(w, r, "cannot read DataDog protocol data: %s", err)
return true
}
@ -121,7 +107,7 @@ var (
// datadog message field has two formats:
// - regular log message with string text
// - nested json format for serverless plugins
// which has folowing format:
// which has the following format:
// {"message": {"message": "text","lamdba": {"arn": "string","requestID": "string"}, "timestamp": int64} }
//
// See https://github.com/DataDog/datadog-lambda-extension/blob/28b90c7e4e985b72d60b5f5a5147c69c7ac693c4/bottlecap/src/logs/lambda/mod.rs#L24
@ -184,7 +170,7 @@ func appendMsgFields(fields []logstorage.Field, v *fastjson.Value) ([]logstorage
// readLogsRequest parses data according to DataDog logs format
// https://docs.datadoghq.com/api/latest/logs/#send-logs
func readLogsRequest(ts int64, data []byte, lmp insertutils.LogMessageProcessor) error {
func readLogsRequest(ts int64, data []byte, lmp insertutil.LogMessageProcessor) error {
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.ParseBytes(data)

View file

@ -4,7 +4,7 @@ import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestReadLogsRequestFailure(t *testing.T) {
@ -13,7 +13,7 @@ func TestReadLogsRequestFailure(t *testing.T) {
ts := time.Now().UnixNano()
lmp := &insertutils.TestLogMessageProcessor{}
lmp := &insertutil.TestLogMessageProcessor{}
if err := readLogsRequest(ts, []byte(data), lmp); err == nil {
t.Fatalf("expecting non-empty error")
}
@ -37,7 +37,7 @@ func TestReadLogsRequestSuccess(t *testing.T) {
for i := 0; i < rowsExpected; i++ {
timestampsExpected = append(timestampsExpected, ts)
}
lmp := &insertutils.TestLogMessageProcessor{}
lmp := &insertutil.TestLogMessageProcessor{}
if err := readLogsRequest(ts, []byte(data), lmp); err != nil {
t.Fatalf("unexpected error: %s", err)
}

View file

@ -10,14 +10,14 @@ import (
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bufferedwriter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
)
@ -90,7 +90,7 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
startTime := time.Now()
bulkRequestsTotal.Inc()
cp, err := insertutils.GetCommonParams(r)
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return true
@ -99,10 +99,10 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
httpserver.Errorf(w, r, "%s", err)
return true
}
lmp := cp.NewLogMessageProcessor("elasticsearch_bulk")
isGzip := r.Header.Get("Content-Encoding") == "gzip"
lmp := cp.NewLogMessageProcessor("elasticsearch_bulk", true)
encoding := r.Header.Get("Content-Encoding")
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
n, err := readBulkRequest(streamName, r.Body, isGzip, cp.TimeField, cp.MsgFields, lmp)
n, err := readBulkRequest(streamName, r.Body, encoding, cp.TimeField, cp.MsgFields, lmp)
lmp.MustClose()
if err != nil {
logger.Warnf("cannot decode log message #%d in /_bulk request: %s, stream fields: %s", n, err, cp.StreamFields)
@ -131,22 +131,19 @@ var (
bulkRequestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/elasticsearch/_bulk"}`)
)
func readBulkRequest(streamName string, r io.Reader, isGzip bool, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (int, error) {
func readBulkRequest(streamName string, r io.Reader, encoding string, timeField string, msgFields []string, lmp insertutil.LogMessageProcessor) (int, error) {
// See https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html
if isGzip {
zr, err := common.GetGzipReader(r)
if err != nil {
return 0, fmt.Errorf("cannot read gzipped _bulk request: %w", err)
}
defer common.PutGzipReader(zr)
r = zr
reader, err := protoparserutil.GetUncompressedReader(r, encoding)
if err != nil {
return 0, fmt.Errorf("cannot decode Elasticsearch protocol data: %w", err)
}
defer protoparserutil.PutUncompressedReader(reader)
wcr := writeconcurrencylimiter.GetReader(r)
wcr := writeconcurrencylimiter.GetReader(reader)
defer writeconcurrencylimiter.PutReader(wcr)
lr := insertutils.NewLineReader(streamName, wcr)
lr := insertutil.NewLineReader(streamName, wcr)
n := 0
for {
@ -159,7 +156,7 @@ func readBulkRequest(streamName string, r io.Reader, isGzip bool, timeField stri
}
}
func readBulkLine(lr *insertutils.LineReader, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (bool, error) {
func readBulkLine(lr *insertutil.LineReader, timeField string, msgFields []string, lmp insertutil.LogMessageProcessor) (bool, error) {
var line []byte
// Read the command, must be "create" or "index"
@ -231,7 +228,7 @@ func parseElasticsearchTimestamp(s string) (int64, error) {
}
if len(s) < len("YYYY-MM-DD") || s[len("YYYY")] != '-' {
// Try parsing timestamp in seconds or milliseconds
return insertutils.ParseUnixTimestamp(s)
return insertutil.ParseUnixTimestamp(s)
}
if len(s) == len("YYYY-MM-DD") {
t, err := time.Parse("2006-01-02", s)

View file

@ -2,20 +2,24 @@ package elasticsearch
import (
"bytes"
"compress/gzip"
"fmt"
"github.com/golang/snappy"
"github.com/klauspost/compress/gzip"
"github.com/klauspost/compress/zlib"
"github.com/klauspost/compress/zstd"
"io"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestReadBulkRequest_Failure(t *testing.T) {
f := func(data string) {
t.Helper()
tlp := &insertutils.TestLogMessageProcessor{}
tlp := &insertutil.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
rows, err := readBulkRequest("test", r, false, "_time", []string{"_msg"}, tlp)
rows, err := readBulkRequest("test", r, "", "_time", []string{"_msg"}, tlp)
if err == nil {
t.Fatalf("expecting non-empty error")
}
@ -33,15 +37,15 @@ foobar`)
}
func TestReadBulkRequest_Success(t *testing.T) {
f := func(data, timeField, msgField string, timestampsExpected []int64, resultExpected string) {
f := func(data, encoding, timeField, msgField string, timestampsExpected []int64, resultExpected string) {
t.Helper()
msgFields := []string{"non_existing_foo", msgField, "non_exiting_bar"}
tlp := &insertutils.TestLogMessageProcessor{}
tlp := &insertutil.TestLogMessageProcessor{}
// Read the request without compression
r := bytes.NewBufferString(data)
rows, err := readBulkRequest("test", r, false, timeField, msgFields, tlp)
rows, err := readBulkRequest("test", r, "", timeField, msgFields, tlp)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
@ -53,10 +57,12 @@ func TestReadBulkRequest_Success(t *testing.T) {
}
// Read the request with compression
tlp = &insertutils.TestLogMessageProcessor{}
compressedData := compressData(data)
r = bytes.NewBufferString(compressedData)
rows, err = readBulkRequest("test", r, true, timeField, msgFields, tlp)
tlp = &insertutil.TestLogMessageProcessor{}
if encoding != "" {
data = compressData(data, encoding)
}
r = bytes.NewBufferString(data)
rows, err = readBulkRequest("test", r, encoding, timeField, msgFields, tlp)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
@ -69,9 +75,9 @@ func TestReadBulkRequest_Success(t *testing.T) {
}
// Verify an empty data
f("", "_time", "_msg", nil, "")
f("\n", "_time", "_msg", nil, "")
f("\n\n", "_time", "_msg", nil, "")
f("", "gzip", "_time", "_msg", nil, "")
f("\n", "gzip", "_time", "_msg", nil, "")
f("\n\n", "gzip", "_time", "_msg", nil, "")
// Verify non-empty data
data := `{"create":{"_index":"filebeat-8.8.0"}}
@ -82,20 +88,35 @@ func TestReadBulkRequest_Success(t *testing.T) {
{"message":"xyz","@timestamp":"1686026893735","x":"y"}
{"create":{"_index":"filebeat-8.8.0"}}
{"message":"qwe rty","@timestamp":"1686026893"}
{"create":{"_index":"filebeat-8.8.0"}}
{"message":"qwe rty float","@timestamp":"1686026123.62"}
`
timeField := "@timestamp"
msgField := "message"
timestampsExpected := []int64{1686026891735000000, 1686023292735000000, 1686026893735000000, 1686026893000000000}
timestampsExpected := []int64{1686026891735000000, 1686023292735000000, 1686026893735000000, 1686026893000000000, 1686026123620000000}
resultExpected := `{"log.offset":"71770","log.file.path":"/var/log/auth.log","_msg":"foobar"}
{"_msg":"baz"}
{"_msg":"xyz","x":"y"}
{"_msg":"qwe rty"}`
f(data, timeField, msgField, timestampsExpected, resultExpected)
{"_msg":"qwe rty"}
{"_msg":"qwe rty float"}`
f(data, "zstd", timeField, msgField, timestampsExpected, resultExpected)
}
func compressData(s string) string {
func compressData(s string, encoding string) string {
var bb bytes.Buffer
zw := gzip.NewWriter(&bb)
var zw io.WriteCloser
switch encoding {
case "gzip":
zw = gzip.NewWriter(&bb)
case "zstd":
zw, _ = zstd.NewWriter(&bb)
case "snappy":
zw = snappy.NewBufferedWriter(&bb)
case "deflate":
zw = zlib.NewWriter(&bb)
default:
panic(fmt.Errorf("%q encoding is not supported", encoding))
}
if _, err := zw.Write([]byte(s)); err != nil {
panic(fmt.Errorf("unexpected error when compressing data: %w", err))
}

View file

@ -5,20 +5,29 @@ import (
"fmt"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
)
func BenchmarkReadBulkRequest(b *testing.B) {
b.Run("gzip:off", func(b *testing.B) {
benchmarkReadBulkRequest(b, false)
b.Run("encoding:none", func(b *testing.B) {
benchmarkReadBulkRequest(b, "")
})
b.Run("gzip:on", func(b *testing.B) {
benchmarkReadBulkRequest(b, true)
b.Run("encoding:gzip", func(b *testing.B) {
benchmarkReadBulkRequest(b, "gzip")
})
b.Run("encoding:zstd", func(b *testing.B) {
benchmarkReadBulkRequest(b, "zstd")
})
b.Run("encoding:deflate", func(b *testing.B) {
benchmarkReadBulkRequest(b, "deflate")
})
b.Run("encoding:snappy", func(b *testing.B) {
benchmarkReadBulkRequest(b, "snappy")
})
}
func benchmarkReadBulkRequest(b *testing.B, isGzip bool) {
func benchmarkReadBulkRequest(b *testing.B, encoding string) {
data := `{"create":{"_index":"filebeat-8.8.0"}}
{"@timestamp":"2023-06-06T04:48:11.735Z","log":{"offset":71770,"file":{"path":"/var/log/auth.log"}},"message":"foobar"}
{"create":{"_index":"filebeat-8.8.0"}}
@ -26,14 +35,14 @@ func benchmarkReadBulkRequest(b *testing.B, isGzip bool) {
{"create":{"_index":"filebeat-8.8.0"}}
{"message":"xyz","@timestamp":"2023-06-06T04:48:13.735Z","x":"y"}
`
if isGzip {
data = compressData(data)
if encoding != "" {
data = compressData(data, encoding)
}
dataBytes := bytesutil.ToUnsafeBytes(data)
timeField := "@timestamp"
msgFields := []string{"message"}
blp := &insertutils.BenchmarkLogMessageProcessor{}
blp := &insertutil.BenchmarkLogMessageProcessor{}
b.ReportAllocs()
b.SetBytes(int64(len(data)))
@ -41,7 +50,7 @@ func benchmarkReadBulkRequest(b *testing.B, isGzip bool) {
r := &bytes.Reader{}
for pb.Next() {
r.Reset(dataBytes)
_, err := readBulkRequest("test", r, isGzip, timeField, msgFields, blp)
_, err := readBulkRequest("test", r, encoding, timeField, msgFields, blp)
if err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}

View file

@ -1,4 +1,4 @@
package insertutils
package insertutil
import (
"flag"
@ -13,7 +13,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
@ -49,13 +49,13 @@ func GetCommonParams(r *http.Request) (*CommonParams, error) {
}
timeField := "_time"
if tf := httputils.GetRequestValue(r, "_time_field", "VL-Time-Field"); tf != "" {
if tf := httputil.GetRequestValue(r, "_time_field", "VL-Time-Field"); tf != "" {
timeField = tf
}
msgFields := httputils.GetArray(r, "_msg_field", "VL-Msg-Field")
streamFields := httputils.GetArray(r, "_stream_fields", "VL-Stream-Fields")
ignoreFields := httputils.GetArray(r, "ignore_fields", "VL-Ignore-Fields")
msgFields := httputil.GetArray(r, "_msg_field", "VL-Msg-Field")
streamFields := httputil.GetArray(r, "_stream_fields", "VL-Stream-Fields")
ignoreFields := httputil.GetArray(r, "ignore_fields", "VL-Ignore-Fields")
extraFields, err := getExtraFields(r)
if err != nil {
@ -63,7 +63,7 @@ func GetCommonParams(r *http.Request) (*CommonParams, error) {
}
debug := false
if dv := httputils.GetRequestValue(r, "debug", "VL-Debug"); dv != "" {
if dv := httputil.GetRequestValue(r, "debug", "VL-Debug"); dv != "" {
debug, err = strconv.ParseBool(dv)
if err != nil {
return nil, fmt.Errorf("cannot parse debug=%q: %w", dv, err)
@ -92,7 +92,7 @@ func GetCommonParams(r *http.Request) (*CommonParams, error) {
}
func getExtraFields(r *http.Request) ([]logstorage.Field, error) {
efs := httputils.GetArray(r, "extra_fields", "VL-Extra-Fields")
efs := httputil.GetArray(r, "extra_fields", "VL-Extra-Fields")
if len(efs) == 0 {
return nil, nil
}
@ -141,7 +141,7 @@ type LogMessageProcessor interface {
//
// If streamFields is non-nil, then the given streamFields must be used as log stream fields instead of pre-configured fields.
//
// The LogMessageProcessor implementation cannot hold references to fields, since the caller can re-use them.
// The LogMessageProcessor implementation cannot hold references to fields, since the caller can reuse them.
AddRow(timestamp int64, fields, streamFields []logstorage.Field)
// MustClose() must flush all the remaining fields and free up resources occupied by LogMessageProcessor.
@ -191,9 +191,6 @@ func (lmp *logMessageProcessor) initPeriodicFlush() {
//
// If streamFields is non-nil, then it is used as log stream fields instead of the pre-configured stream fields.
func (lmp *logMessageProcessor) AddRow(timestamp int64, fields, streamFields []logstorage.Field) {
lmp.mu.Lock()
defer lmp.mu.Unlock()
lmp.rowsIngestedTotal.Inc()
n := logstorage.EstimatedJSONRowLen(fields)
lmp.bytesIngestedTotal.Add(n)
@ -205,7 +202,47 @@ func (lmp *logMessageProcessor) AddRow(timestamp int64, fields, streamFields []l
return
}
lmp.mu.Lock()
defer lmp.mu.Unlock()
lmp.lr.MustAdd(lmp.cp.TenantID, timestamp, fields, streamFields)
if lmp.cp.Debug {
s := lmp.lr.GetRowString(0)
lmp.lr.ResetKeepSettings()
logger.Infof("remoteAddr=%s; requestURI=%s; ignoring log entry because of `debug` arg: %s", lmp.cp.DebugRemoteAddr, lmp.cp.DebugRequestURI, s)
rowsDroppedTotalDebug.Inc()
return
}
if lmp.lr.NeedFlush() {
lmp.flushLocked()
}
}
// InsertRowProcessor is used by native data ingestion protocol parser.
type InsertRowProcessor interface {
// AddInsertRow must add r to the underlying storage.
AddInsertRow(r *logstorage.InsertRow)
}
// AddInsertRow adds r to lmp.
func (lmp *logMessageProcessor) AddInsertRow(r *logstorage.InsertRow) {
lmp.rowsIngestedTotal.Inc()
n := logstorage.EstimatedJSONRowLen(r.Fields)
lmp.bytesIngestedTotal.Add(n)
if len(r.Fields) > *MaxFieldsPerLine {
line := logstorage.MarshalFieldsToJSON(nil, r.Fields)
logger.Warnf("dropping log line with %d fields; it exceeds -insert.maxFieldsPerLine=%d; %s", len(r.Fields), *MaxFieldsPerLine, line)
rowsDroppedTotalTooManyFields.Inc()
return
}
lmp.mu.Lock()
defer lmp.mu.Unlock()
lmp.lr.MustAddInsertRow(r)
if lmp.cp.Debug {
s := lmp.lr.GetRowString(0)
lmp.lr.ResetKeepSettings()
@ -238,7 +275,7 @@ func (lmp *logMessageProcessor) MustClose() {
// NewLogMessageProcessor returns new LogMessageProcessor for the given cp.
//
// MustClose() must be called on the returned LogMessageProcessor when it is no longer needed.
func (cp *CommonParams) NewLogMessageProcessor(protocolName string) LogMessageProcessor {
func (cp *CommonParams) NewLogMessageProcessor(protocolName string, isStreamMode bool) LogMessageProcessor {
lr := logstorage.GetLogRows(cp.StreamFields, cp.IgnoreFields, cp.ExtraFields, *defaultMsgValue)
rowsIngestedTotal := metrics.GetOrCreateCounter(fmt.Sprintf("vl_rows_ingested_total{type=%q}", protocolName))
bytesIngestedTotal := metrics.GetOrCreateCounter(fmt.Sprintf("vl_bytes_ingested_total{type=%q}", protocolName))
@ -251,7 +288,10 @@ func (cp *CommonParams) NewLogMessageProcessor(protocolName string) LogMessagePr
stopCh: make(chan struct{}),
}
lmp.initPeriodicFlush()
if isStreamMode {
lmp.initPeriodicFlush()
}
return lmp
}
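
The new isStreamMode argument controls whether initPeriodicFlush() is started: request-scoped parsers (journald, loki, opentelemetry, /internal/insert) pass false because they create a processor per request and flush everything in MustClose(), while long-lived streams (jsonline, syslog) pass true so buffered rows are flushed periodically. A minimal sketch of the two call patterns, mirroring the call sites elsewhere in this diff (cp, data and the parse functions are assumed to be in scope):

// Request-scoped: one processor per HTTP request, closed right after parsing.
lmp := cp.NewLogMessageProcessor("journald", false)
err := parseJournaldRequest(data, lmp, cp)
lmp.MustClose()

// Long-lived stream: one processor per connection, periodic flushing enabled.
lmp = cp.NewLogMessageProcessor("syslog_tcp", true)
defer lmp.MustClose()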

View file

@ -1,4 +1,4 @@
package insertutils
package insertutil
import (
"flag"

View file

@ -1,4 +1,4 @@
package insertutils
package insertutil
import (
"bytes"

View file

@ -1,4 +1,4 @@
package insertutils
package insertutil
import (
"bytes"

View file

@ -1,4 +1,4 @@
package insertutils
package insertutil
import (
"fmt"

View file

@ -1,8 +1,10 @@
package insertutils
package insertutil
import (
"fmt"
"math"
"strconv"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
@ -49,6 +51,35 @@ func parseTimestamp(s string) (int64, error) {
// ParseUnixTimestamp parses s as unix timestamp in seconds, milliseconds, microseconds or nanoseconds and returns the parsed timestamp in nanoseconds.
func ParseUnixTimestamp(s string) (int64, error) {
if strings.IndexByte(s, '.') >= 0 {
// Parse timestamp as floating-point value
f, err := strconv.ParseFloat(s, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse unix timestamp from %q: %w", s, err)
}
if f < (1<<31) && f >= (-1<<31) {
// The timestamp is in seconds.
return int64(f * 1e9), nil
}
if f < 1e3*(1<<31) && f >= 1e3*(-1<<31) {
// The timestamp is in milliseconds.
return int64(f * 1e6), nil
}
if f < 1e6*(1<<31) && f >= 1e6*(-1<<31) {
// The timestamp is in microseconds.
return int64(f * 1e3), nil
}
// The timestamp is in nanoseconds
if f > math.MaxInt64 {
return 0, fmt.Errorf("too big timestamp in nanoseconds: %v; mustn't exceed %v", f, int64(math.MaxInt64))
}
if f < math.MinInt64 {
return 0, fmt.Errorf("too small timestamp in nanoseconds: %v; must be bigger or equal to %v", f, int64(math.MinInt64))
}
return int64(f), nil
}
// Parse timestamp as integer
n, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse unix timestamp from %q: %w", s, err)

View file

@ -1,4 +1,4 @@
package insertutils
package insertutil
import (
"testing"
@ -6,6 +6,63 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
func TestParseUnixTimestamp_Success(t *testing.T) {
f := func(s string, timestampExpected int64) {
t.Helper()
timestamp, err := ParseUnixTimestamp(s)
if err != nil {
t.Fatalf("unexpected error in ParseUnixTimestamp(%q): %s", s, err)
}
if timestamp != timestampExpected {
t.Fatalf("unexpected timestamp returned from ParseUnixTimestamp(%q); got %d; want %d", s, timestamp, timestampExpected)
}
}
f("0", 0)
// nanoseconds
f("-1234567890123456789", -1234567890123456789)
f("1234567890123456789", 1234567890123456789)
// microseconds
f("-1234567890123456", -1234567890123456000)
f("1234567890123456", 1234567890123456000)
f("1234567890123456.789", 1234567890123456768)
// milliseconds
f("-1234567890123", -1234567890123000000)
f("1234567890123", 1234567890123000000)
f("1234567890123.456", 1234567890123456000)
// seconds
f("-1234567890", -1234567890000000000)
f("1234567890", 1234567890000000000)
f("-1234567890.123456", -1234567890123456000)
}
func TestParseUnixTimestamp_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
_, err := ParseUnixTimestamp(s)
if err == nil {
t.Fatalf("expecting non-nil error in ParseUnixTimestamp(%q)", s)
}
}
// non-numeric timestamp
f("")
f("foobar")
f("foo.bar")
// too big timestamp
f("12345678901234567890")
f("-12345678901234567890")
f("12345678901234567890.235424")
f("-12345678901234567890.235424")
}
func TestExtractTimestampFromFields_Success(t *testing.T) {
f := func(timeField string, fields []logstorage.Field, nsecsExpected int64) {
t.Helper()

View file

@ -0,0 +1,96 @@
package internalinsert
import (
"flag"
"fmt"
"net/http"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage/netinsert"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var (
disableInsert = flag.Bool("internalinsert.disable", false, "Whether to disable /internal/insert HTTP endpoint")
maxRequestSize = flagutil.NewBytes("internalinsert.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single request, which can be accepted at /internal/insert HTTP endpoint")
)
// RequestHandler processes /internal/insert requests.
func RequestHandler(w http.ResponseWriter, r *http.Request) {
if *disableInsert {
httpserver.Errorf(w, r, "requests to /internal/insert are disabled with -internalinsert.disable command-line flag")
return
}
startTime := time.Now()
if r.Method != "POST" {
w.WriteHeader(http.StatusMethodNotAllowed)
return
}
version := r.FormValue("version")
if version != netinsert.ProtocolVersion {
httpserver.Errorf(w, r, "unsupported protocol version=%q; want %q", version, netinsert.ProtocolVersion)
return
}
requestsTotal.Inc()
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
if err := vlstorage.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("internalinsert", false)
irp := lmp.(insertutil.InsertRowProcessor)
err := parseData(irp, data)
lmp.MustClose()
return err
})
if err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "cannot parse internal insert request: %s", err)
return
}
requestDuration.UpdateDuration(startTime)
}
func parseData(irp insertutil.InsertRowProcessor, data []byte) error {
r := logstorage.GetInsertRow()
src := data
i := 0
for len(src) > 0 {
tail, err := r.UnmarshalInplace(src)
if err != nil {
return fmt.Errorf("cannot parse row #%d: %s", i, err)
}
src = tail
i++
irp.AddInsertRow(r)
}
logstorage.PutInsertRow(r)
return nil
}
var (
requestsTotal = metrics.NewCounter(`vl_http_requests_total{path="/internal/insert"}`)
errorsTotal = metrics.NewCounter(`vl_http_errors_total{path="/internal/insert"}`)
requestDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/internal/insert"}`)
)
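
The endpoint accepts only POST requests whose version query arg equals netinsert.ProtocolVersion, honors Content-Encoding via protoparserutil.ReadUncompressedData, and expects the body to be a back-to-back sequence of rows decoded by logstorage.InsertRow.UnmarshalInplace. A hypothetical client-side sketch for illustration only (the real sender lives in app/vlstorage/netinsert; the port and the "v1" version value below are placeholders, not values confirmed by this diff):

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// body must carry rows marshaled in the native insert format
	// (logstorage.InsertRow); it is left empty here for illustration.
	var body []byte

	// The version value must equal netinsert.ProtocolVersion; "v1" is a placeholder.
	req, err := http.NewRequest("POST", "http://localhost:9428/internal/insert?version=v1", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	// Set Content-Encoding only if the body is actually compressed;
	// it is read by protoparserutil.ReadUncompressedData on the server side.

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}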

View file

@ -5,7 +5,6 @@ import (
"encoding/binary"
"flag"
"fmt"
"io"
"net/http"
"regexp"
"slices"
@ -13,39 +12,37 @@ import (
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
const (
journaldEntryMaxNameLen = 64
)
// See https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-journal/journal-file.c#L1703
const journaldEntryMaxNameLen = 64
var allowedJournaldEntryNameChars = regexp.MustCompile(`^[A-Z_][A-Z0-9_]*`)
var (
bodyBufferPool bytesutil.ByteBufferPool
allowedJournaldEntryNameChars = regexp.MustCompile(`^[A-Z_][A-Z0-9_]*`)
)
var (
journaldStreamFields = flagutil.NewArrayString("journald.streamFields", "Journal fields to be used as stream fields. "+
"See the list of allowed fields at https://www.freedesktop.org/software/systemd/man/latest/systemd.journal-fields.html.")
journaldIgnoreFields = flagutil.NewArrayString("journald.ignoreFields", "Journal fields to ignore. "+
"See the list of allowed fields at https://www.freedesktop.org/software/systemd/man/latest/systemd.journal-fields.html.")
journaldTimeField = flag.String("journald.timeField", "__REALTIME_TIMESTAMP", "Journal field to be used as time field. "+
"See the list of allowed fields at https://www.freedesktop.org/software/systemd/man/latest/systemd.journal-fields.html.")
journaldTenantID = flag.String("journald.tenantID", "0:0", "TenantID for logs ingested via the Journald endpoint.")
journaldStreamFields = flagutil.NewArrayString("journald.streamFields", "Comma-separated list of fields to use as log stream fields for logs ingested over journald protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#stream-fields")
journaldIgnoreFields = flagutil.NewArrayString("journald.ignoreFields", "Comma-separated list of fields to ignore for logs ingested over journald protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#dropping-fields")
journaldTimeField = flag.String("journald.timeField", "__REALTIME_TIMESTAMP", "Field to use as a log timestamp for logs ingested via journald protocol. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#time-field")
journaldTenantID = flag.String("journald.tenantID", "0:0", "TenantID for logs ingested via the Journald endpoint. "+
"See https://docs.victoriametrics.com/victorialogs/data-ingestion/journald/#multitenancy")
journaldIncludeEntryMetadata = flag.Bool("journald.includeEntryMetadata", false, "Include journal entry fields, which start with double underscores.")
maxRequestSize = flagutil.NewBytes("journald.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single journald request")
)
func getCommonParams(r *http.Request) (*insertutils.CommonParams, error) {
cp, err := insertutils.GetCommonParams(r)
func getCommonParams(r *http.Request) (*insertutil.CommonParams, error) {
cp, err := insertutil.GetCommonParams(r)
if err != nil {
return nil, err
}
@ -89,43 +86,29 @@ func handleJournald(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsJournaldTotal.Inc()
if err := vlstorage.CanWriteData(); err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
reader := r.Body
var err error
wcr := writeconcurrencylimiter.GetReader(reader)
data, err := io.ReadAll(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return
}
writeconcurrencylimiter.PutReader(wcr)
bb := bodyBufferPool.Get()
defer bodyBufferPool.Put(bb)
if r.Header.Get("Content-Encoding") == "zstd" {
bb.B, err = zstd.Decompress(bb.B[:0], data)
if err != nil {
httpserver.Errorf(w, r, "cannot decompress zstd-encoded request with length %d: %s", len(data), err)
return
}
data = bb.B
}
cp, err := getCommonParams(r)
if err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
}
lmp := cp.NewLogMessageProcessor("journald")
err = parseJournaldRequest(data, lmp, cp)
lmp.MustClose()
if err := vlstorage.CanWriteData(); err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "%s", err)
return
}
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("journald", false)
err := parseJournaldRequest(data, lmp, cp)
lmp.MustClose()
return err
})
if err != nil {
errorsTotal.Inc()
httpserver.Errorf(w, r, "cannot parse Journald protobuf request: %s", err)
httpserver.Errorf(w, r, "cannot read journald protocol data: %s", err)
return
}
@ -148,7 +131,7 @@ var (
)
// See https://systemd.io/JOURNAL_EXPORT_FORMATS/#journal-export-format
func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *insertutils.CommonParams) error {
func parseJournaldRequest(data []byte, lmp insertutil.LogMessageProcessor, cp *insertutil.CommonParams) error {
var fields []logstorage.Field
var ts int64
var size uint64
@ -198,7 +181,7 @@ func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *
if err != nil {
return fmt.Errorf("failed to extract binary field %q value size: %w", name, err)
}
// skip binary data sise
// skip binary data size
data = data[idx:]
if size == 0 {
return fmt.Errorf("unexpected zero binary data size decoded %d", size)
@ -218,7 +201,6 @@ func parseJournaldRequest(data []byte, lmp insertutils.LogMessageProcessor, cp *
}
data = data[1:]
}
// https://github.com/systemd/systemd/blob/main/src/libsystemd/sd-journal/journal-file.c#L1703
if len(name) > journaldEntryMaxNameLen {
return fmt.Errorf("journald entry name should not exceed %d symbols, got: %q", journaldEntryMaxNameLen, name)
}

View file

@ -3,14 +3,14 @@ package journald
import (
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestPushJournaldOk(t *testing.T) {
f := func(src string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &insertutils.TestLogMessageProcessor{}
cp := &insertutils.CommonParams{
tlp := &insertutil.TestLogMessageProcessor{}
cp := &insertutil.CommonParams{
TimeField: "__REALTIME_TIMESTAMP",
MsgFields: []string{"MESSAGE"},
}
@ -44,8 +44,8 @@ func TestPushJournaldOk(t *testing.T) {
func TestPushJournald_Failure(t *testing.T) {
f := func(data string) {
t.Helper()
tlp := &insertutils.TestLogMessageProcessor{}
cp := &insertutils.CommonParams{
tlp := &insertutil.TestLogMessageProcessor{}
cp := &insertutil.CommonParams{
TimeField: "__REALTIME_TIMESTAMP",
MsgFields: []string{"MESSAGE"},
}

View file

@ -6,12 +6,12 @@ import (
"net/http"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
@ -28,7 +28,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) {
requestsTotal.Inc()
cp, err := insertutils.GetCommonParams(r)
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
@ -38,18 +38,15 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) {
return
}
reader := r.Body
if r.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(reader)
if err != nil {
logger.Errorf("cannot read gzipped jsonline request: %s", err)
return
}
defer common.PutGzipReader(zr)
reader = zr
encoding := r.Header.Get("Content-Encoding")
reader, err := protoparserutil.GetUncompressedReader(r.Body, encoding)
if err != nil {
logger.Errorf("cannot decode jsonline request: %s", err)
return
}
defer protoparserutil.PutUncompressedReader(reader)
lmp := cp.NewLogMessageProcessor("jsonline")
lmp := cp.NewLogMessageProcessor("jsonline", true)
streamName := fmt.Sprintf("remoteAddr=%s, requestURI=%q", httpserver.GetQuotedRemoteAddr(r), r.RequestURI)
processStreamInternal(streamName, reader, cp.TimeField, cp.MsgFields, lmp)
lmp.MustClose()
@ -57,11 +54,11 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) {
requestDuration.UpdateDuration(startTime)
}
func processStreamInternal(streamName string, r io.Reader, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) {
func processStreamInternal(streamName string, r io.Reader, timeField string, msgFields []string, lmp insertutil.LogMessageProcessor) {
wcr := writeconcurrencylimiter.GetReader(r)
defer writeconcurrencylimiter.PutReader(wcr)
lr := insertutils.NewLineReader(streamName, wcr)
lr := insertutil.NewLineReader(streamName, wcr)
n := 0
for {
@ -78,7 +75,7 @@ func processStreamInternal(streamName string, r io.Reader, timeField string, msg
}
}
func readLine(lr *insertutils.LineReader, timeField string, msgFields []string, lmp insertutils.LogMessageProcessor) (bool, error) {
func readLine(lr *insertutil.LineReader, timeField string, msgFields []string, lmp insertutil.LogMessageProcessor) (bool, error) {
var line []byte
for len(line) == 0 {
if !lr.NextLine() {
@ -94,7 +91,7 @@ func readLine(lr *insertutils.LineReader, timeField string, msgFields []string,
if err := p.ParseLogMessage(line); err != nil {
return true, fmt.Errorf("cannot parse json-encoded line: %w; line contents: %q", err, line)
}
ts, err := insertutils.ExtractTimestampFromFields(timeField, p.Fields)
ts, err := insertutil.ExtractTimestampFromFields(timeField, p.Fields)
if err != nil {
return true, fmt.Errorf("cannot get timestamp from json-encoded line: %w; line contents: %q", err, line)
}

View file

@ -4,7 +4,7 @@ import (
"bytes"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestProcessStreamInternal(t *testing.T) {
@ -12,7 +12,7 @@ func TestProcessStreamInternal(t *testing.T) {
t.Helper()
msgFields := []string{msgField}
tlp := &insertutils.TestLogMessageProcessor{}
tlp := &insertutil.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
processStreamInternal("test", r, timeField, msgFields, tlp)

View file

@ -1,12 +1,18 @@
package loki
import (
"flag"
"fmt"
"net/http"
"strconv"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
var disableMessageParsing = flag.Bool("loki.disableMessageParsing", false, "Whether to disable automatic parsing of JSON-encoded log fields inside Loki log message into distinct log fields")
// RequestHandler processes Loki insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
@ -35,8 +41,17 @@ func handleInsert(r *http.Request, w http.ResponseWriter) {
}
}
func getCommonParams(r *http.Request) (*insertutils.CommonParams, error) {
cp, err := insertutils.GetCommonParams(r)
type commonParams struct {
cp *insertutil.CommonParams
// Whether to parse JSON inside plaintext log message.
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8486
parseMessage bool
}
func getCommonParams(r *http.Request) (*commonParams, error) {
cp, err := insertutil.GetCommonParams(r)
if err != nil {
return nil, err
}
@ -55,5 +70,17 @@ func getCommonParams(r *http.Request) (*insertutils.CommonParams, error) {
}
return cp, nil
parseMessage := !*disableMessageParsing
if rv := httputil.GetRequestValue(r, "disable_message_parsing", "VL-Loki-Disable-Message-Parsing"); rv != "" {
bv, err := strconv.ParseBool(rv)
if err != nil {
return nil, fmt.Errorf("cannot parse dusable_message_parsing=%q: %s", rv, err)
}
parseMessage = !bv
}
return &commonParams{
cp: cp,
parseMessage: parseMessage,
}, nil
}
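
Message parsing can therefore be disabled globally with -loki.disableMessageParsing or toggled per request. A hedged fragment showing the per-request override (the host and port are illustrative; either the query arg or the header alone is enough, since both are read by httputil.GetRequestValue in getCommonParams above):

// Disable JSON message parsing for a single Loki push request.
req, err := http.NewRequest("POST", "http://localhost:9428/insert/loki/api/v1/push?disable_message_parsing=true", body)
if err != nil {
	return err
}
req.Header.Set("VL-Loki-Disable-Message-Parsing", "true")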

View file

@ -2,47 +2,28 @@ package loki
import (
"fmt"
"io"
"math"
"net/http"
"strconv"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var maxRequestSize = flagutil.NewBytes("loki.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single Loki request")
var parserPool fastjson.ParserPool
func handleJSON(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsJSONTotal.Inc()
reader := r.Body
if r.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(reader)
if err != nil {
httpserver.Errorf(w, r, "cannot initialize gzip reader: %s", err)
return
}
defer common.PutGzipReader(zr)
reader = zr
}
wcr := writeconcurrencylimiter.GetReader(reader)
data, err := io.ReadAll(wcr)
writeconcurrencylimiter.PutReader(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return
}
cp, err := getCommonParams(r)
if err != nil {
@ -53,12 +34,17 @@ func handleJSON(r *http.Request, w http.ResponseWriter) {
httpserver.Errorf(w, r, "%s", err)
return
}
lmp := cp.NewLogMessageProcessor("loki_json")
useDefaultStreamFields := len(cp.StreamFields) == 0
err = parseJSONRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.cp.NewLogMessageProcessor("loki_json", false)
useDefaultStreamFields := len(cp.cp.StreamFields) == 0
err := parseJSONRequest(data, lmp, cp.cp.MsgFields, useDefaultStreamFields, cp.parseMessage)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot parse Loki json request: %s; data=%s", err, data)
httpserver.Errorf(w, r, "cannot read Loki json data: %s", err)
return
}
@ -66,6 +52,9 @@ func handleJSON(r *http.Request, w http.ResponseWriter) {
// There is no need in updating requestJSONDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
requestJSONDuration.UpdateDuration(startTime)
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8505
w.WriteHeader(http.StatusNoContent)
}
var (
@ -73,9 +62,10 @@ var (
requestJSONDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="json"}`)
)
func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) error {
func parseJSONRequest(data []byte, lmp insertutil.LogMessageProcessor, msgFields []string, useDefaultStreamFields, parseMessage bool) error {
p := parserPool.Get()
defer parserPool.Put(p)
v, err := p.ParseBytes(data)
if err != nil {
return fmt.Errorf("cannot parse JSON request body: %w", err)
@ -90,11 +80,20 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
return fmt.Errorf("`streams` item in the parsed JSON must contain an array; got %q", streamsV)
}
fields := getFields()
defer putFields(fields)
var msgParser *logstorage.JSONParser
if parseMessage {
msgParser = logstorage.GetJSONParser()
defer logstorage.PutJSONParser(msgParser)
}
currentTimestamp := time.Now().UnixNano()
var commonFields []logstorage.Field
for _, stream := range streams {
// populate common labels from `stream` dict
commonFields = commonFields[:0]
fields.fields = fields.fields[:0]
labelsV := stream.Get("stream")
var labels *fastjson.Object
if labelsV != nil {
@ -110,7 +109,7 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
err = fmt.Errorf("unexpected label value type for %q:%q; want string", k, v)
return
}
commonFields = append(commonFields, logstorage.Field{
fields.fields = append(fields.fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(vStr),
})
@ -129,8 +128,10 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
return fmt.Errorf("`values` item in the parsed JSON must contain an array; got %q", linesV)
}
fields := commonFields
commonFieldsLen := len(fields.fields)
for _, line := range lines {
fields.fields = fields.fields[:commonFieldsLen]
lineA, err := line.Array()
if err != nil {
return fmt.Errorf("unexpected contents of `values` item; want array; got %q", line)
@ -152,17 +153,6 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
ts = currentTimestamp
}
// parse log message
msg, err := lineA[1].StringBytes()
if err != nil {
return fmt.Errorf("unexpected log message type for %q; want string", lineA[1])
}
fields = append(fields[:len(commonFields)], logstorage.Field{
Name: "_msg",
Value: bytesutil.ToUnsafeString(msg),
})
// parse structured metadata - see https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs
if len(lineA) > 2 {
structuredMetadata, err := lineA[2].Object()
@ -177,7 +167,7 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
return
}
fields = append(fields, logstorage.Field{
fields.fields = append(fields.fields, logstorage.Field{
Name: bytesutil.ToUnsafeString(k),
Value: bytesutil.ToUnsafeString(vStr),
})
@ -186,39 +176,51 @@ func parseJSONRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefau
return fmt.Errorf("error when parsing `structuredMetadata` object: %w", err)
}
}
// parse log message
msg, err := lineA[1].StringBytes()
if err != nil {
return fmt.Errorf("unexpected log message type for %q; want string", lineA[1])
}
allowMsgRenaming := false
fields.fields, allowMsgRenaming = addMsgField(fields.fields, msgParser, bytesutil.ToUnsafeString(msg))
var streamFields []logstorage.Field
if useDefaultStreamFields {
streamFields = commonFields
streamFields = fields.fields[:commonFieldsLen]
}
lmp.AddRow(ts, fields, streamFields)
if allowMsgRenaming {
logstorage.RenameField(fields.fields[commonFieldsLen:], msgFields, "_msg")
}
lmp.AddRow(ts, fields.fields, streamFields)
}
}
return nil
}
func addMsgField(dst []logstorage.Field, msgParser *logstorage.JSONParser, msg string) ([]logstorage.Field, bool) {
if msgParser == nil || len(msg) < 2 || msg[0] != '{' || msg[len(msg)-1] != '}' {
return append(dst, logstorage.Field{
Name: "_msg",
Value: msg,
}), false
}
if msgParser != nil && len(msg) >= 2 && msg[0] == '{' && msg[len(msg)-1] == '}' {
if err := msgParser.ParseLogMessage(bytesutil.ToUnsafeBytes(msg)); err == nil {
return append(dst, msgParser.Fields...), true
}
}
return append(dst, logstorage.Field{
Name: "_msg",
Value: msg,
}), false
}
func parseLokiTimestamp(s string) (int64, error) {
if s == "" {
// Special case - an empty timestamp must be substituted with the current time by the caller.
return 0, nil
}
n, err := strconv.ParseInt(s, 10, 64)
if err != nil {
// Fall back to parsing floating-point value
f, err := strconv.ParseFloat(s, 64)
if err != nil {
return 0, err
}
if f > math.MaxInt64 {
return 0, fmt.Errorf("too big timestamp in nanoseconds: %v; mustn't exceed %v", f, int64(math.MaxInt64))
}
if f < math.MinInt64 {
return 0, fmt.Errorf("too small timestamp in nanoseconds: %v; must be bigger or equal to %v", f, int64(math.MinInt64))
}
n = int64(f)
}
if n < 0 {
return 0, fmt.Errorf("too small timestamp in nanoseconds: %d; must be bigger than 0", n)
}
return n, nil
return insertutil.ParseUnixTimestamp(s)
}
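
With parseMessage enabled, addMsgField expands a log line that is itself a JSON object into separate fields instead of a single _msg field, and logstorage.RenameField then promotes a field listed in msgFields to _msg; lines that are not JSON objects keep _msg as before. For example, from the new TestParseJSONRequest_ParseMessage below, with msgFields=["a","trace_id"] and stream labels {"x":"y"}:

line ingested:  ["1877836900005000004", "{\"trace_id\":\"111\",\"parent_id\":\"abc\"}"]
stored fields:  {"x":"y","_msg":"111","parent_id":"abc"}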

View file

@ -3,15 +3,15 @@ package loki
import (
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestParseJSONRequest_Failure(t *testing.T) {
f := func(s string) {
t.Helper()
tlp := &insertutils.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, false); err == nil {
tlp := &insertutil.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err == nil {
t.Fatalf("expecting non-nil error")
}
if err := tlp.Verify(nil, ""); err != nil {
@ -63,9 +63,9 @@ func TestParseJSONRequest_Success(t *testing.T) {
f := func(s string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &insertutils.TestLogMessageProcessor{}
tlp := &insertutil.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, false); err != nil {
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
@ -89,9 +89,9 @@ func TestParseJSONRequest_Success(t *testing.T) {
"label2": "value2"
},"values":[
["1577836800000000001", "foo bar"],
["1477836900005000002", "abc"],
["1686026123.62", "abc"],
["147.78369e9", "foobar"]
]}]}`, []int64{1577836800000000001, 1477836900005000002, 147783690000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
]}]}`, []int64{1577836800000000001, 1686026123620000000, 147783690000000000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
{"label1":"value1","label2":"value2","_msg":"abc"}
{"label1":"value1","label2":"value2","_msg":"foobar"}`)
@ -122,6 +122,48 @@ func TestParseJSONRequest_Success(t *testing.T) {
{"x":"y","_msg":"yx"}`)
// values with metadata
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", {"metadata_1": "md_value"}]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar","metadata_1":"md_value"}`)
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", {"metadata_1": "md_value"}]]}]}`, []int64{1577836800000000001}, `{"metadata_1":"md_value","_msg":"foo bar"}`)
f(`{"streams":[{"values":[["1577836800000000001", "foo bar", {}]]}]}`, []int64{1577836800000000001}, `{"_msg":"foo bar"}`)
}
func TestParseJSONRequest_ParseMessage(t *testing.T) {
f := func(s string, msgFields []string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &insertutil.TestLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, msgFields, false, true); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
f(`{
"streams": [
{
"stream": {
"foo": "bar",
"a": "b"
},
"values": [
["1577836800000000001", "{\"user_id\":\"123\"}"],
["1577836900005000002", "abc", {"trace_id":"pqw"}],
["1577836900005000003", "{def}"]
]
},
{
"stream": {
"x": "y"
},
"values": [
["1877836900005000004", "{\"trace_id\":\"111\",\"parent_id\":\"abc\"}"]
]
}
]
}`, []string{"a", "trace_id"}, []int64{1577836800000000001, 1577836900005000002, 1577836900005000003, 1877836900005000004}, `{"foo":"bar","a":"b","user_id":"123"}
{"foo":"bar","a":"b","trace_id":"pqw","_msg":"abc"}
{"foo":"bar","a":"b","_msg":"{def}"}
{"x":"y","_msg":"111","parent_id":"abc"}`)
}

View file

@ -6,7 +6,7 @@ import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func BenchmarkParseJSONRequest(b *testing.B) {
@ -22,13 +22,13 @@ func BenchmarkParseJSONRequest(b *testing.B) {
}
func benchmarkParseJSONRequest(b *testing.B, streams, rows, labels int) {
blp := &insertutils.BenchmarkLogMessageProcessor{}
blp := &insertutil.BenchmarkLogMessageProcessor{}
b.ReportAllocs()
b.SetBytes(int64(streams * rows))
b.RunParallel(func(pb *testing.PB) {
data := getJSONBody(streams, rows, labels)
for pb.Next() {
if err := parseJSONRequest(data, blp, false); err != nil {
if err := parseJSONRequest(data, blp, nil, false, true); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}

View file

@ -2,38 +2,27 @@ package loki
import (
"fmt"
"io"
"net/http"
"strconv"
"strings"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
)
var (
bytesBufPool bytesutil.ByteBufferPool
pushReqsPool sync.Pool
)
func handleProtobuf(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsProtobufTotal.Inc()
wcr := writeconcurrencylimiter.GetReader(r.Body)
data, err := io.ReadAll(wcr)
writeconcurrencylimiter.PutReader(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return
}
cp, err := getCommonParams(r)
if err != nil {
@ -44,12 +33,22 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
httpserver.Errorf(w, r, "%s", err)
return
}
lmp := cp.NewLogMessageProcessor("loki_protobuf")
useDefaultStreamFields := len(cp.StreamFields) == 0
err = parseProtobufRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
encoding := r.Header.Get("Content-Encoding")
if encoding == "" {
// Loki protocol uses snappy compression by default.
// See https://grafana.com/docs/loki/latest/reference/loki-http-api/#ingest-logs
encoding = "snappy"
}
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.cp.NewLogMessageProcessor("loki_protobuf", false)
useDefaultStreamFields := len(cp.cp.StreamFields) == 0
err := parseProtobufRequest(data, lmp, cp.cp.MsgFields, useDefaultStreamFields, cp.parseMessage)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot parse Loki protobuf request: %s", err)
httpserver.Errorf(w, r, "cannot read Loki protobuf data: %s", err)
return
}
@ -57,6 +56,9 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
// There is no need in updating requestProtobufDuration for request errors,
// since their timings are usually much smaller than the timing for successful request parsing.
requestProtobufDuration.UpdateDuration(startTime)
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8505
w.WriteHeader(http.StatusNoContent)
}
var (
@ -64,20 +66,11 @@ var (
requestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/loki/api/v1/push",format="protobuf"}`)
)
func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) error {
bb := bytesBufPool.Get()
defer bytesBufPool.Put(bb)
buf, err := snappy.Decode(bb.B[:cap(bb.B)], data)
if err != nil {
return fmt.Errorf("cannot decode snappy-encoded request body: %w", err)
}
bb.B = buf
func parseProtobufRequest(data []byte, lmp insertutil.LogMessageProcessor, msgFields []string, useDefaultStreamFields, parseMessage bool) error {
req := getPushRequest()
defer putPushRequest(req)
err = req.UnmarshalProtobuf(bb.B)
err := req.UnmarshalProtobuf(data)
if err != nil {
return fmt.Errorf("cannot parse request body: %w", err)
}
@ -85,8 +78,15 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useD
fields := getFields()
defer putFields(fields)
var msgParser *logstorage.JSONParser
if parseMessage {
msgParser = logstorage.GetJSONParser()
defer logstorage.PutJSONParser(msgParser)
}
streams := req.Streams
currentTimestamp := time.Now().UnixNano()
for i := range streams {
stream := &streams[i]
// st.Labels contains labels for the stream.
@ -109,10 +109,8 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useD
})
}
fields.fields = append(fields.fields, logstorage.Field{
Name: "_msg",
Value: e.Line,
})
allowMsgRenaming := false
fields.fields, allowMsgRenaming = addMsgField(fields.fields, msgParser, e.Line)
ts := e.Timestamp.UnixNano()
if ts == 0 {
@ -123,6 +121,9 @@ func parseProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useD
if useDefaultStreamFields {
streamFields = fields.fields[:commonFieldsLen]
}
if allowMsgRenaming {
logstorage.RenameField(fields.fields[commonFieldsLen:], msgFields, "_msg")
}
lmp.AddRow(ts, fields.fields, streamFields)
}
}

View file

@ -6,9 +6,8 @@ import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/golang/snappy"
)
type testLogMessageProcessor struct {
@ -53,7 +52,7 @@ func TestParseProtobufRequest_Success(t *testing.T) {
t.Helper()
tlp := &testLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, false); err != nil {
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if len(tlp.pr.Streams) != len(timestampsExpected) {
@ -61,10 +60,9 @@ func TestParseProtobufRequest_Success(t *testing.T) {
}
data := tlp.pr.MarshalProtobuf(nil)
encodedData := snappy.Encode(nil, data)
tlp2 := &insertutils.TestLogMessageProcessor{}
if err := parseProtobufRequest(encodedData, tlp2, false); err != nil {
tlp2 := &insertutil.TestLogMessageProcessor{}
if err := parseProtobufRequest(data, tlp2, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp2.Verify(timestampsExpected, resultExpected); err != nil {
@ -90,7 +88,7 @@ func TestParseProtobufRequest_Success(t *testing.T) {
["1577836800000000001", "foo bar"],
["1477836900005000002", "abc"],
["147.78369e9", "foobar"]
]}]}`, []int64{1577836800000000001, 1477836900005000002, 147783690000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
]}]}`, []int64{1577836800000000001, 1477836900005000002, 147783690000000000}, `{"label1":"value1","label2":"value2","_msg":"foo bar"}
{"label1":"value1","label2":"value2","_msg":"abc"}
{"label1":"value1","label2":"value2","_msg":"foobar"}`)
@ -121,6 +119,57 @@ func TestParseProtobufRequest_Success(t *testing.T) {
{"x":"y","_msg":"yx"}`)
}
func TestParseProtobufRequest_ParseMessage(t *testing.T) {
f := func(s string, msgFields []string, timestampsExpected []int64, resultExpected string) {
t.Helper()
tlp := &testLogMessageProcessor{}
if err := parseJSONRequest([]byte(s), tlp, nil, false, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if len(tlp.pr.Streams) != len(timestampsExpected) {
t.Fatalf("unexpected number of streams; got %d; want %d", len(tlp.pr.Streams), len(timestampsExpected))
}
data := tlp.pr.MarshalProtobuf(nil)
tlp2 := &insertutil.TestLogMessageProcessor{}
if err := parseProtobufRequest(data, tlp2, msgFields, false, true); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := tlp2.Verify(timestampsExpected, resultExpected); err != nil {
t.Fatal(err)
}
}
f(`{
"streams": [
{
"stream": {
"foo": "bar",
"a": "b"
},
"values": [
["1577836800000000001", "{\"user_id\":\"123\"}"],
["1577836900005000002", "abc", {"trace_id":"pqw"}],
["1577836900005000003", "{def}"]
]
},
{
"stream": {
"x": "y"
},
"values": [
["1877836900005000004", "{\"trace_id\":\"432\",\"parent_id\":\"qwerty\"}"]
]
}
]
}`, []string{"a", "trace_id"}, []int64{1577836800000000001, 1577836900005000002, 1577836900005000003, 1877836900005000004}, `{"foo":"bar","a":"b","user_id":"123"}
{"foo":"bar","a":"b","trace_id":"pqw","_msg":"abc"}
{"foo":"bar","a":"b","_msg":"{def}"}
{"x":"y","_msg":"432","parent_id":"qwerty"}`)
}
func TestParsePromLabels_Success(t *testing.T) {
f := func(s string) {
t.Helper()

View file

@ -6,9 +6,7 @@ import (
"testing"
"time"
"github.com/golang/snappy"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
)
@ -25,13 +23,13 @@ func BenchmarkParseProtobufRequest(b *testing.B) {
}
func benchmarkParseProtobufRequest(b *testing.B, streams, rows, labels int) {
blp := &insertutils.BenchmarkLogMessageProcessor{}
blp := &insertutil.BenchmarkLogMessageProcessor{}
b.ReportAllocs()
b.SetBytes(int64(streams * rows))
b.RunParallel(func(pb *testing.PB) {
body := getProtobufBody(streams, rows, labels)
for pb.Next() {
if err := parseProtobufRequest(body, blp, false); err != nil {
if err := parseProtobufRequest(body, blp, nil, false, true); err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
}
@ -78,8 +76,5 @@ func getProtobufBody(streamsCount, rowsCount, labelsCount int) []byte {
Streams: streams,
}
body := pr.MarshalProtobuf(nil)
encodedBody := snappy.Encode(nil, body)
return encodedBody
return pr.MarshalProtobuf(nil)
}

View file

@ -7,6 +7,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/datadog"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/elasticsearch"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/internalinsert"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/journald"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/jsonline"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/loki"
@ -28,6 +29,11 @@ func Stop() {
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path := r.URL.Path
if path == "/internal/insert" {
internalinsert.RequestHandler(w, r)
return true
}
if !strings.HasPrefix(path, "/insert/") {
// Skip requests, which do not start with /insert/, since these aren't our requests.
return false

View file

@ -2,21 +2,22 @@ package opentelemetry
import (
"fmt"
"io"
"net/http"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
)
var maxRequestSize = flagutil.NewBytes("opentelemetry.maxRequestSize", 64*1024*1024, "The maximum size in bytes of a single OpenTelemetry request")
// RequestHandler processes Opentelemetry insert requests
func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
switch path {
@ -37,26 +38,8 @@ func RequestHandler(path string, w http.ResponseWriter, r *http.Request) bool {
func handleProtobuf(r *http.Request, w http.ResponseWriter) {
startTime := time.Now()
requestsProtobufTotal.Inc()
reader := r.Body
if r.Header.Get("Content-Encoding") == "gzip" {
zr, err := common.GetGzipReader(reader)
if err != nil {
httpserver.Errorf(w, r, "cannot initialize gzip reader: %s", err)
return
}
defer common.PutGzipReader(zr)
reader = zr
}
wcr := writeconcurrencylimiter.GetReader(reader)
data, err := io.ReadAll(wcr)
writeconcurrencylimiter.PutReader(wcr)
if err != nil {
httpserver.Errorf(w, r, "cannot read request body: %s", err)
return
}
cp, err := insertutils.GetCommonParams(r)
cp, err := insertutil.GetCommonParams(r)
if err != nil {
httpserver.Errorf(w, r, "cannot parse common params from request: %s", err)
return
@ -66,12 +49,16 @@ func handleProtobuf(r *http.Request, w http.ResponseWriter) {
return
}
lmp := cp.NewLogMessageProcessor("opentelelemtry_protobuf")
useDefaultStreamFields := len(cp.StreamFields) == 0
err = pushProtobufRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
encoding := r.Header.Get("Content-Encoding")
err = protoparserutil.ReadUncompressedData(r.Body, encoding, maxRequestSize, func(data []byte) error {
lmp := cp.NewLogMessageProcessor("opentelelemtry_protobuf", false)
useDefaultStreamFields := len(cp.StreamFields) == 0
err := pushProtobufRequest(data, lmp, useDefaultStreamFields)
lmp.MustClose()
return err
})
if err != nil {
httpserver.Errorf(w, r, "cannot parse OpenTelemetry protobuf request: %s", err)
httpserver.Errorf(w, r, "cannot read OpenTelemetry protocol data: %s", err)
return
}
@ -88,7 +75,7 @@ var (
requestProtobufDuration = metrics.NewHistogram(`vl_http_request_duration_seconds{path="/insert/opentelemetry/v1/logs",format="protobuf"}`)
)
func pushProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) error {
func pushProtobufRequest(data []byte, lmp insertutil.LogMessageProcessor, useDefaultStreamFields bool) error {
var req pb.ExportLogsServiceRequest
if err := req.UnmarshalProtobuf(data); err != nil {
errorsTotal.Inc()
@ -112,7 +99,7 @@ func pushProtobufRequest(data []byte, lmp insertutils.LogMessageProcessor, useDe
return nil
}
func pushFieldsFromScopeLogs(sc *pb.ScopeLogs, commonFields []logstorage.Field, lmp insertutils.LogMessageProcessor, useDefaultStreamFields bool) []logstorage.Field {
func pushFieldsFromScopeLogs(sc *pb.ScopeLogs, commonFields []logstorage.Field, lmp insertutil.LogMessageProcessor, useDefaultStreamFields bool) []logstorage.Field {
fields := commonFields
for _, lr := range sc.LogRecords {
fields = fields[:len(commonFields)]

View file

@ -3,7 +3,7 @@ package opentelemetry
import (
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb"
)
@ -15,7 +15,7 @@ func TestPushProtoOk(t *testing.T) {
}
pData := lr.MarshalProtobuf(nil)
tlp := &insertutils.TestLogMessageProcessor{}
tlp := &insertutil.TestLogMessageProcessor{}
if err := pushProtobufRequest(pData, tlp, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
@ -24,6 +24,7 @@ func TestPushProtoOk(t *testing.T) {
t.Fatal(err)
}
}
// single line without resource attributes
f([]pb.ResourceLogs{
{
@ -39,6 +40,27 @@ func TestPushProtoOk(t *testing.T) {
[]int64{1234},
`{"_msg":"log-line-message","severity":"Trace"}`,
)
// severities mapping
f([]pb.ResourceLogs{
{
ScopeLogs: []pb.ScopeLogs{
{
LogRecords: []pb.LogRecord{
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1234, SeverityNumber: 1, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1234, SeverityNumber: 13, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1234, SeverityNumber: 24, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
},
},
},
},
},
[]int64{1234, 1234, 1234},
`{"_msg":"log-line-message","severity":"Trace"}
{"_msg":"log-line-message","severity":"Warn"}
{"_msg":"log-line-message","severity":"Fatal4"}`,
)
// multi-line with resource attributes
f([]pb.ResourceLogs{
{
@ -58,7 +80,7 @@ func TestPushProtoOk(t *testing.T) {
{
LogRecords: []pb.LogRecord{
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1234, SeverityNumber: 1, Body: pb.AnyValue{StringValue: ptrTo("log-line-message")}},
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1235, SeverityNumber: 21, Body: pb.AnyValue{StringValue: ptrTo("log-line-message-msg-2")}},
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1235, SeverityNumber: 25, Body: pb.AnyValue{StringValue: ptrTo("log-line-message-msg-2")}},
{Attributes: []*pb.KeyValue{}, TimeUnixNano: 1236, SeverityNumber: -1, Body: pb.AnyValue{StringValue: ptrTo("log-line-message-msg-2")}},
},
},
@ -106,19 +128,21 @@ func TestPushProtoOk(t *testing.T) {
{
LogRecords: []pb.LogRecord{
{TimeUnixNano: 2347, SeverityNumber: 12, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-0")}},
{ObservedTimeUnixNano: 2348, SeverityNumber: 12, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-1")}},
{TraceID: "1234", SpanID: "45", ObservedTimeUnixNano: 2348, SeverityNumber: 12, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-1")}},
{TraceID: "4bf92f3577b34da6a3ce929d0e0e4736", SpanID: "00f067aa0ba902b7", ObservedTimeUnixNano: 3333, Body: pb.AnyValue{StringValue: ptrTo("log-line-resource-scope-1-1-2")}},
},
},
},
},
},
[]int64{1234, 1235, 2345, 2346, 2347, 2348},
[]int64{1234, 1235, 2345, 2346, 2347, 2348, 3333},
`{"logger":"context","instance_id":"10","node_taints":"{\"role\":\"dev\",\"cluster_load_percent\":0.55}","_msg":"log-line-message","severity":"Trace"}
{"logger":"context","instance_id":"10","node_taints":"{\"role\":\"dev\",\"cluster_load_percent\":0.55}","_msg":"log-line-message-msg-2","severity":"Debug"}
{"_msg":"log-line-resource-scope-1-0-0","severity":"Info2"}
{"_msg":"log-line-resource-scope-1-0-1","severity":"Info2"}
{"_msg":"log-line-resource-scope-1-1-0","severity":"Info4"}
{"_msg":"log-line-resource-scope-1-1-1","severity":"Info4"}`,
{"_msg":"log-line-resource-scope-1-1-1","trace_id":"1234","span_id":"45","severity":"Info4"}
{"_msg":"log-line-resource-scope-1-1-2","trace_id":"4bf92f3577b34da6a3ce929d0e0e4736","span_id":"00f067aa0ba902b7","severity":"Unspecified"}`,
)
}

View file

@ -4,7 +4,7 @@ import (
"fmt"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/pb"
)
@ -21,7 +21,7 @@ func BenchmarkParseProtobufRequest(b *testing.B) {
}
func benchmarkParseProtobufRequest(b *testing.B, streams, rows, labels int) {
blp := &insertutils.BenchmarkLogMessageProcessor{}
blp := &insertutil.BenchmarkLogMessageProcessor{}
b.ReportAllocs()
b.SetBytes(int64(streams * rows))
b.RunParallel(func(pb *testing.PB) {

View file

@ -16,9 +16,7 @@ import (
"sync/atomic"
"time"
"github.com/klauspost/compress/gzip"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
@ -27,7 +25,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
"github.com/VictoriaMetrics/metrics"
@ -274,14 +272,14 @@ func runTCPListener(addr string, argIdx int) {
func checkCompressMethod(compressMethod, addr, protocol string) {
switch compressMethod {
case "", "none", "gzip", "deflate":
case "", "none", "zstd", "gzip", "deflate":
return
default:
logger.Fatalf("unsupported -syslog.compressMethod.%s=%q for -syslog.listenAddr.%s=%q; supported values: 'none', 'gzip', 'deflate'", protocol, compressMethod, protocol, addr)
logger.Fatalf("unsupported -syslog.compressMethod.%s=%q for -syslog.listenAddr.%s=%q; supported values: 'none', 'zstd', 'gzip', 'deflate'", protocol, compressMethod, protocol, addr)
}
}
func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, compressMethod string, useLocalTimestamp bool, streamFields, ignoreFields []string, extraFields []logstorage.Field) {
func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, encoding string, useLocalTimestamp bool, streamFields, ignoreFields []string, extraFields []logstorage.Field) {
gomaxprocs := cgroup.AvailableCPUs()
var wg sync.WaitGroup
localAddr := ln.LocalAddr()
@ -289,7 +287,7 @@ func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, compressMethod st
wg.Add(1)
go func() {
defer wg.Done()
cp := insertutils.GetCommonParamsForSyslog(tenantID, streamFields, ignoreFields, extraFields)
cp := insertutil.GetCommonParamsForSyslog(tenantID, streamFields, ignoreFields, extraFields)
var bb bytesutil.ByteBuffer
bb.B = bytesutil.ResizeNoCopyNoOverallocate(bb.B, 64*1024)
for {
@ -314,7 +312,7 @@ func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, compressMethod st
}
bb.B = bb.B[:n]
udpRequestsTotal.Inc()
if err := processStream("udp", bb.NewReader(), compressMethod, useLocalTimestamp, cp); err != nil {
if err := processStream("udp", bb.NewReader(), encoding, useLocalTimestamp, cp); err != nil {
logger.Errorf("syslog: cannot process UDP data from %s at %s: %s", remoteAddr, localAddr, err)
}
}
@ -323,7 +321,7 @@ func serveUDP(ln net.PacketConn, tenantID logstorage.TenantID, compressMethod st
wg.Wait()
}
func serveTCP(ln net.Listener, tenantID logstorage.TenantID, compressMethod string, useLocalTimestamp bool, streamFields, ignoreFields []string, extraFields []logstorage.Field) {
func serveTCP(ln net.Listener, tenantID logstorage.TenantID, encoding string, useLocalTimestamp bool, streamFields, ignoreFields []string, extraFields []logstorage.Field) {
var cm ingestserver.ConnsMap
cm.Init("syslog")
@ -353,8 +351,8 @@ func serveTCP(ln net.Listener, tenantID logstorage.TenantID, compressMethod stri
wg.Add(1)
go func() {
cp := insertutils.GetCommonParamsForSyslog(tenantID, streamFields, ignoreFields, extraFields)
if err := processStream("tcp", c, compressMethod, useLocalTimestamp, cp); err != nil {
cp := insertutil.GetCommonParamsForSyslog(tenantID, streamFields, ignoreFields, extraFields)
if err := processStream("tcp", c, encoding, useLocalTimestamp, cp); err != nil {
logger.Errorf("syslog: cannot process TCP data at %q: %s", addr, err)
}
@ -369,52 +367,29 @@ func serveTCP(ln net.Listener, tenantID logstorage.TenantID, compressMethod stri
}
// processStream parses a stream of syslog messages from r and ingests them into vlstorage.
func processStream(protocol string, r io.Reader, compressMethod string, useLocalTimestamp bool, cp *insertutils.CommonParams) error {
func processStream(protocol string, r io.Reader, encoding string, useLocalTimestamp bool, cp *insertutil.CommonParams) error {
if err := vlstorage.CanWriteData(); err != nil {
return err
}
lmp := cp.NewLogMessageProcessor("syslog_" + protocol)
err := processStreamInternal(r, compressMethod, useLocalTimestamp, lmp)
lmp := cp.NewLogMessageProcessor("syslog_"+protocol, true)
err := processStreamInternal(r, encoding, useLocalTimestamp, lmp)
lmp.MustClose()
return err
}
func processStreamInternal(r io.Reader, compressMethod string, useLocalTimestamp bool, lmp insertutils.LogMessageProcessor) error {
switch compressMethod {
case "", "none":
case "gzip":
zr, err := common.GetGzipReader(r)
if err != nil {
return fmt.Errorf("cannot read gzipped data: %w", err)
}
r = zr
case "deflate":
zr, err := common.GetZlibReader(r)
if err != nil {
return fmt.Errorf("cannot read deflated data: %w", err)
}
r = zr
default:
logger.Panicf("BUG: unsupported compressMethod=%q; supported values: none, gzip, deflate", compressMethod)
func processStreamInternal(r io.Reader, encoding string, useLocalTimestamp bool, lmp insertutil.LogMessageProcessor) error {
reader, err := protoparserutil.GetUncompressedReader(r, encoding)
if err != nil {
return fmt.Errorf("cannot decode syslog data: %w", err)
}
defer protoparserutil.PutUncompressedReader(reader)
err := processUncompressedStream(r, useLocalTimestamp, lmp)
switch compressMethod {
case "gzip":
zr := r.(*gzip.Reader)
common.PutGzipReader(zr)
case "deflate":
zr := r.(io.ReadCloser)
common.PutZlibReader(zr)
}
return err
return processUncompressedStream(reader, useLocalTimestamp, lmp)
}
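With the per-encoding switch above replaced by protoparserutil.GetUncompressedReader, callers only pass the encoding name. A minimal sketch of producing a gzip-compressed syslog payload for processStreamInternal, assuming "gzip" remains one of the accepted encodings (as it was in the removed switch); the helper below is illustrative and not part of this diff.

package syslogtest

import (
	"bytes"
	"compress/gzip"
)

// gzipSyslogPayload compresses a syslog line the way a sender would when
// shipping logs with gzip compression enabled; the resulting reader can be
// passed to processStreamInternal together with encoding="gzip".
func gzipSyslogPayload(line string) *bytes.Buffer {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	_, _ = zw.Write([]byte(line + "\n"))
	_ = zw.Close()
	return &buf
}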
func processUncompressedStream(r io.Reader, useLocalTimestamp bool, lmp insertutils.LogMessageProcessor) error {
func processUncompressedStream(r io.Reader, useLocalTimestamp bool, lmp insertutil.LogMessageProcessor) error {
wcr := writeconcurrencylimiter.GetReader(r)
defer writeconcurrencylimiter.PutReader(wcr)
@ -499,7 +474,7 @@ again:
slr.err = fmt.Errorf("cannot parse message length from %q: %w", msgLenStr, err)
return false
}
if maxMsgLen := insertutils.MaxLineSizeBytes.IntN(); msgLen > uint64(maxMsgLen) {
if maxMsgLen := insertutil.MaxLineSizeBytes.IntN(); msgLen > uint64(maxMsgLen) {
slr.err = fmt.Errorf("cannot read message longer than %d bytes; msgLen=%d", maxMsgLen, msgLen)
return false
}
@ -551,7 +526,7 @@ func putSyslogLineReader(slr *syslogLineReader) {
var syslogLineReaderPool sync.Pool
func processLine(line []byte, currentYear int, timezone *time.Location, useLocalTimestamp bool, lmp insertutils.LogMessageProcessor) error {
func processLine(line []byte, currentYear int, timezone *time.Location, useLocalTimestamp bool, lmp insertutil.LogMessageProcessor) error {
p := logstorage.GetSyslogParser(currentYear, timezone)
lineStr := bytesutil.ToUnsafeString(line)
p.Parse(lineStr)
@ -560,7 +535,7 @@ func processLine(line []byte, currentYear int, timezone *time.Location, useLocal
if useLocalTimestamp {
ts = time.Now().UnixNano()
} else {
nsecs, err := insertutils.ExtractTimestampFromFields("timestamp", p.Fields)
nsecs, err := insertutil.ExtractTimestampFromFields("timestamp", p.Fields)
if err != nil {
return fmt.Errorf("cannot get timestamp from syslog line %q: %w", line, err)
}


@ -6,7 +6,7 @@ import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlinsert/insertutil"
)
func TestSyslogLineReader_Success(t *testing.T) {
@ -84,7 +84,7 @@ func TestProcessStreamInternal_Success(t *testing.T) {
globalTimezone = time.UTC
globalCurrentYear.Store(int64(currentYear))
tlp := &insertutils.TestLogMessageProcessor{}
tlp := &insertutil.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
if err := processStreamInternal(r, "", false, tlp); err != nil {
t.Fatalf("unexpected error: %s", err)
@ -113,7 +113,7 @@ func TestProcessStreamInternal_Failure(t *testing.T) {
MustInit()
defer MustStop()
tlp := &insertutils.TestLogMessageProcessor{}
tlp := &insertutil.TestLogMessageProcessor{}
r := bytes.NewBufferString(data)
if err := processStreamInternal(r, "", false, tlp); err == nil {
t.Fatalf("expecting non-nil error")


@ -154,7 +154,7 @@ func readNextJSONObject(d *json.Decoder) ([]logstorage.Field, error) {
}
value, ok := t.(string)
if !ok {
return nil, fmt.Errorf("unexpected token read for oject value: %v; want string", t)
return nil, fmt.Errorf("unexpected token read for object value: %v; want string", t)
}
fields = append(fields, logstorage.Field{


@ -31,7 +31,7 @@ var (
totalStreams = flag.Int("totalStreams", 0, "The number of total log streams; if -totalStreams > -activeStreams, then some active streams are substituted with new streams "+
"during data generation")
logsPerStream = flag.Int64("logsPerStream", 1_000, "The number of log entries to generate per each log stream. Log entries are evenly distributed between -start and -end")
constFieldsPerLog = flag.Int("constFieldsPerLog", 3, "The number of fields with constaint values to generate per each log entry; "+
constFieldsPerLog = flag.Int("constFieldsPerLog", 3, "The number of fields with constant values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
varFieldsPerLog = flag.Int("varFieldsPerLog", 1, "The number of fields with variable values to generate per each log entry; "+
"see https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model")
@ -177,7 +177,10 @@ func generateAndPushLogs(cfg *workerConfig, workerID int) {
sw := &statWriter{
w: pw,
}
bw := bufio.NewWriter(sw)
// The 1MB write buffer increases data ingestion performance by reducing the number of send() syscalls
bw := bufio.NewWriterSize(sw, 1024*1024)
doneCh := make(chan struct{})
go func() {
generateLogs(bw, workerID, cfg.activeStreams, cfg.totalStreams)


@ -0,0 +1,324 @@
package internalselect
import (
"context"
"flag"
"fmt"
"net/http"
"strconv"
"sync"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage/netselect"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/atomicutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
)
var disableSelect = flag.Bool("internalselect.disable", false, "Whether to disable /internal/select/* HTTP endpoints")
// RequestHandler processes requests to /internal/select/*
func RequestHandler(ctx context.Context, w http.ResponseWriter, r *http.Request) {
if *disableSelect {
httpserver.Errorf(w, r, "requests to /internal/select/* are disabled with -internalselect.disable command-line flag")
return
}
startTime := time.Now()
path := r.URL.Path
rh := requestHandlers[path]
if rh == nil {
httpserver.Errorf(w, r, "unsupported endpoint requested: %s", path)
return
}
metrics.GetOrCreateCounter(fmt.Sprintf(`vl_http_requests_total{path=%q}`, path)).Inc()
if err := rh(ctx, w, r); err != nil && !netutil.IsTrivialNetworkError(err) {
metrics.GetOrCreateCounter(fmt.Sprintf(`vl_http_request_errors_total{path=%q}`, path)).Inc()
httpserver.Errorf(w, r, "%s", err)
// The return is skipped intentionally in order to track the duration of failed queries.
}
metrics.GetOrCreateSummary(fmt.Sprintf(`vl_http_request_duration_seconds{path=%q}`, path)).UpdateDuration(startTime)
}
var requestHandlers = map[string]func(ctx context.Context, w http.ResponseWriter, r *http.Request) error{
"/internal/select/query": processQueryRequest,
"/internal/select/field_names": processFieldNamesRequest,
"/internal/select/field_values": processFieldValuesRequest,
"/internal/select/stream_field_names": processStreamFieldNamesRequest,
"/internal/select/stream_field_values": processStreamFieldValuesRequest,
"/internal/select/streams": processStreamsRequest,
"/internal/select/stream_ids": processStreamIDsRequest,
}
func processQueryRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.QueryProtocolVersion)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/octet-stream")
var wLock sync.Mutex
var dataLenBuf []byte
sendBuf := func(bb *bytesutil.ByteBuffer) error {
if len(bb.B) == 0 {
return nil
}
data := bb.B
if !cp.DisableCompression {
bufLen := len(bb.B)
bb.B = zstd.CompressLevel(bb.B, bb.B, 1)
data = bb.B[bufLen:]
}
wLock.Lock()
dataLenBuf = encoding.MarshalUint64(dataLenBuf[:0], uint64(len(data)))
_, err := w.Write(dataLenBuf)
if err == nil {
_, err = w.Write(data)
}
wLock.Unlock()
// Reset the sent buf
bb.Reset()
return err
}
var bufs atomicutil.Slice[bytesutil.ByteBuffer]
var errGlobalLock sync.Mutex
var errGlobal error
writeBlock := func(workerID uint, db *logstorage.DataBlock) {
if errGlobal != nil {
return
}
bb := bufs.Get(workerID)
bb.B = db.Marshal(bb.B)
if len(bb.B) < 1024*1024 {
// Fast path - the bb is too small to be sent to the client yet.
return
}
// Slow path - the bb must be sent to the client.
if err := sendBuf(bb); err != nil {
errGlobalLock.Lock()
if errGlobal == nil {
errGlobal = err
}
errGlobalLock.Unlock()
}
}
if err := vlstorage.RunQuery(ctx, cp.TenantIDs, cp.Query, writeBlock); err != nil {
return err
}
if errGlobal != nil {
return errGlobal
}
// Send the remaining data
for _, bb := range bufs.All() {
if err := sendBuf(bb); err != nil {
return err
}
}
return nil
}
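The response produced by sendBuf above is a stream of length-prefixed frames: each frame carries marshaled DataBlock data, optionally zstd-compressed, preceded by its byte length written with encoding.MarshalUint64. A minimal client-side sketch for consuming such a stream; it assumes the 8-byte length prefix is big-endian and takes the zstd decompressor and the block handler as caller-supplied callbacks, since the exact helpers used on the consuming side are not shown in this diff.

package client

import (
	"encoding/binary"
	"errors"
	"fmt"
	"io"
)

// readFrames consumes a /internal/select/query response body: an 8-byte
// big-endian length followed by that many bytes of (optionally compressed)
// payload, repeated until EOF.
func readFrames(r io.Reader, compressed bool, decompress func([]byte) ([]byte, error), onBlock func([]byte) error) error {
	var lenBuf [8]byte
	for {
		if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // clean end of the stream
			}
			return fmt.Errorf("cannot read frame length: %w", err)
		}
		frameLen := binary.BigEndian.Uint64(lenBuf[:])
		data := make([]byte, frameLen)
		if _, err := io.ReadFull(r, data); err != nil {
			return fmt.Errorf("cannot read %d-byte frame: %w", frameLen, err)
		}
		if compressed {
			var err error
			if data, err = decompress(data); err != nil {
				return fmt.Errorf("cannot decompress frame: %w", err)
			}
		}
		if err := onBlock(data); err != nil {
			return err
		}
	}
}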
func processFieldNamesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.FieldNamesProtocolVersion)
if err != nil {
return err
}
fieldNames, err := vlstorage.GetFieldNames(ctx, cp.TenantIDs, cp.Query)
if err != nil {
return fmt.Errorf("cannot obtain field names: %w", err)
}
return writeValuesWithHits(w, fieldNames, cp.DisableCompression)
}
func processFieldValuesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.FieldValuesProtocolVersion)
if err != nil {
return err
}
fieldName := r.FormValue("field")
limit, err := getInt64FromRequest(r, "limit")
if err != nil {
return err
}
fieldValues, err := vlstorage.GetFieldValues(ctx, cp.TenantIDs, cp.Query, fieldName, uint64(limit))
if err != nil {
return fmt.Errorf("cannot obtain field values: %w", err)
}
return writeValuesWithHits(w, fieldValues, cp.DisableCompression)
}
func processStreamFieldNamesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.StreamFieldNamesProtocolVersion)
if err != nil {
return err
}
fieldNames, err := vlstorage.GetStreamFieldNames(ctx, cp.TenantIDs, cp.Query)
if err != nil {
return fmt.Errorf("cannot obtain stream field names: %w", err)
}
return writeValuesWithHits(w, fieldNames, cp.DisableCompression)
}
func processStreamFieldValuesRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.StreamFieldValuesProtocolVersion)
if err != nil {
return err
}
fieldName := r.FormValue("field")
limit, err := getInt64FromRequest(r, "limit")
if err != nil {
return err
}
fieldValues, err := vlstorage.GetStreamFieldValues(ctx, cp.TenantIDs, cp.Query, fieldName, uint64(limit))
if err != nil {
return fmt.Errorf("cannot obtain stream field values: %w", err)
}
return writeValuesWithHits(w, fieldValues, cp.DisableCompression)
}
func processStreamsRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.StreamsProtocolVersion)
if err != nil {
return err
}
limit, err := getInt64FromRequest(r, "limit")
if err != nil {
return err
}
streams, err := vlstorage.GetStreams(ctx, cp.TenantIDs, cp.Query, uint64(limit))
if err != nil {
return fmt.Errorf("cannot obtain streams: %w", err)
}
return writeValuesWithHits(w, streams, cp.DisableCompression)
}
func processStreamIDsRequest(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
cp, err := getCommonParams(r, netselect.StreamIDsProtocolVersion)
if err != nil {
return err
}
limit, err := getInt64FromRequest(r, "limit")
if err != nil {
return err
}
streamIDs, err := vlstorage.GetStreamIDs(ctx, cp.TenantIDs, cp.Query, uint64(limit))
if err != nil {
return fmt.Errorf("cannot obtain streams: %w", err)
}
return writeValuesWithHits(w, streamIDs, cp.DisableCompression)
}
type commonParams struct {
TenantIDs []logstorage.TenantID
Query *logstorage.Query
DisableCompression bool
}
func getCommonParams(r *http.Request, expectedProtocolVersion string) (*commonParams, error) {
version := r.FormValue("version")
if version != expectedProtocolVersion {
return nil, fmt.Errorf("unexpected version=%q; want %q", version, expectedProtocolVersion)
}
tenantIDsStr := r.FormValue("tenant_ids")
tenantIDs, err := logstorage.UnmarshalTenantIDs([]byte(tenantIDsStr))
if err != nil {
return nil, fmt.Errorf("cannot unmarshal tenant_ids=%q: %w", tenantIDsStr, err)
}
timestamp, err := getInt64FromRequest(r, "timestamp")
if err != nil {
return nil, err
}
qStr := r.FormValue("query")
q, err := logstorage.ParseQueryAtTimestamp(qStr, timestamp)
if err != nil {
return nil, fmt.Errorf("cannot unmarshal query=%q: %w", qStr, err)
}
s := r.FormValue("disable_compression")
disableCompression, err := strconv.ParseBool(s)
if err != nil {
return nil, fmt.Errorf("cannot parse disable_compression=%q: %w", s, err)
}
cp := &commonParams{
TenantIDs: tenantIDs,
Query: q,
DisableCompression: disableCompression,
}
return cp, nil
}
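For illustration, the query args that getCommonParams parses could be assembled on the sending side roughly as follows. The tenant_ids payload format is whatever logstorage.UnmarshalTenantIDs accepts, and the timestamp is assumed to be in nanoseconds; both details are assumptions, since the client side is not part of this diff.

package client

import (
	"net/url"
	"strconv"
)

// buildSelectArgs assembles the form values expected by getCommonParams.
// marshaledTenantIDs is a placeholder for the tenant_ids payload.
func buildSelectArgs(protocolVersion, marshaledTenantIDs, query string, timestampNsecs int64, disableCompression bool) url.Values {
	args := make(url.Values)
	args.Set("version", protocolVersion) // must match the expected protocol version
	args.Set("tenant_ids", marshaledTenantIDs)
	args.Set("timestamp", strconv.FormatInt(timestampNsecs, 10))
	args.Set("query", query) // LogsQL query text
	args.Set("disable_compression", strconv.FormatBool(disableCompression))
	return args
}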
func writeValuesWithHits(w http.ResponseWriter, vhs []logstorage.ValueWithHits, disableCompression bool) error {
var b []byte
for i := range vhs {
b = vhs[i].Marshal(b)
}
if !disableCompression {
b = zstd.CompressLevel(nil, b, 1)
}
w.Header().Set("Content-Type", "application/octet-stream")
if _, err := w.Write(b); err != nil {
return fmt.Errorf("cannot send response to the client: %w", err)
}
return nil
}
func getInt64FromRequest(r *http.Request, argName string) (int64, error) {
s := r.FormValue(argName)
n, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse %s=%q: %w", argName, s, err)
}
return n, nil
}


@ -1,47 +0,0 @@
package logsql
import (
"bufio"
"io"
"sync"
)
func getBufferedWriter(w io.Writer) *bufferedWriter {
v := bufferedWriterPool.Get()
if v == nil {
return &bufferedWriter{
bw: bufio.NewWriter(w),
}
}
bw := v.(*bufferedWriter)
bw.bw.Reset(w)
return bw
}
func putBufferedWriter(bw *bufferedWriter) {
bw.reset()
bufferedWriterPool.Put(bw)
}
var bufferedWriterPool sync.Pool
type bufferedWriter struct {
mu sync.Mutex
bw *bufio.Writer
}
func (bw *bufferedWriter) reset() {
// nothing to do
}
func (bw *bufferedWriter) WriteIgnoreErrors(p []byte) {
bw.mu.Lock()
_, _ = bw.bw.Write(p)
bw.mu.Unlock()
}
func (bw *bufferedWriter) FlushIgnoreErrors() {
bw.mu.Lock()
_ = bw.bw.Flush()
bw.mu.Unlock()
}


@ -3,6 +3,7 @@ package logsql
import (
"context"
"fmt"
"io"
"math"
"net/http"
"regexp"
@ -11,15 +12,17 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/metrics"
"github.com/valyala/fastjson"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/atomicutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
@ -35,32 +38,35 @@ func ProcessFacetsRequest(ctx context.Context, w http.ResponseWriter, r *http.Re
return
}
limit, err := httputils.GetInt(r, "limit")
limit, err := httputil.GetInt(r, "limit")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
maxValuesPerField, err := httputils.GetInt(r, "max_values_per_field")
maxValuesPerField, err := httputil.GetInt(r, "max_values_per_field")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
maxValueLen, err := httputils.GetInt(r, "max_value_len")
maxValueLen, err := httputil.GetInt(r, "max_value_len")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
keepConstFields := httputils.GetBool(r, "keep_const_fields")
keepConstFields := httputil.GetBool(r, "keep_const_fields")
q.DropAllPipes()
q.AddFacetsPipe(limit, maxValuesPerField, maxValueLen, keepConstFields)
var mLock sync.Mutex
m := make(map[string][]facetEntry)
writeBlock := func(_ uint, _ []int64, columns []logstorage.BlockColumn) {
if len(columns) == 0 || len(columns[0].Values) == 0 {
writeBlock := func(_ uint, db *logstorage.DataBlock) {
rowsCount := db.RowsCount()
if rowsCount == 0 {
return
}
columns := db.Columns
if len(columns) != 3 {
logger.Panicf("BUG: expecting 3 columns; got %d columns", len(columns))
}
@ -141,7 +147,7 @@ func ProcessHitsRequest(ctx context.Context, w http.ResponseWriter, r *http.Requ
fields := r.Form["field"]
// Obtain limit on the number of top fields entries.
fieldsLimit, err := httputils.GetInt(r, "fields_limit")
fieldsLimit, err := httputil.GetInt(r, "fields_limit")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
@ -156,17 +162,19 @@ func ProcessHitsRequest(ctx context.Context, w http.ResponseWriter, r *http.Requ
var mLock sync.Mutex
m := make(map[string]*hitsSeries)
writeBlock := func(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
if len(columns) == 0 || len(columns[0].Values) == 0 {
writeBlock := func(_ uint, db *logstorage.DataBlock) {
rowsCount := db.RowsCount()
if rowsCount == 0 {
return
}
columns := db.Columns
timestampValues := columns[0].Values
hitsValues := columns[len(columns)-1].Values
columns = columns[1 : len(columns)-1]
bb := blockResultPool.Get()
for i := range timestamps {
for i := 0; i < rowsCount; i++ {
timestampStr := strings.Clone(timestampValues[i])
hitsStr := strings.Clone(hitsValues[i])
hits, err := strconv.ParseUint(hitsStr, 10, 64)
@ -205,6 +213,8 @@ func ProcessHitsRequest(ctx context.Context, w http.ResponseWriter, r *http.Requ
WriteHitsSeries(w, m)
}
var blockResultPool bytesutil.ByteBufferPool
func getTopHitsSeries(m map[string]*hitsSeries, fieldsLimit int) map[string]*hitsSeries {
if fieldsLimit <= 0 || fieldsLimit >= len(m) {
return m
@ -310,7 +320,7 @@ func ProcessFieldValuesRequest(ctx context.Context, w http.ResponseWriter, r *ht
}
// Parse limit query arg
limit, err := httputils.GetInt(r, "limit")
limit, err := httputil.GetInt(r, "limit")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
@ -370,7 +380,7 @@ func ProcessStreamFieldValuesRequest(ctx context.Context, w http.ResponseWriter,
}
// Parse limit query arg
limit, err := httputils.GetInt(r, "limit")
limit, err := httputil.GetInt(r, "limit")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
@ -401,7 +411,7 @@ func ProcessStreamIDsRequest(ctx context.Context, w http.ResponseWriter, r *http
}
// Parse limit query arg
limit, err := httputils.GetInt(r, "limit")
limit, err := httputil.GetInt(r, "limit")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
@ -432,7 +442,7 @@ func ProcessStreamsRequest(ctx context.Context, w http.ResponseWriter, r *http.R
}
// Parse limit query arg
limit, err := httputils.GetInt(r, "limit")
limit, err := httputil.GetInt(r, "limit")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
@ -470,21 +480,21 @@ func ProcessLiveTailRequest(ctx context.Context, w http.ResponseWriter, r *http.
return
}
refreshIntervalMsecs, err := httputils.GetDuration(r, "refresh_interval", 1000)
refreshIntervalMsecs, err := httputil.GetDuration(r, "refresh_interval", 1000)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
refreshInterval := time.Millisecond * time.Duration(refreshIntervalMsecs)
startOffsetMsecs, err := httputils.GetDuration(r, "start_offset", 5*1000)
startOffsetMsecs, err := httputil.GetDuration(r, "start_offset", 5*1000)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
startOffset := startOffsetMsecs * 1e6
offsetMsecs, err := httputils.GetDuration(r, "offset", 1000)
offsetMsecs, err := httputil.GetDuration(r, "offset", 1000)
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
@ -536,7 +546,7 @@ var liveTailRequests = metrics.NewCounter(`vl_live_tailing_requests`)
const tailOffsetNsecs = 5e9
type logRow struct {
timestamp int64
timestamp string
fields []logstorage.Field
}
@ -552,7 +562,7 @@ type tailProcessor struct {
mu sync.Mutex
perStreamRows map[string][]logRow
lastTimestamps map[string]int64
lastTimestamps map[string]string
err error
}
@ -562,12 +572,12 @@ func newTailProcessor(cancel func()) *tailProcessor {
cancel: cancel,
perStreamRows: make(map[string][]logRow),
lastTimestamps: make(map[string]int64),
lastTimestamps: make(map[string]string),
}
}
func (tp *tailProcessor) writeBlock(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
if len(timestamps) == 0 {
func (tp *tailProcessor) writeBlock(_ uint, db *logstorage.DataBlock) {
if db.RowsCount() == 0 {
return
}
@ -579,14 +589,8 @@ func (tp *tailProcessor) writeBlock(_ uint, timestamps []int64, columns []logsto
}
// Make sure columns contain _time field, since it is needed for proper tail work.
hasTime := false
for _, c := range columns {
if c.Name == "_time" {
hasTime = true
break
}
}
if !hasTime {
timestamps, ok := db.GetTimestamps()
if !ok {
tp.err = fmt.Errorf("missing _time field")
tp.cancel()
return
@ -595,8 +599,8 @@ func (tp *tailProcessor) writeBlock(_ uint, timestamps []int64, columns []logsto
// Copy block rows to tp.perStreamRows
for i, timestamp := range timestamps {
streamID := ""
fields := make([]logstorage.Field, len(columns))
for j, c := range columns {
fields := make([]logstorage.Field, len(db.Columns))
for j, c := range db.Columns {
name := strings.Clone(c.Name)
value := strings.Clone(c.Values[i])
@ -688,12 +692,15 @@ func ProcessStatsQueryRangeRequest(ctx context.Context, w http.ResponseWriter, r
m := make(map[string]*statsSeries)
var mLock sync.Mutex
writeBlock := func(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
writeBlock := func(_ uint, db *logstorage.DataBlock) {
rowsCount := db.RowsCount()
columns := db.Columns
clonedColumnNames := make([]string, len(columns))
for i, c := range columns {
clonedColumnNames[i] = strings.Clone(c.Name)
}
for i := range timestamps {
for i := 0; i < rowsCount; i++ {
// Do not move q.GetTimestamp() outside writeBlock, since ts
// must be initialized to query timestamp for every processed log row.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8312
@ -802,12 +809,14 @@ func ProcessStatsQueryRequest(ctx context.Context, w http.ResponseWriter, r *htt
var rowsLock sync.Mutex
timestamp := q.GetTimestamp()
writeBlock := func(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
writeBlock := func(_ uint, db *logstorage.DataBlock) {
rowsCount := db.RowsCount()
columns := db.Columns
clonedColumnNames := make([]string, len(columns))
for i, c := range columns {
clonedColumnNames[i] = strings.Clone(c.Name)
}
for i := range timestamps {
for i := 0; i < rowsCount; i++ {
labels := make([]logstorage.Field, 0, len(byFields))
for j, c := range columns {
if slices.Contains(byFields, c.Name) {
@ -863,17 +872,27 @@ func ProcessQueryRequest(ctx context.Context, w http.ResponseWriter, r *http.Req
}
// Parse limit query arg
limit, err := httputils.GetInt(r, "limit")
limit, err := httputil.GetInt(r, "limit")
if err != nil {
httpserver.Errorf(w, r, "%s", err)
return
}
bw := getBufferedWriter(w)
sw := &syncWriter{
w: w,
}
var bwShards atomicutil.Slice[bufferedWriter]
bwShards.Init = func(shard *bufferedWriter) {
shard.sw = sw
}
defer func() {
bw.FlushIgnoreErrors()
putBufferedWriter(bw)
shards := bwShards.All()
for _, shard := range shards {
shard.FlushIgnoreErrors()
}
}()
w.Header().Set("Content-Type", "application/stream+json")
if limit > 0 {
@ -883,32 +902,34 @@ func ProcessQueryRequest(ctx context.Context, w http.ResponseWriter, r *http.Req
httpserver.Errorf(w, r, "%s", err)
return
}
bb := blockResultPool.Get()
b := bb.B
bw := bwShards.Get(0)
for i := range rows {
b = logstorage.MarshalFieldsToJSON(b[:0], rows[i].fields)
b = append(b, '\n')
bw.WriteIgnoreErrors(b)
bw.buf = logstorage.MarshalFieldsToJSON(bw.buf, rows[i].fields)
bw.buf = append(bw.buf, '\n')
if len(bw.buf) > 16*1024 {
bw.FlushIgnoreErrors()
}
}
bb.B = b
blockResultPool.Put(bb)
return
}
q.AddPipeLimit(uint64(limit))
}
writeBlock := func(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
if len(columns) == 0 || len(columns[0].Values) == 0 {
writeBlock := func(workerID uint, db *logstorage.DataBlock) {
rowsCount := db.RowsCount()
if rowsCount == 0 {
return
}
columns := db.Columns
bb := blockResultPool.Get()
for i := range timestamps {
WriteJSONRow(bb, columns, i)
bw := bwShards.Get(workerID)
for i := 0; i < rowsCount; i++ {
WriteJSONRow(bw, columns, i)
if len(bw.buf) > 16*1024 {
bw.FlushIgnoreErrors()
}
}
bw.WriteIgnoreErrors(bb.B)
blockResultPool.Put(bb)
}
if err := vlstorage.RunQuery(ctx, tenantIDs, q, writeBlock); err != nil {
@ -917,14 +938,37 @@ func ProcessQueryRequest(ctx context.Context, w http.ResponseWriter, r *http.Req
}
}
var blockResultPool bytesutil.ByteBufferPool
type row struct {
timestamp int64
fields []logstorage.Field
type syncWriter struct {
mu sync.Mutex
w io.Writer
}
func getLastNQueryResults(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit int) ([]row, error) {
func (sw *syncWriter) Write(p []byte) (int, error) {
sw.mu.Lock()
n, err := sw.w.Write(p)
sw.mu.Unlock()
return n, err
}
type bufferedWriter struct {
buf []byte
sw *syncWriter
}
func (bw *bufferedWriter) Write(p []byte) (int, error) {
bw.buf = append(bw.buf, p...)
// Do not send bw.buf to bw.sw here, since the data at bw.buf may be incomplete (it must end with '\n')
return len(p), nil
}
func (bw *bufferedWriter) FlushIgnoreErrors() {
_, _ = bw.sw.Write(bw.buf)
bw.buf = bw.buf[:0]
}
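The syncWriter/bufferedWriter pair above replaces the single pooled buffered writer removed earlier in this diff: every worker accumulates complete JSON lines in its own shard, and only whole buffers pass through the shared mutex, so concurrent workers cannot interleave partial rows on the wire. A self-contained sketch of the same pattern (not the vlselect code itself):

package main

import (
	"bytes"
	"fmt"
	"sync"
)

// lockedWriter serializes writes from all shards to a single destination.
type lockedWriter struct {
	mu  sync.Mutex
	dst *bytes.Buffer
}

func (lw *lockedWriter) Write(p []byte) (int, error) {
	lw.mu.Lock()
	defer lw.mu.Unlock()
	return lw.dst.Write(p)
}

// shard buffers whole lines locally; only complete lines reach the shared writer.
type shard struct {
	buf []byte
	lw  *lockedWriter
}

func (s *shard) writeLine(line string) {
	s.buf = append(s.buf, line...)
	s.buf = append(s.buf, '\n')
	if len(s.buf) > 16*1024 {
		s.flush()
	}
}

func (s *shard) flush() {
	_, _ = s.lw.Write(s.buf)
	s.buf = s.buf[:0]
}

func main() {
	out := &lockedWriter{dst: &bytes.Buffer{}}
	shards := []*shard{{lw: out}, {lw: out}}
	var wg sync.WaitGroup
	for workerID, sh := range shards {
		wg.Add(1)
		go func(id int, sh *shard) {
			defer wg.Done()
			for i := 0; i < 3; i++ {
				sh.writeLine(fmt.Sprintf(`{"worker":%d,"row":%d}`, id, i))
			}
			sh.flush() // final flush, mirroring the deferred FlushIgnoreErrors loop
		}(workerID, sh)
	}
	wg.Wait()
	fmt.Print(out.dst.String()) // every emitted line is a complete JSON object
}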
func getLastNQueryResults(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit int) ([]logRow, error) {
limitUpper := 2 * limit
q.AddPipeLimit(uint64(limitUpper))
@ -993,7 +1037,7 @@ func getLastNQueryResults(ctx context.Context, tenantIDs []logstorage.TenantID,
}
}
func getLastNRows(rows []row, limit int) []row {
func getLastNRows(rows []logRow, limit int) []logRow {
sort.Slice(rows, func(i, j int) bool {
return rows[i].timestamp < rows[j].timestamp
})
@ -1003,18 +1047,31 @@ func getLastNRows(rows []row, limit int) []row {
return rows
}
func getQueryResultsWithLimit(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit int) ([]row, error) {
func getQueryResultsWithLimit(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit int) ([]logRow, error) {
ctxWithCancel, cancel := context.WithCancel(ctx)
defer cancel()
var rows []row
var missingTimeColumn atomic.Bool
var rows []logRow
var rowsLock sync.Mutex
writeBlock := func(_ uint, timestamps []int64, columns []logstorage.BlockColumn) {
writeBlock := func(_ uint, db *logstorage.DataBlock) {
if missingTimeColumn.Load() {
return
}
columns := db.Columns
clonedColumnNames := make([]string, len(columns))
for i, c := range columns {
clonedColumnNames[i] = strings.Clone(c.Name)
}
timestamps, ok := db.GetTimestamps()
if !ok {
missingTimeColumn.Store(true)
cancel()
return
}
for i, timestamp := range timestamps {
fields := make([]logstorage.Field, len(columns))
for j := range columns {
@ -1024,7 +1081,7 @@ func getQueryResultsWithLimit(ctx context.Context, tenantIDs []logstorage.Tenant
}
rowsLock.Lock()
rows = append(rows, row{
rows = append(rows, logRow{
timestamp: timestamp,
fields: fields,
})
@ -1035,11 +1092,13 @@ func getQueryResultsWithLimit(ctx context.Context, tenantIDs []logstorage.Tenant
cancel()
}
}
if err := vlstorage.RunQuery(ctxWithCancel, tenantIDs, q, writeBlock); err != nil {
return nil, err
err := vlstorage.RunQuery(ctxWithCancel, tenantIDs, q, writeBlock)
if missingTimeColumn.Load() {
return nil, fmt.Errorf("missing _time column in the result for the query [%s]", q)
}
return rows, nil
return rows, err
}
func parseCommonArgs(r *http.Request) (*logstorage.Query, []logstorage.TenantID, error) {
@ -1098,20 +1157,22 @@ func parseCommonArgs(r *http.Request) (*logstorage.Query, []logstorage.TenantID,
}
// Parse optional extra_filters
extraFiltersStr := r.FormValue("extra_filters")
extraFilters, err := parseExtraFilters(extraFiltersStr)
if err != nil {
return nil, nil, err
for _, extraFiltersStr := range r.Form["extra_filters"] {
extraFilters, err := parseExtraFilters(extraFiltersStr)
if err != nil {
return nil, nil, err
}
q.AddExtraFilters(extraFilters)
}
q.AddExtraFilters(extraFilters)
// Parse optional extra_stream_filters
extraStreamFiltersStr := r.FormValue("extra_stream_filters")
extraStreamFilters, err := parseExtraStreamFilters(extraStreamFiltersStr)
if err != nil {
return nil, nil, err
for _, extraStreamFiltersStr := range r.Form["extra_stream_filters"] {
extraStreamFilters, err := parseExtraStreamFilters(extraStreamFiltersStr)
if err != nil {
return nil, nil, err
}
q.AddExtraFilters(extraStreamFilters)
}
q.AddExtraFilters(extraStreamFilters)
return q, tenantIDs, nil
}
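With this change, extra_filters and extra_stream_filters may be passed several times in a single request and every value is applied to the query. A sketch of building such a request; the base URL and filter payloads are illustrative, not taken from this diff:

package client

import "net/url"

// buildQueryURL builds a /select/logsql/query URL with repeated extra_filters args.
func buildQueryURL(baseURL, query string, extraFilters []string) string {
	args := make(url.Values)
	args.Set("query", query)
	for _, f := range extraFilters {
		args.Add("extra_filters", f) // every repeated value is now honored
	}
	return baseURL + "/select/logsql/query?" + args.Encode()
}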


@ -9,10 +9,11 @@ import (
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlselect/internalselect"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlselect/logsql"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/metrics"
)
@ -71,7 +72,8 @@ var vmuiFileServer = http.FileServer(http.FS(vmuiFiles))
// RequestHandler handles select requests for VictoriaLogs
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path := r.URL.Path
if !strings.HasPrefix(path, "/select/") {
if !strings.HasPrefix(path, "/select/") && !strings.HasPrefix(path, "/internal/select/") {
// Skip requests, which do not start with /select/, since these aren't our requests.
return false
}
@ -98,25 +100,75 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
}
ctx := r.Context()
if path == "/select/logsql/tail" {
logsqlTailRequests.Inc()
// Process live tailing request without timeout, since it is OK to run live tailing requests for very long time.
// Also do not apply concurrency limit to tail requests, since these limits are intended for non-tail requests.
logsql.ProcessLiveTailRequest(ctx, w, r)
return true
}
// Limit the number of concurrent queries, which can consume big amounts of CPU time.
startTime := time.Now()
ctx := r.Context()
d := getMaxQueryDuration(r)
ctxWithTimeout, cancel := context.WithTimeout(ctx, d)
defer cancel()
stopCh := ctxWithTimeout.Done()
if !incRequestConcurrency(ctxWithTimeout, w, r) {
return true
}
defer decRequestConcurrency()
if strings.HasPrefix(path, "/internal/select/") {
// Process internal request from vlselect without timeout (e.g. use ctx instead of ctxWithTimeout),
// since the timeout must be controlled by the vlselect.
internalselect.RequestHandler(ctx, w, r)
return true
}
ok := processSelectRequest(ctxWithTimeout, w, r, path)
if !ok {
return false
}
logRequestErrorIfNeeded(ctxWithTimeout, w, r, startTime)
return true
}
func logRequestErrorIfNeeded(ctx context.Context, w http.ResponseWriter, r *http.Request, startTime time.Time) {
err := ctx.Err()
switch err {
case nil:
// nothing to do
case context.Canceled:
// do not log canceled requests, since they are expected and legal.
case context.DeadlineExceeded:
err = &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("the request couldn't be executed in %.3f seconds; possible solutions: "+
"to increase -search.maxQueryDuration=%s; to pass bigger value to 'timeout' query arg", time.Since(startTime).Seconds(), maxQueryDuration),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, r, "%s", err)
default:
httpserver.Errorf(w, r, "unexpected error: %s", err)
}
}
func incRequestConcurrency(ctx context.Context, w http.ResponseWriter, r *http.Request) bool {
startTime := time.Now()
stopCh := ctx.Done()
select {
case concurrencyLimitCh <- struct{}{}:
defer func() { <-concurrencyLimitCh }()
return true
default:
// Sleep for a while until giving up. This should resolve short bursts in requests.
concurrencyLimitReached.Inc()
select {
case concurrencyLimitCh <- struct{}{}:
defer func() { <-concurrencyLimitCh }()
return true
case <-stopCh:
switch ctxWithTimeout.Err() {
switch ctx.Err() {
case context.Canceled:
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
requestURI := httpserver.GetRequestURI(r)
@ -129,49 +181,18 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
"are executed. Possible solutions: to reduce query load; to add more compute resources to the server; "+
"to increase -search.maxQueueDuration=%s; to increase -search.maxQueryDuration=%s; to increase -search.maxConcurrentRequests; "+
"to pass bigger value to 'timeout' query arg",
d.Seconds(), *maxConcurrentRequests, maxQueueDuration, maxQueryDuration),
time.Since(startTime).Seconds(), *maxConcurrentRequests, maxQueueDuration, maxQueryDuration),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, r, "%s", err)
}
return true
return false
}
}
}
if path == "/select/logsql/tail" {
logsqlTailRequests.Inc()
// Process live tailing request without timeout (e.g. use ctx instead of ctxWithTimeout),
// since it is OK to run live tailing requests for very long time.
logsql.ProcessLiveTailRequest(ctx, w, r)
return true
}
ok := processSelectRequest(ctxWithTimeout, w, r, path)
if !ok {
return false
}
err := ctxWithTimeout.Err()
switch err {
case nil:
// nothing to do
case context.Canceled:
remoteAddr := httpserver.GetQuotedRemoteAddr(r)
requestURI := httpserver.GetRequestURI(r)
logger.Infof("client has canceled the request after %.3f seconds: remoteAddr=%s, requestURI: %q",
time.Since(startTime).Seconds(), remoteAddr, requestURI)
case context.DeadlineExceeded:
err = &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("the request couldn't be executed in %.3f seconds; possible solutions: "+
"to increase -search.maxQueryDuration=%s; to pass bigger value to 'timeout' query arg", d.Seconds(), maxQueryDuration),
StatusCode: http.StatusServiceUnavailable,
}
httpserver.Errorf(w, r, "%s", err)
default:
httpserver.Errorf(w, r, "unexpected error: %s", err)
}
return true
func decRequestConcurrency() {
<-concurrencyLimitCh
}
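incRequestConcurrency and decRequestConcurrency implement a channel-based semaphore: a non-blocking fast-path acquire, then a blocking wait that gives up once the request context is done. A stripped-down sketch of that pattern with the error reporting omitted:

package limiter

import "context"

// acquire takes a slot from sem, waiting until ctx is done.
// It reports whether a slot was acquired.
func acquire(ctx context.Context, sem chan struct{}) bool {
	select {
	case sem <- struct{}{}:
		return true // fast path: a slot was free
	default:
	}
	select {
	case sem <- struct{}{}:
		return true // a slot freed up while waiting
	case <-ctx.Done():
		return false // queue wait or query timeout exceeded
	}
}

func release(sem chan struct{}) {
	<-sem
}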
func processSelectRequest(ctx context.Context, w http.ResponseWriter, r *http.Request, path string) bool {
@ -240,7 +261,7 @@ func processSelectRequest(ctx context.Context, w http.ResponseWriter, r *http.Re
// getMaxQueryDuration returns the maximum duration for query from r.
func getMaxQueryDuration(r *http.Request) time.Duration {
dms, err := httputils.GetDuration(r, "timeout", 0)
dms, err := httputil.GetDuration(r, "timeout", 0)
if err != nil {
dms = 0
}


@ -5,9 +5,13 @@ menu:
docs:
parent: 'victoriametrics'
weight: 23
tags:
- metrics
aliases:
- /ExtendedPromQL.html
- /MetricsQL.html
- /metricsql/index.html
- /metricsql/
---
[VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics) implements MetricsQL -
query language inspired by [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/).
@ -336,7 +340,7 @@ See also [increases_over_time](#increases_over_time).
`default_rollup(series_selector[d])` is a [rollup function](#rollup-functions), which returns the last [raw sample](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
value on the given lookbehind window `d` per each time series returned from the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
Compared to [last_over_time](#last_over_time) it accounts for [staleness markers](https://docs.victoriametrics.com/vmagent/#prometheus-staleness-markers) to detect stale series.
Compared to [last_over_time](#last_over_time) it accounts for [staleness markers](https://docs.victoriametrics.com/victoriametrics/vmagent/#prometheus-staleness-markers) to detect stale series.
If the lookbehind window is skipped in square brackets, then it is automatically calculated as `max(step, scrape_interval)`, where `step` is the query arg value
passed to [/api/v1/query_range](https://docs.victoriametrics.com/keyconcepts/#range-query) or [/api/v1/query](https://docs.victoriametrics.com/keyconcepts/#instant-query),
@ -921,7 +925,7 @@ See also [count_eq_over_time](#count_eq_over_time).
#### stale_samples_over_time
`stale_samples_over_time(series_selector[d])` is a [rollup function](#rollup-functions), which calculates the number
of [staleness markers](https://docs.victoriametrics.com/vmagent/#prometheus-staleness-markers) on the given lookbehind window `d`
of [staleness markers](https://docs.victoriametrics.com/victoriametrics/vmagent/#prometheus-staleness-markers) on the given lookbehind window `d`
per each time series matching the given [series_selector](https://docs.victoriametrics.com/keyconcepts/#filtering).
Metric names are stripped from the resulting rollups. Add [keep_metric_names](#keep_metric_names) modifier in order to keep metric names.

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@ -35,10 +35,10 @@
<meta property="og:title" content="UI for VictoriaLogs">
<meta property="og:url" content="https://victoriametrics.com/products/victorialogs/">
<meta property="og:description" content="Explore your log data with VictoriaLogs UI">
<script type="module" crossorigin src="./assets/index-C68hz-qY.js"></script>
<link rel="modulepreload" crossorigin href="./assets/vendor-DojlIpLz.js">
<script type="module" crossorigin src="./assets/index-Wi3UrT0H.js"></script>
<link rel="modulepreload" crossorigin href="./assets/vendor-C-vZmbyg.js">
<link rel="stylesheet" crossorigin href="./assets/vendor-D1GxaB_c.css">
<link rel="stylesheet" crossorigin href="./assets/index-B_R5bdPN.css">
<link rel="stylesheet" crossorigin href="./assets/index-Brup_hCI.css">
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>


@ -10,11 +10,14 @@ import (
"github.com/VictoriaMetrics/metrics"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage/netinsert"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vlstorage/netselect"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
)
var (
@ -38,15 +41,59 @@ var (
minFreeDiskSpaceBytes = flagutil.NewBytes("storage.minFreeDiskSpaceBytes", 10e6, "The minimum free disk space at -storageDataPath after which "+
"the storage stops accepting new data")
forceMergeAuthKey = flagutil.NewPassword("forceMergeAuthKey", "authKey, which must be passed in query string to /internal/force_merge pages. It overrides -httpAuth.*")
forceMergeAuthKey = flagutil.NewPassword("forceMergeAuthKey", "authKey, which must be passed in query string to /internal/force_merge . It overrides -httpAuth.* . "+
"See https://docs.victoriametrics.com/victorialogs/#forced-merge")
forceFlushAuthKey = flagutil.NewPassword("forceFlushAuthKey", "authKey, which must be passed in query string to /internal/force_flush . It overrides -httpAuth.* . "+
"See https://docs.victoriametrics.com/victorialogs/#forced-flush")
storageNodeAddrs = flagutil.NewArrayString("storageNode", "Comma-separated list of TCP addresses for storage nodes to route the ingested logs to and to send select queries to. "+
"If the list is empty, then the ingested logs are stored and queried locally from -storageDataPath")
insertConcurrency = flag.Int("insert.concurrency", 2, "The average number of concurrent data ingestion requests, which can be sent to every -storageNode")
insertDisableCompression = flag.Bool("insert.disableCompression", false, "Whether to disable compression when sending the ingested data to -storageNode nodes. "+
"Disabled compression reduces CPU usage at the cost of higher network usage")
selectDisableCompression = flag.Bool("select.disableCompression", false, "Whether to disable compression for select query responses received from -storageNode nodes. "+
"Disabled compression reduces CPU usage at the cost of higher network usage")
storageNodeUsername = flagutil.NewArrayString("storageNode.username", "Optional basic auth username to use for the corresponding -storageNode")
storageNodePassword = flagutil.NewArrayString("storageNode.password", "Optional basic auth password to use for the corresponding -storageNode")
storageNodePasswordFile = flagutil.NewArrayString("storageNode.passwordFile", "Optional path to basic auth password to use for the corresponding -storageNode. "+
"The file is re-read every second")
storageNodeBearerToken = flagutil.NewArrayString("storageNode.bearerToken", "Optional bearer auth token to use for the corresponding -storageNode")
storageNodeBearerTokenFile = flagutil.NewArrayString("storageNode.bearerTokenFile", "Optional path to bearer token file to use for the corresponding -storageNode. "+
"The token is re-read from the file every second")
storageNodeTLS = flagutil.NewArrayBool("storageNode.tls", "Whether to use TLS (HTTPS) protocol for communicating with the corresponding -storageNode. "+
"By default communication is performed via HTTP")
storageNodeTLSCAFile = flagutil.NewArrayString("storageNode.tlsCAFile", "Optional path to TLS CA file to use for verifying connections to the corresponding -storageNode. "+
"By default, system CA is used")
storageNodeTLSCertFile = flagutil.NewArrayString("storageNode.tlsCertFile", "Optional path to client-side TLS certificate file to use when connecting "+
"to the corresponding -storageNode")
storageNodeTLSKeyFile = flagutil.NewArrayString("storageNode.tlsKeyFile", "Optional path to client-side TLS certificate key to use when connecting to the corresponding -storageNode")
storageNodeTLSServerName = flagutil.NewArrayString("storageNode.tlsServerName", "Optional TLS server name to use for connections to the corresponding -storageNode. "+
"By default, the server name from -storageNode is used")
storageNodeTLSInsecureSkipVerify = flagutil.NewArrayBool("storageNode.tlsInsecureSkipVerify", "Whether to skip tls verification when connecting to the corresponding -storageNode")
)
var localStorage *logstorage.Storage
var localStorageMetrics *metrics.Set
var netstorageInsert *netinsert.Storage
var netstorageSelect *netselect.Storage
// Init initializes vlstorage.
//
// Stop must be called when vlstorage is no longer needed
func Init() {
if strg != nil {
logger.Panicf("BUG: Init() has been already called")
if len(*storageNodeAddrs) == 0 {
initLocalStorage()
} else {
initNetworkStorage()
}
}
func initLocalStorage() {
if localStorage != nil {
logger.Panicf("BUG: initLocalStorage() has been already called")
}
if retentionPeriod.Duration() < 24*time.Hour {
@ -63,60 +110,157 @@ func Init() {
}
logger.Infof("opening storage at -storageDataPath=%s", *storageDataPath)
startTime := time.Now()
strg = logstorage.MustOpenStorage(*storageDataPath, cfg)
localStorage = logstorage.MustOpenStorage(*storageDataPath, cfg)
var ss logstorage.StorageStats
strg.UpdateStats(&ss)
localStorage.UpdateStats(&ss)
logger.Infof("successfully opened storage in %.3f seconds; smallParts: %d; bigParts: %d; smallPartBlocks: %d; bigPartBlocks: %d; smallPartRows: %d; bigPartRows: %d; "+
"smallPartSize: %d bytes; bigPartSize: %d bytes",
time.Since(startTime).Seconds(), ss.SmallParts, ss.BigParts, ss.SmallPartBlocks, ss.BigPartBlocks, ss.SmallPartRowsCount, ss.BigPartRowsCount,
ss.CompressedSmallPartSize, ss.CompressedBigPartSize)
// register storage metrics
storageMetrics = metrics.NewSet()
storageMetrics.RegisterMetricsWriter(func(w io.Writer) {
writeStorageMetrics(w, strg)
// register local storage metrics
localStorageMetrics = metrics.NewSet()
localStorageMetrics.RegisterMetricsWriter(func(w io.Writer) {
writeStorageMetrics(w, localStorage)
})
metrics.RegisterSet(storageMetrics)
metrics.RegisterSet(localStorageMetrics)
}
func initNetworkStorage() {
if netstorageInsert != nil || netstorageSelect != nil {
logger.Panicf("BUG: initNetworkStorage() has been already called")
}
authCfgs := make([]*promauth.Config, len(*storageNodeAddrs))
isTLSs := make([]bool, len(*storageNodeAddrs))
for i := range authCfgs {
authCfgs[i] = newAuthConfigForStorageNode(i)
isTLSs[i] = storageNodeTLS.GetOptionalArg(i)
}
logger.Infof("starting insert service for nodes %s", *storageNodeAddrs)
netstorageInsert = netinsert.NewStorage(*storageNodeAddrs, authCfgs, isTLSs, *insertConcurrency, *insertDisableCompression)
logger.Infof("initializing select service for nodes %s", *storageNodeAddrs)
netstorageSelect = netselect.NewStorage(*storageNodeAddrs, authCfgs, isTLSs, *selectDisableCompression)
logger.Infof("initialized all the network services")
}
func newAuthConfigForStorageNode(argIdx int) *promauth.Config {
username := storageNodeUsername.GetOptionalArg(argIdx)
password := storageNodePassword.GetOptionalArg(argIdx)
passwordFile := storageNodePasswordFile.GetOptionalArg(argIdx)
var basicAuthCfg *promauth.BasicAuthConfig
if username != "" || password != "" || passwordFile != "" {
basicAuthCfg = &promauth.BasicAuthConfig{
Username: username,
Password: promauth.NewSecret(password),
PasswordFile: passwordFile,
}
}
token := storageNodeBearerToken.GetOptionalArg(argIdx)
tokenFile := storageNodeBearerTokenFile.GetOptionalArg(argIdx)
tlsCfg := &promauth.TLSConfig{
CAFile: storageNodeTLSCAFile.GetOptionalArg(argIdx),
CertFile: storageNodeTLSCertFile.GetOptionalArg(argIdx),
KeyFile: storageNodeTLSKeyFile.GetOptionalArg(argIdx),
ServerName: storageNodeTLSServerName.GetOptionalArg(argIdx),
InsecureSkipVerify: storageNodeTLSInsecureSkipVerify.GetOptionalArg(argIdx),
}
opts := &promauth.Options{
BasicAuth: basicAuthCfg,
BearerToken: token,
BearerTokenFile: tokenFile,
TLSConfig: tlsCfg,
}
ac, err := opts.NewConfig()
if err != nil {
logger.Panicf("FATAL: cannot populate auth config for storage node #%d: %s", argIdx, err)
}
return ac
}
// Stop stops vlstorage.
func Stop() {
metrics.UnregisterSet(storageMetrics, true)
storageMetrics = nil
if localStorage != nil {
metrics.UnregisterSet(localStorageMetrics, true)
localStorageMetrics = nil
strg.MustClose()
strg = nil
localStorage.MustClose()
localStorage = nil
} else {
netstorageInsert.MustStop()
netstorageInsert = nil
netstorageSelect.MustStop()
netstorageSelect = nil
}
}
// RequestHandler is a storage request handler.
func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
path := r.URL.Path
if path == "/internal/force_merge" {
if !httpserver.CheckAuthFlag(w, r, forceMergeAuthKey) {
return true
}
// Run force merge in background
partitionNamePrefix := r.FormValue("partition_prefix")
go func() {
activeForceMerges.Inc()
defer activeForceMerges.Dec()
logger.Infof("forced merge for partition_prefix=%q has been started", partitionNamePrefix)
startTime := time.Now()
strg.MustForceMerge(partitionNamePrefix)
logger.Infof("forced merge for partition_prefix=%q has been successfully finished in %.3f seconds", partitionNamePrefix, time.Since(startTime).Seconds())
}()
return true
switch path {
case "/internal/force_merge":
return processForceMerge(w, r)
case "/internal/force_flush":
return processForceFlush(w, r)
}
return false
}
var strg *logstorage.Storage
var storageMetrics *metrics.Set
func processForceMerge(w http.ResponseWriter, r *http.Request) bool {
if localStorage == nil {
// Force merge isn't supported by non-local storage
return false
}
// CanWriteData returns non-nil error if it cannot write data to vlstorage.
if !httpserver.CheckAuthFlag(w, r, forceMergeAuthKey) {
return true
}
// Run force merge in background
partitionNamePrefix := r.FormValue("partition_prefix")
go func() {
activeForceMerges.Inc()
defer activeForceMerges.Dec()
logger.Infof("forced merge for partition_prefix=%q has been started", partitionNamePrefix)
startTime := time.Now()
localStorage.MustForceMerge(partitionNamePrefix)
logger.Infof("forced merge for partition_prefix=%q has been successfully finished in %.3f seconds", partitionNamePrefix, time.Since(startTime).Seconds())
}()
return true
}
func processForceFlush(w http.ResponseWriter, r *http.Request) bool {
if localStorage == nil {
// Force flush isn't supported by non-local storage
return false
}
if !httpserver.CheckAuthFlag(w, r, forceFlushAuthKey) {
return true
}
logger.Infof("flushing storage to make pending data available for reading")
localStorage.DebugFlush()
return true
}
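A minimal sketch of triggering the new forced-flush endpoint; the address and the authKey value are assumptions for illustration, and the key must match the server's -forceFlushAuthKey:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed address of a single-node VictoriaLogs instance.
	resp, err := http.Get("http://localhost:9428/internal/force_flush?authKey=my-secret")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}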
// CanWriteData returns non-nil error if it cannot write data to vlstorage
func CanWriteData() error {
if strg.IsReadOnly() {
if localStorage == nil {
// The data can be always written in non-local mode.
return nil
}
if localStorage.IsReadOnly() {
return &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("cannot add rows into storage in read-only mode; the storage can be in read-only mode "+
"because of lack of free disk space at -storageDataPath=%s", *storageDataPath),
@ -130,50 +274,77 @@ func CanWriteData() error {
//
// It is advised to call CanWriteData() before calling MustAddRows()
func MustAddRows(lr *logstorage.LogRows) {
strg.MustAddRows(lr)
if localStorage != nil {
// Store lr in the local storage.
localStorage.MustAddRows(lr)
} else {
// Store lr across the remote storage nodes.
lr.ForEachRow(netstorageInsert.AddRow)
}
}
// RunQuery runs the given q and calls writeBlock for the returned data blocks
func RunQuery(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, writeBlock logstorage.WriteBlockFunc) error {
return strg.RunQuery(ctx, tenantIDs, q, writeBlock)
func RunQuery(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, writeBlock logstorage.WriteDataBlockFunc) error {
if localStorage != nil {
return localStorage.RunQuery(ctx, tenantIDs, q, writeBlock)
}
return netstorageSelect.RunQuery(ctx, tenantIDs, q, writeBlock)
}
// GetFieldNames executes q and returns field names seen in results.
func GetFieldNames(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query) ([]logstorage.ValueWithHits, error) {
return strg.GetFieldNames(ctx, tenantIDs, q)
if localStorage != nil {
return localStorage.GetFieldNames(ctx, tenantIDs, q)
}
return netstorageSelect.GetFieldNames(ctx, tenantIDs, q)
}
// GetFieldValues executes q and returns unique values for the fieldName seen in results.
//
// If limit > 0, then up to limit unique values are returned.
func GetFieldValues(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, fieldName string, limit uint64) ([]logstorage.ValueWithHits, error) {
return strg.GetFieldValues(ctx, tenantIDs, q, fieldName, limit)
if localStorage != nil {
return localStorage.GetFieldValues(ctx, tenantIDs, q, fieldName, limit)
}
return netstorageSelect.GetFieldValues(ctx, tenantIDs, q, fieldName, limit)
}
// GetStreamFieldNames executes q and returns stream field names seen in results.
func GetStreamFieldNames(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query) ([]logstorage.ValueWithHits, error) {
return strg.GetStreamFieldNames(ctx, tenantIDs, q)
if localStorage != nil {
return localStorage.GetStreamFieldNames(ctx, tenantIDs, q)
}
return netstorageSelect.GetStreamFieldNames(ctx, tenantIDs, q)
}
// GetStreamFieldValues executes q and returns stream field values for the given fieldName seen in results.
//
// If limit > 0, then up to limit unique stream field values are returned.
func GetStreamFieldValues(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, fieldName string, limit uint64) ([]logstorage.ValueWithHits, error) {
return strg.GetStreamFieldValues(ctx, tenantIDs, q, fieldName, limit)
if localStorage != nil {
return localStorage.GetStreamFieldValues(ctx, tenantIDs, q, fieldName, limit)
}
return netstorageSelect.GetStreamFieldValues(ctx, tenantIDs, q, fieldName, limit)
}
// GetStreams executes q and returns streams seen in query results.
//
// If limit > 0, then up to limit unique streams are returned.
func GetStreams(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit uint64) ([]logstorage.ValueWithHits, error) {
return strg.GetStreams(ctx, tenantIDs, q, limit)
if localStorage != nil {
return localStorage.GetStreams(ctx, tenantIDs, q, limit)
}
return netstorageSelect.GetStreams(ctx, tenantIDs, q, limit)
}
// GetStreamIDs executes q and returns streamIDs seen in query results.
//
// If limit > 0, then up to limit unique streamIDs are returned.
func GetStreamIDs(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit uint64) ([]logstorage.ValueWithHits, error) {
return strg.GetStreamIDs(ctx, tenantIDs, q, limit)
if localStorage != nil {
return localStorage.GetStreamIDs(ctx, tenantIDs, q, limit)
}
return netstorageSelect.GetStreamIDs(ctx, tenantIDs, q, limit)
}
func writeStorageMetrics(w io.Writer, strg *logstorage.Storage) {


@ -0,0 +1,369 @@
package netinsert
import (
"errors"
"fmt"
"io"
"net/http"
"net/url"
"sync"
"sync/atomic"
"time"
"github.com/valyala/fastrand"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/contextutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
)
// the maximum size of a single data block sent to storage node.
const maxInsertBlockSize = 2 * 1024 * 1024
// ProtocolVersion is the version of the data ingestion protocol.
//
// It must be changed every time the data encoding at /internal/insert HTTP endpoint is changed.
const ProtocolVersion = "v1"
// Storage is a network storage for sending data to remote storage nodes in the cluster.
type Storage struct {
sns []*storageNode
disableCompression bool
srt *streamRowsTracker
pendingDataBuffers chan *bytesutil.ByteBuffer
stopCh chan struct{}
wg sync.WaitGroup
}
type storageNode struct {
// scheme is http or https scheme to communicate with addr
scheme string
// addr is TCP address of storage node to send the ingested data to
addr string
// s is a storage, which holds the given storageNode
s *Storage
// c is an http client used for sending data blocks to addr.
c *http.Client
// ac is auth config used for setting request headers such as Authorization and Host.
ac *promauth.Config
// pendingData contains pending data, which must be sent to the storage node at the addr.
pendingDataMu sync.Mutex
pendingData *bytesutil.ByteBuffer
pendingDataLastFlush time.Time
// the unix timestamp until the storageNode is disabled for data writing.
disabledUntil atomic.Uint64
}
func newStorageNode(s *Storage, addr string, ac *promauth.Config, isTLS bool) *storageNode {
tr := httputil.NewTransport(false, "vlinsert_backend")
tr.TLSHandshakeTimeout = 20 * time.Second
tr.DisableCompression = true
scheme := "http"
if isTLS {
scheme = "https"
}
sn := &storageNode{
scheme: scheme,
addr: addr,
s: s,
c: &http.Client{
Transport: ac.NewRoundTripper(tr),
},
ac: ac,
pendingData: &bytesutil.ByteBuffer{},
}
s.wg.Add(1)
go func() {
defer s.wg.Done()
sn.backgroundFlusher()
}()
return sn
}
func (sn *storageNode) backgroundFlusher() {
t := time.NewTicker(time.Second)
defer t.Stop()
for {
select {
case <-sn.s.stopCh:
return
case <-t.C:
sn.flushPendingData()
}
}
}
func (sn *storageNode) flushPendingData() {
sn.pendingDataMu.Lock()
if time.Since(sn.pendingDataLastFlush) < time.Second {
// nothing to flush
sn.pendingDataMu.Unlock()
return
}
pendingData := sn.grabPendingDataForFlushLocked()
sn.pendingDataMu.Unlock()
sn.mustSendInsertRequest(pendingData)
}
func (sn *storageNode) addRow(r *logstorage.InsertRow) {
bb := bbPool.Get()
b := bb.B
b = r.Marshal(b)
if len(b) > maxInsertBlockSize {
logger.Warnf("skipping too long log entry, since its length exceeds %d bytes; the actual log entry length is %d bytes; log entry contents: %s", maxInsertBlockSize, len(b), b)
return
}
var pendingData *bytesutil.ByteBuffer
sn.pendingDataMu.Lock()
if sn.pendingData.Len()+len(b) > maxInsertBlockSize {
pendingData = sn.grabPendingDataForFlushLocked()
}
sn.pendingData.MustWrite(b)
sn.pendingDataMu.Unlock()
bb.B = b
bbPool.Put(bb)
if pendingData != nil {
sn.mustSendInsertRequest(pendingData)
}
}
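addRow and grabPendingDataForFlushLocked implement a swap-and-send scheme: when the pending buffer would exceed maxInsertBlockSize, it is swapped under the lock for a spare buffer from pendingDataBuffers and the full buffer is sent outside the lock. A compact sketch of the pattern (illustrative, not the netinsert code):

package netinsertsketch

import "sync"

type batcher struct {
	mu      sync.Mutex
	pending []byte
	spare   chan []byte // pool of reusable buffers
	maxSize int
	send    func([]byte)
}

func (b *batcher) add(p []byte) {
	var full []byte
	b.mu.Lock()
	if len(b.pending)+len(p) > b.maxSize {
		full = b.pending
		b.pending = <-b.spare // swap in a fresh buffer under the lock
	}
	b.pending = append(b.pending, p...)
	b.mu.Unlock()
	if full != nil {
		b.send(full) // network I/O happens outside the lock
		b.spare <- full[:0]
	}
}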
var bbPool bytesutil.ByteBufferPool
func (sn *storageNode) grabPendingDataForFlushLocked() *bytesutil.ByteBuffer {
sn.pendingDataLastFlush = time.Now()
pendingData := sn.pendingData
sn.pendingData = <-sn.s.pendingDataBuffers
return pendingData
}
func (sn *storageNode) mustSendInsertRequest(pendingData *bytesutil.ByteBuffer) {
defer func() {
pendingData.Reset()
sn.s.pendingDataBuffers <- pendingData
}()
err := sn.sendInsertRequest(pendingData)
if err == nil {
return
}
if !errors.Is(err, errTemporarilyDisabled) {
logger.Warnf("%s; re-routing the data block to the remaining nodes", err)
}
for !sn.s.sendInsertRequestToAnyNode(pendingData) {
logger.Errorf("cannot send pending data to all storage nodes, since all of them are unavailable; re-trying to send the data in a second")
t := timerpool.Get(time.Second)
select {
case <-sn.s.stopCh:
timerpool.Put(t)
logger.Errorf("dropping %d bytes of data, since there are no available storage nodes", pendingData.Len())
return
case <-t.C:
timerpool.Put(t)
}
}
}
func (sn *storageNode) sendInsertRequest(pendingData *bytesutil.ByteBuffer) error {
dataLen := pendingData.Len()
if dataLen == 0 {
// Nothing to send.
return nil
}
if sn.disabledUntil.Load() > fasttime.UnixTimestamp() {
return errTemporarilyDisabled
}
ctx, cancel := contextutil.NewStopChanContext(sn.s.stopCh)
defer cancel()
var body io.Reader
if !sn.s.disableCompression {
bb := zstdBufPool.Get()
defer zstdBufPool.Put(bb)
bb.B = zstd.CompressLevel(bb.B[:0], pendingData.B, 1)
body = bb.NewReader()
} else {
body = pendingData.NewReader()
}
reqURL := sn.getRequestURL("/internal/insert")
req, err := http.NewRequestWithContext(ctx, "POST", reqURL, body)
if err != nil {
logger.Panicf("BUG: unexpected error when creating an http request: %s", err)
}
req.Header.Set("Content-Type", "application/octet-stream")
if !sn.s.disableCompression {
req.Header.Set("Content-Encoding", "zstd")
}
if err := sn.ac.SetHeaders(req, true); err != nil {
return fmt.Errorf("cannot set auth headers for %q: %w", reqURL, err)
}
resp, err := sn.c.Do(req)
if err != nil {
// Disable sn for data writing for 10 seconds.
sn.disabledUntil.Store(fasttime.UnixTimestamp() + 10)
return fmt.Errorf("cannot send data block with the length %d to %q: %s", pendingData.Len(), reqURL, err)
}
defer resp.Body.Close()
if resp.StatusCode/100 == 2 {
return nil
}
respBody, err := io.ReadAll(resp.Body)
if err != nil {
respBody = []byte(fmt.Sprintf("%s", err))
}
// Disable sn for data writing for 10 seconds.
sn.disabledUntil.Store(fasttime.UnixTimestamp() + 10)
return fmt.Errorf("unexpected status code returned when sending data block to %q: %d; want 2xx; response body: %q", reqURL, resp.StatusCode, respBody)
}
func (sn *storageNode) getRequestURL(path string) string {
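// For example, a node at the hypothetical address 10.0.0.1:9428 reached over plain HTTP
// yields "http://10.0.0.1:9428/internal/insert?version=v1" for the "/internal/insert" path.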
return fmt.Sprintf("%s://%s%s?version=%s", sn.scheme, sn.addr, path, url.QueryEscape(ProtocolVersion))
}
var zstdBufPool bytesutil.ByteBufferPool
// NewStorage returns new Storage for the given addrs with the given authCfgs.
//
// The concurrency is the average number of concurrent connections per each addr.
//
// If disableCompression is set, then the data is sent uncompressed to the remote storage.
//
// Call MustStop on the returned storage when it is no longer needed.
func NewStorage(addrs []string, authCfgs []*promauth.Config, isTLSs []bool, concurrency int, disableCompression bool) *Storage {
pendingDataBuffers := make(chan *bytesutil.ByteBuffer, concurrency*len(addrs))
for i := 0; i < cap(pendingDataBuffers); i++ {
pendingDataBuffers <- &bytesutil.ByteBuffer{}
}
s := &Storage{
disableCompression: disableCompression,
pendingDataBuffers: pendingDataBuffers,
stopCh: make(chan struct{}),
}
sns := make([]*storageNode, len(addrs))
for i, addr := range addrs {
sns[i] = newStorageNode(s, addr, authCfgs[i], isTLSs[i])
}
s.sns = sns
s.srt = newStreamRowsTracker(len(sns))
return s
}
// MustStop stops the s.
func (s *Storage) MustStop() {
close(s.stopCh)
s.wg.Wait()
s.sns = nil
}
// AddRow adds the given log row into s.
func (s *Storage) AddRow(streamHash uint64, r *logstorage.InsertRow) {
idx := s.srt.getNodeIdx(streamHash)
sn := s.sns[idx]
sn.addRow(r)
}
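// exampleIngest below is a hedged usage sketch of the Storage API above. The function itself is
// hypothetical and not part of the package; addrs, authCfgs and the log row are assumed to be
// prepared by the caller.
func exampleIngest(addrs []string, authCfgs []*promauth.Config, streamHash uint64, row *logstorage.InsertRow) {
    // Plain HTTP for every node in this sketch; set the corresponding entry to true for TLS.
    isTLSs := make([]bool, len(addrs))
    // 4 concurrent connections per node on average; compression enabled.
    s := NewStorage(addrs, authCfgs, isTLSs, 4, false)
    defer s.MustStop()
    // The row is routed to the storage node selected by streamHash.
    s.AddRow(streamHash, row)
}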
func (s *Storage) sendInsertRequestToAnyNode(pendingData *bytesutil.ByteBuffer) bool {
startIdx := int(fastrand.Uint32n(uint32(len(s.sns))))
for i := range s.sns {
idx := (startIdx + i) % len(s.sns)
sn := s.sns[idx]
err := sn.sendInsertRequest(pendingData)
if err == nil {
return true
}
if !errors.Is(err, errTemporarilyDisabled) {
logger.Warnf("cannot send pending data to the storage node %q: %s; trying to send it to another storage node", sn.addr, err)
}
}
return false
}
var errTemporarilyDisabled = fmt.Errorf("writing to the node is temporarily disabled")
type streamRowsTracker struct {
mu sync.Mutex
nodesCount int64
rowsPerStream map[uint64]uint64
}
func newStreamRowsTracker(nodesCount int) *streamRowsTracker {
return &streamRowsTracker{
nodesCount: int64(nodesCount),
rowsPerStream: make(map[uint64]uint64),
}
}
func (srt *streamRowsTracker) getNodeIdx(streamHash uint64) uint64 {
if srt.nodesCount == 1 {
// Fast path for a single node.
return 0
}
srt.mu.Lock()
defer srt.mu.Unlock()
streamRows := srt.rowsPerStream[streamHash] + 1
srt.rowsPerStream[streamHash] = streamRows
if streamRows <= 1000 {
// Write the initial rows for the stream to a single storage node for better data locality.
// This works well for log streams containing a small number of logs, since such streams
// are distributed evenly among the available storage nodes thanks to their distinct streamHash values.
return streamHash % uint64(srt.nodesCount)
}
// The log stream contains more than 1000 rows. Distribute them among storage nodes at random
// in order to improve query performance over this stream (the data for the log stream
// can be processed in parallel on all the storage nodes).
//
// The random distribution is preferred over round-robin distribution in order to avoid a possible
// dependency between the order of the ingested logs and the number of storage nodes,
// which could lead to a non-uniform distribution of logs among the storage nodes.
return uint64(fastrand.Uint32n(uint32(srt.nodesCount)))
}
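// pickNode below is a hypothetical, self-contained illustration of the routing rule implemented
// by getNodeIdx above: the first 1000 rows of a stream stick to the node derived from the stream
// hash, while later rows are spread across the nodes at random.
func pickNode(streamHash, rowsSeen uint64, nodesCount int) uint64 {
    if rowsSeen <= 1000 {
        return streamHash % uint64(nodesCount)
    }
    return uint64(fastrand.Uint32n(uint32(nodesCount)))
}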


@ -0,0 +1,57 @@
package netinsert
import (
"fmt"
"math"
"math/rand"
"testing"
"github.com/cespare/xxhash/v2"
)
func TestStreamRowsTracker(t *testing.T) {
f := func(rowsCount, streamsCount, nodesCount int) {
t.Helper()
// generate stream hashes
streamHashes := make([]uint64, streamsCount)
for i := range streamHashes {
streamHashes[i] = xxhash.Sum64([]byte(fmt.Sprintf("stream %d.", i)))
}
srt := newStreamRowsTracker(nodesCount)
rng := rand.New(rand.NewSource(0))
rowsPerNode := make([]uint64, nodesCount)
for i := 0; i < rowsCount; i++ {
streamIdx := rng.Intn(streamsCount)
h := streamHashes[streamIdx]
nodeIdx := srt.getNodeIdx(h)
rowsPerNode[nodeIdx]++
}
// Verify that rows are uniformly distributed among nodes.
expectedRowsPerNode := float64(rowsCount) / float64(nodesCount)
for nodeIdx, nodeRows := range rowsPerNode {
if math.Abs(float64(nodeRows)-expectedRowsPerNode)/expectedRowsPerNode > 0.15 {
t.Fatalf("non-uniform distribution of rows among nodes; node %d has %d rows, while it must have %v rows; rowsPerNode=%d",
nodeIdx, nodeRows, expectedRowsPerNode, rowsPerNode)
}
}
}
rowsCount := 10000
streamsCount := 9
nodesCount := 2
f(rowsCount, streamsCount, nodesCount)
rowsCount = 10000
streamsCount = 100
nodesCount = 2
f(rowsCount, streamsCount, nodesCount)
rowsCount = 100000
streamsCount = 1000
nodesCount = 9
f(rowsCount, streamsCount, nodesCount)
}


@ -0,0 +1,469 @@
package netselect
import (
"context"
"errors"
"fmt"
"io"
"math"
"net/http"
"net/url"
"strings"
"sync"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/contextutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
)
const (
// FieldNamesProtocolVersion is the version of the protocol used for /internal/select/field_names HTTP endpoint.
//
// It must be updated every time the protocol changes.
FieldNamesProtocolVersion = "v1"
// FieldValuesProtocolVersion is the version of the protocol used for /internal/select/field_values HTTP endpoint.
//
// It must be updated every time the protocol changes.
FieldValuesProtocolVersion = "v1"
// StreamFieldNamesProtocolVersion is the version of the protocol used for /internal/select/stream_field_names HTTP endpoint.
//
// It must be updated every time the protocol changes.
StreamFieldNamesProtocolVersion = "v1"
// StreamFieldValuesProtocolVersion is the version of the protocol used for /internal/select/stream_field_values HTTP endpoint.
//
// It must be updated every time the protocol changes.
StreamFieldValuesProtocolVersion = "v1"
// StreamsProtocolVersion is the version of the protocol used for /internal/select/streams HTTP endpoint.
//
// It must be updated every time the protocol changes.
StreamsProtocolVersion = "v1"
// StreamIDsProtocolVersion is the version of the protocol used for /internal/select/stream_ids HTTP endpoint.
//
// It must be updated every time the protocol changes.
StreamIDsProtocolVersion = "v1"
// QueryProtocolVersion is the version of the protocol used for /internal/select/query HTTP endpoint.
//
// It must be updated every time the protocol changes.
QueryProtocolVersion = "v1"
)
// Storage is a network storage for querying remote storage nodes in the cluster.
type Storage struct {
sns []*storageNode
disableCompression bool
}
type storageNode struct {
// scheme is http or https scheme to communicate with addr
scheme string
// addr is TCP address of the storage node to query
addr string
// s is a storage, which holds the given storageNode
s *Storage
// c is an http client used for querying storage node at addr.
c *http.Client
// ac is auth config used for setting request headers such as Authorization and Host.
ac *promauth.Config
}
func newStorageNode(s *Storage, addr string, ac *promauth.Config, isTLS bool) *storageNode {
tr := httputil.NewTransport(false, "vlselect_backend")
tr.TLSHandshakeTimeout = 20 * time.Second
tr.DisableCompression = true
scheme := "http"
if isTLS {
scheme = "https"
}
sn := &storageNode{
scheme: scheme,
addr: addr,
s: s,
c: &http.Client{
Transport: ac.NewRoundTripper(tr),
},
ac: ac,
}
return sn
}
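// runQuery executes q at the given storage node and calls processBlock for every DataBlock
// decoded from the length-prefixed (and optionally zstd-compressed) response stream.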
func (sn *storageNode) runQuery(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, processBlock func(db *logstorage.DataBlock)) error {
args := sn.getCommonArgs(QueryProtocolVersion, tenantIDs, q)
reqURL := sn.getRequestURL("/internal/select/query", args)
req, err := http.NewRequestWithContext(ctx, "GET", reqURL, nil)
if err != nil {
logger.Panicf("BUG: unexpected error when creating a request: %s", err)
}
if err := sn.ac.SetHeaders(req, true); err != nil {
return fmt.Errorf("cannot set auth headers for %q: %w", reqURL, err)
}
// send the request to the storage node
resp, err := sn.c.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
responseBody = []byte(err.Error())
}
return fmt.Errorf("unexpected status code for the request to %q: %d; want %d; response: %q", reqURL, resp.StatusCode, http.StatusOK, responseBody)
}
// read the response
var dataLenBuf [8]byte
var buf []byte
var db logstorage.DataBlock
var valuesBuf []string
for {
if _, err := io.ReadFull(resp.Body, dataLenBuf[:]); err != nil {
if errors.Is(err, io.EOF) {
// The end of response stream
return nil
}
return fmt.Errorf("cannot read block size from %q: %w", reqURL, err)
}
blockLen := encoding.UnmarshalUint64(dataLenBuf[:])
if blockLen > math.MaxInt {
return fmt.Errorf("too big data block: %d bytes; mustn't exceed %v bytes", blockLen, math.MaxInt)
}
buf = slicesutil.SetLength(buf, int(blockLen))
if _, err := io.ReadFull(resp.Body, buf); err != nil {
return fmt.Errorf("cannot read block with size of %d bytes from %q: %w", blockLen, reqURL, err)
}
src := buf
if !sn.s.disableCompression {
bufLen := len(buf)
var err error
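// zstd.Decompress appends the decompressed data to its dst argument, so passing buf as both
// dst and src keeps the compressed bytes in place and appends the plain data after them;
// the decompressed payload therefore starts at bufLen.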
buf, err = zstd.Decompress(buf, buf)
if err != nil {
return fmt.Errorf("cannot decompress data block: %w", err)
}
src = buf[bufLen:]
}
for len(src) > 0 {
tail, vb, err := db.UnmarshalInplace(src, valuesBuf[:0])
if err != nil {
return fmt.Errorf("cannot unmarshal data block received from %q: %w", reqURL, err)
}
valuesBuf = vb
src = tail
processBlock(&db)
clear(valuesBuf)
}
}
}
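// writeDataBlock below is a hypothetical server-side counterpart sketching the framing that
// runQuery expects from /internal/select/query: every data block is sent as an 8-byte length
// prefix (encoding.MarshalUint64) followed by the block payload, which is zstd-compressed
// unless compression is disabled. The real storage-node handler may differ in details.
func writeDataBlock(w io.Writer, payload []byte, compress bool) error {
    if compress {
        payload = zstd.CompressLevel(nil, payload, 1)
    }
    buf := encoding.MarshalUint64(nil, uint64(len(payload)))
    buf = append(buf, payload...)
    _, err := w.Write(buf)
    return err
}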
func (sn *storageNode) getFieldNames(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query) ([]logstorage.ValueWithHits, error) {
args := sn.getCommonArgs(FieldNamesProtocolVersion, tenantIDs, q)
return sn.getValuesWithHits(ctx, "/internal/select/field_names", args)
}
func (sn *storageNode) getFieldValues(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, fieldName string, limit uint64) ([]logstorage.ValueWithHits, error) {
args := sn.getCommonArgs(FieldValuesProtocolVersion, tenantIDs, q)
args.Set("field", fieldName)
args.Set("limit", fmt.Sprintf("%d", limit))
return sn.getValuesWithHits(ctx, "/internal/select/field_values", args)
}
func (sn *storageNode) getStreamFieldNames(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query) ([]logstorage.ValueWithHits, error) {
args := sn.getCommonArgs(StreamFieldNamesProtocolVersion, tenantIDs, q)
return sn.getValuesWithHits(ctx, "/internal/select/stream_field_names", args)
}
func (sn *storageNode) getStreamFieldValues(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, fieldName string, limit uint64) ([]logstorage.ValueWithHits, error) {
args := sn.getCommonArgs(StreamFieldValuesProtocolVersion, tenantIDs, q)
args.Set("field", fieldName)
args.Set("limit", fmt.Sprintf("%d", limit))
return sn.getValuesWithHits(ctx, "/internal/select/stream_field_values", args)
}
func (sn *storageNode) getStreams(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit uint64) ([]logstorage.ValueWithHits, error) {
args := sn.getCommonArgs(StreamsProtocolVersion, tenantIDs, q)
args.Set("limit", fmt.Sprintf("%d", limit))
return sn.getValuesWithHits(ctx, "/internal/select/streams", args)
}
func (sn *storageNode) getStreamIDs(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit uint64) ([]logstorage.ValueWithHits, error) {
args := sn.getCommonArgs(StreamIDsProtocolVersion, tenantIDs, q)
args.Set("limit", fmt.Sprintf("%d", limit))
return sn.getValuesWithHits(ctx, "/internal/select/stream_ids", args)
}
func (sn *storageNode) getCommonArgs(version string, tenantIDs []logstorage.TenantID, q *logstorage.Query) url.Values {
args := url.Values{}
args.Set("version", version)
args.Set("tenant_ids", string(logstorage.MarshalTenantIDs(nil, tenantIDs)))
args.Set("query", q.String())
args.Set("timestamp", fmt.Sprintf("%d", q.GetTimestamp()))
args.Set("disable_compression", fmt.Sprintf("%v", sn.s.disableCompression))
return args
}
func (sn *storageNode) getValuesWithHits(ctx context.Context, path string, args url.Values) ([]logstorage.ValueWithHits, error) {
data, err := sn.executeRequestAt(ctx, path, args)
if err != nil {
return nil, err
}
return unmarshalValuesWithHits(data)
}
func (sn *storageNode) executeRequestAt(ctx context.Context, path string, args url.Values) ([]byte, error) {
reqURL := sn.getRequestURL(path, args)
req, err := http.NewRequestWithContext(ctx, "GET", reqURL, nil)
if err != nil {
logger.Panicf("BUG: unexpected error when creating a request: %s", err)
}
// send the request to the storage node
resp, err := sn.c.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
responseBody = []byte(err.Error())
}
return nil, fmt.Errorf("unexpected status code for the request to %q: %d; want %d; response: %q", reqURL, resp.StatusCode, http.StatusOK, responseBody)
}
// read the response
var bb bytesutil.ByteBuffer
if _, err := bb.ReadFrom(resp.Body); err != nil {
return nil, fmt.Errorf("cannot read response from %q: %w", reqURL, err)
}
if sn.s.disableCompression {
return bb.B, nil
}
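// The same append-style decompression as in runQuery above: the plain data is appended to
// bb.B after the compressed bytes, so the decompressed payload starts at bbLen.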
bbLen := len(bb.B)
bb.B, err = zstd.Decompress(bb.B, bb.B)
if err != nil {
return nil, err
}
return bb.B[bbLen:], nil
}
func (sn *storageNode) getRequestURL(path string, args url.Values) string {
return fmt.Sprintf("%s://%s%s?%s", sn.scheme, sn.addr, path, args.Encode())
}
// NewStorage returns new Storage for the given addrs and the given authCfgs.
//
// If disableCompression is set, then uncompressed responses are received from storage nodes.
//
// Call MustStop on the returned storage when it is no longer needed.
func NewStorage(addrs []string, authCfgs []*promauth.Config, isTLSs []bool, disableCompression bool) *Storage {
s := &Storage{
disableCompression: disableCompression,
}
sns := make([]*storageNode, len(addrs))
for i, addr := range addrs {
sns[i] = newStorageNode(s, addr, authCfgs[i], isTLSs[i])
}
s.sns = sns
return s
}
// MustStop stops the s.
func (s *Storage) MustStop() {
s.sns = nil
}
// RunQuery runs the given q and calls writeBlock for the returned data blocks
func (s *Storage) RunQuery(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, writeBlock logstorage.WriteDataBlockFunc) error {
nqr, err := logstorage.NewNetQueryRunner(ctx, tenantIDs, q, s.RunQuery, writeBlock)
if err != nil {
return err
}
search := func(stopCh <-chan struct{}, q *logstorage.Query, writeBlock logstorage.WriteDataBlockFunc) error {
return s.runQuery(stopCh, tenantIDs, q, writeBlock)
}
concurrency := q.GetConcurrency()
return nqr.Run(ctx, concurrency, search)
}
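// runQuery queries all the storage nodes in parallel and streams the received data blocks
// to writeBlock; the first error cancels the remaining per-node queries.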
func (s *Storage) runQuery(stopCh <-chan struct{}, tenantIDs []logstorage.TenantID, q *logstorage.Query, writeBlock logstorage.WriteDataBlockFunc) error {
ctxWithCancel, cancel := contextutil.NewStopChanContext(stopCh)
defer cancel()
errs := make([]error, len(s.sns))
var wg sync.WaitGroup
for i := range s.sns {
wg.Add(1)
go func(nodeIdx int) {
defer wg.Done()
sn := s.sns[nodeIdx]
err := sn.runQuery(ctxWithCancel, tenantIDs, q, func(db *logstorage.DataBlock) {
writeBlock(uint(nodeIdx), db)
})
if err != nil {
// Cancel the remaining parallel queries
cancel()
}
errs[nodeIdx] = err
}(i)
}
wg.Wait()
return getFirstNonCancelError(errs)
}
// GetFieldNames executes q and returns field names seen in results.
func (s *Storage) GetFieldNames(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query) ([]logstorage.ValueWithHits, error) {
return s.getValuesWithHits(ctx, 0, false, func(ctx context.Context, sn *storageNode) ([]logstorage.ValueWithHits, error) {
return sn.getFieldNames(ctx, tenantIDs, q)
})
}
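// exampleFieldNames below is a hypothetical usage sketch: it assumes the query and tenant IDs
// have already been parsed by the caller and simply prints the field names seen in the results.
func exampleFieldNames(ctx context.Context, s *Storage, tenantIDs []logstorage.TenantID, q *logstorage.Query) error {
    vhs, err := s.GetFieldNames(ctx, tenantIDs, q)
    if err != nil {
        return err
    }
    for _, vh := range vhs {
        fmt.Printf("%s\n", vh.Value)
    }
    return nil
}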
// GetFieldValues executes q and returns unique values for the fieldName seen in results.
//
// If limit > 0, then up to limit unique values are returned.
func (s *Storage) GetFieldValues(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, fieldName string, limit uint64) ([]logstorage.ValueWithHits, error) {
return s.getValuesWithHits(ctx, limit, true, func(ctx context.Context, sn *storageNode) ([]logstorage.ValueWithHits, error) {
return sn.getFieldValues(ctx, tenantIDs, q, fieldName, limit)
})
}
// GetStreamFieldNames executes q and returns stream field names seen in results.
func (s *Storage) GetStreamFieldNames(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query) ([]logstorage.ValueWithHits, error) {
return s.getValuesWithHits(ctx, 0, false, func(ctx context.Context, sn *storageNode) ([]logstorage.ValueWithHits, error) {
return sn.getStreamFieldNames(ctx, tenantIDs, q)
})
}
// GetStreamFieldValues executes q and returns stream field values for the given fieldName seen in results.
//
// If limit > 0, then up to limit unique stream field values are returned.
func (s *Storage) GetStreamFieldValues(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, fieldName string, limit uint64) ([]logstorage.ValueWithHits, error) {
return s.getValuesWithHits(ctx, limit, true, func(ctx context.Context, sn *storageNode) ([]logstorage.ValueWithHits, error) {
return sn.getStreamFieldValues(ctx, tenantIDs, q, fieldName, limit)
})
}
// GetStreams executes q and returns streams seen in query results.
//
// If limit > 0, then up to limit unique streams are returned.
func (s *Storage) GetStreams(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit uint64) ([]logstorage.ValueWithHits, error) {
return s.getValuesWithHits(ctx, limit, true, func(ctx context.Context, sn *storageNode) ([]logstorage.ValueWithHits, error) {
return sn.getStreams(ctx, tenantIDs, q, limit)
})
}
// GetStreamIDs executes q and returns streamIDs seen in query results.
//
// If limit > 0, then up to limit unique streamIDs are returned.
func (s *Storage) GetStreamIDs(ctx context.Context, tenantIDs []logstorage.TenantID, q *logstorage.Query, limit uint64) ([]logstorage.ValueWithHits, error) {
return s.getValuesWithHits(ctx, limit, true, func(ctx context.Context, sn *storageNode) ([]logstorage.ValueWithHits, error) {
return sn.getStreamIDs(ctx, tenantIDs, q, limit)
})
}
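// getValuesWithHits queries all the storage nodes in parallel via callback, cancels the remaining
// requests on the first error and merges the per-node results with logstorage.MergeValuesWithHits.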
func (s *Storage) getValuesWithHits(ctx context.Context, limit uint64, resetHitsOnLimitExceeded bool,
callback func(ctx context.Context, sn *storageNode) ([]logstorage.ValueWithHits, error)) ([]logstorage.ValueWithHits, error) {
ctxWithCancel, cancel := context.WithCancel(ctx)
defer cancel()
results := make([][]logstorage.ValueWithHits, len(s.sns))
errs := make([]error, len(s.sns))
var wg sync.WaitGroup
for i := range s.sns {
wg.Add(1)
go func(nodeIdx int) {
defer wg.Done()
sn := s.sns[nodeIdx]
vhs, err := callback(ctxWithCancel, sn)
results[nodeIdx] = vhs
errs[nodeIdx] = err
if err != nil {
// Cancel the remaining parallel requests
cancel()
}
}(i)
}
wg.Wait()
if err := getFirstNonCancelError(errs); err != nil {
return nil, err
}
vhs := logstorage.MergeValuesWithHits(results, limit, resetHitsOnLimitExceeded)
return vhs, nil
}
func getFirstNonCancelError(errs []error) error {
for _, err := range errs {
if err != nil && !errors.Is(err, context.Canceled) {
return err
}
}
return nil
}
func unmarshalValuesWithHits(src []byte) ([]logstorage.ValueWithHits, error) {
var vhs []logstorage.ValueWithHits
for len(src) > 0 {
var vh logstorage.ValueWithHits
tail, err := vh.UnmarshalInplace(src)
if err != nil {
return nil, fmt.Errorf("cannot unmarshal ValueWithHits #%d: %w", len(vhs), err)
}
src = tail
// Clone vh.Value, since it points to src.
vh.Value = strings.Clone(vh.Value)
vhs = append(vhs, vh)
}
return vhs, nil
}


@ -1,3 +1,3 @@
See vmagent docs [here](https://docs.victoriametrics.com/vmagent/).
See vmagent docs [here](https://docs.victoriametrics.com/victoriametrics/vmagent/).
vmagent docs can be edited at [docs/vmagent.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/vmagent.md).
vmagent docs can be edited at [docs/vmagent.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/victoriametrics/vmagent.md).


@ -7,9 +7,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/csvimport/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@ -22,16 +22,16 @@ var (
// InsertHandler processes csv data from req.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
return stream.Parse(req, func(rows []parser.Row) error {
return stream.Parse(req, func(rows []csvimport.Row) error {
return insertRows(at, rows, extraLabels)
})
}
func insertRows(at *auth.Token, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, rows []csvimport.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)


@ -7,10 +7,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogsketches/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@ -23,7 +23,7 @@ var (
// InsertHandlerForHTTP processes remote write for DataDog POST /api/beta/sketches request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
@ -56,7 +56,7 @@ func insertRows(at *auth.Token, sketches []*datadogsketches.Sketch, extraLabels
})
}
for _, tag := range sketch.Tags {
name, value := datadogutils.SplitTag(tag)
name, value := datadogutil.SplitTag(tag)
if name == "host" {
name = "exported_host"
}


@ -7,10 +7,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv1/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@ -23,7 +23,7 @@ var (
// InsertHandlerForHTTP processes remote write for DataDog POST /api/v1/series request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
@ -62,7 +62,7 @@ func insertRows(at *auth.Token, series []datadogv1.Series, extraLabels []prompbm
})
}
for _, tag := range ss.Tags {
name, value := datadogutils.SplitTag(tag)
name, value := datadogutil.SplitTag(tag)
if name == "host" {
name = "exported_host"
}


@ -7,10 +7,10 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/datadogv2/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@ -25,7 +25,7 @@ var (
//
// See https://docs.datadoghq.com/api/latest/metrics/#submit-metrics
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
@ -65,7 +65,7 @@ func insertRows(at *auth.Token, series []datadogv2.Series, extraLabels []prompbm
})
}
for _, tag := range ss.Tags {
name, value := datadogutils.SplitTag(tag)
name, value := datadogutil.SplitTag(tag)
if name == "host" {
name = "exported_host"
}


@ -21,7 +21,7 @@ var (
//
// See https://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol
func InsertHandler(r io.Reader) error {
return stream.Parse(r, false, func(rows []parser.Row) error {
return stream.Parse(r, "", func(rows []parser.Row) error {
return insertRows(nil, rows)
})
}


@ -12,9 +12,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/influx/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@ -35,8 +35,8 @@ var (
// InsertHandlerForReader processes remote write for influx line protocol.
//
// See https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener/
func InsertHandlerForReader(at *auth.Token, r io.Reader, isGzipped bool) error {
return stream.Parse(r, true, isGzipped, "", "", func(db string, rows []parser.Row) error {
func InsertHandlerForReader(at *auth.Token, r io.Reader, encoding string) error {
return stream.Parse(r, encoding, true, "", "", func(db string, rows []influx.Row) error {
return insertRows(at, db, rows, nil)
})
}
@ -45,22 +45,22 @@ func InsertHandlerForReader(at *auth.Token, r io.Reader, isGzipped bool) error {
//
// See https://github.com/influxdata/influxdb/blob/4cbdc197b8117fee648d62e2e5be75c6575352f0/tsdb/README.md
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
isStreamMode := req.Header.Get("Stream-Mode") == "1"
q := req.URL.Query()
precision := q.Get("precision")
// Read db tag from https://docs.influxdata.com/influxdb/v1.7/tools/api/#write-http-endpoint
db := q.Get("db")
return stream.Parse(req.Body, isStreamMode, isGzipped, precision, db, func(db string, rows []parser.Row) error {
encoding := req.Header.Get("Content-Encoding")
isStreamMode := req.Header.Get("Stream-Mode") == "1"
return stream.Parse(req.Body, encoding, isStreamMode, precision, db, func(db string, rows []influx.Row) error {
return insertRows(at, db, rows, extraLabels)
})
}
func insertRows(at *auth.Token, db string, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, db string, rows []influx.Row, extraLabels []prompbmarshal.Label) error {
ctx := getPushCtx()
defer putPushCtx(ctx)


@ -34,7 +34,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/influxutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/influxutil"
graphiteserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/graphite"
influxserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/influx"
opentsdbserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/opentsdb"
@ -42,8 +42,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/pushmetrics"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/stringsutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
@ -105,7 +105,7 @@ func main() {
// Some workloads may need increased GOGC values. Then such values can be set via GOGC environment variable.
// It is recommended increasing GOGC if go_memstats_gc_cpu_fraction metric exposed at /metrics page
// exceeds 0.05 for extended periods of time.
cgroup.SetGOGC(30)
cgroup.SetGOGC(50)
// Write flags and help message to stdout, since it is easier to grep or pipe.
flag.CommandLine.SetOutput(os.Stdout)
@ -145,10 +145,10 @@ func main() {
startTime := time.Now()
remotewrite.StartIngestionRateLimiter()
remotewrite.Init()
common.StartUnmarshalWorkers()
protoparserutil.StartUnmarshalWorkers()
if len(*influxListenAddr) > 0 {
influxServer = influxserver.MustStart(*influxListenAddr, *influxUseProxyProtocol, func(r io.Reader) error {
return influx.InsertHandlerForReader(nil, r, false)
return influx.InsertHandlerForReader(nil, r, "")
})
}
if len(*graphiteListenAddr) > 0 {
@ -195,7 +195,7 @@ func main() {
if len(*opentsdbHTTPListenAddr) > 0 {
opentsdbhttpServer.MustStop()
}
common.StopUnmarshalWorkers()
protoparserutil.StopUnmarshalWorkers()
remotewrite.Stop()
logger.Infof("successfully stopped vmagent in %.3f seconds", time.Since(startTime).Seconds())
@ -242,7 +242,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
}
w.Header().Add("Content-Type", "text/html; charset=utf-8")
fmt.Fprintf(w, "<h2>vmagent</h2>")
fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/vmagent/'>https://docs.victoriametrics.com/vmagent/</a></br>")
fmt.Fprintf(w, "See docs at <a href='https://docs.victoriametrics.com/victoriametrics/vmagent/'>https://docs.victoriametrics.com/victoriametrics/vmagent/</a></br>")
fmt.Fprintf(w, "Useful endpoints:</br>")
httpserver.WriteAPIHelp(w, [][2]string{
{"targets", "status for discovered active targets"},
@ -282,7 +282,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
}
switch path {
case "/prometheus/api/v1/write", "/api/v1/write", "/api/v1/push", "/prometheus/api/v1/push":
if common.HandleVMProtoServerHandshake(w, r) {
if protoparserutil.HandleVMProtoServerHandshake(w, r) {
return true
}
prometheusWriteRequests.Inc()
@ -331,11 +331,11 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
return true
case "/influx/query", "/query":
influxQueryRequests.Inc()
influxutils.WriteDatabaseNames(w)
influxutil.WriteDatabaseNames(w)
return true
case "/influx/health":
influxHealthRequests.Inc()
influxutils.WriteHealthCheckResponse(w)
influxutil.WriteHealthCheckResponse(w)
return true
case "/opentelemetry/api/v1/push", "/opentelemetry/v1/metrics":
opentelemetryPushRequests.Inc()
@ -443,8 +443,10 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
case "/prometheus/api/v1/targets", "/api/v1/targets":
promscrapeAPIV1TargetsRequests.Inc()
w.Header().Set("Content-Type", "application/json")
// https://prometheus.io/docs/prometheus/latest/querying/api/#targets
state := r.FormValue("state")
promscrape.WriteAPIV1Targets(w, state)
scrapePool := r.FormValue("scrapePool")
promscrape.WriteAPIV1Targets(w, state, scrapePool)
return true
case "/prometheus/target_response", "/target_response":
promscrapeTargetResponseRequests.Inc()
@ -587,11 +589,11 @@ func processMultitenantRequest(w http.ResponseWriter, r *http.Request, path stri
return true
case "influx/query":
influxQueryRequests.Inc()
influxutils.WriteDatabaseNames(w)
influxutil.WriteDatabaseNames(w)
return true
case "influx/health":
influxHealthRequests.Inc()
influxutils.WriteHealthCheckResponse(w)
influxutil.WriteHealthCheckResponse(w)
return true
case "opentelemetry/api/v1/push", "opentelemetry/v1/metrics":
opentelemetryPushRequests.Inc()
@ -750,7 +752,7 @@ func usage() {
const s = `
vmagent collects metrics data via popular data ingestion protocols and routes it to VictoriaMetrics.
See the docs at https://docs.victoriametrics.com/vmagent/ .
See the docs at https://docs.victoriametrics.com/victoriametrics/vmagent/ .
`
flagutil.Usage(s)
}


@ -9,8 +9,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/native/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@ -25,12 +25,12 @@ var (
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzip := req.Header.Get("Content-Encoding") == "gzip"
return stream.Parse(req.Body, isGzip, func(block *stream.Block) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(block *stream.Block) error {
return insertRows(at, block, extraLabels)
})
}


@ -10,9 +10,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/newrelic"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/newrelic/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
)
@ -24,13 +24,12 @@ var (
// InsertHandlerForHTTP processes remote write for NewRelic POST /infra/v2/metrics/events/bulk request.
func InsertHandlerForHTTP(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
ce := req.Header.Get("Content-Encoding")
isGzip := ce == "gzip"
return stream.Parse(req.Body, isGzip, func(rows []newrelic.Row) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(rows []newrelic.Row) error {
return insertRows(at, rows, extraLabels)
})
}


@ -8,9 +8,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/firehose"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@ -23,11 +23,11 @@ var (
// InsertHandler processes opentelemetry metrics.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
encoding := req.Header.Get("Content-Encoding")
var processBody func([]byte) ([]byte, error)
if req.Header.Get("Content-Type") == "application/json" {
if req.Header.Get("X-Amz-Firehose-Protocol-Version") != "" {
@ -36,7 +36,7 @@ func InsertHandler(at *auth.Token, req *http.Request) error {
return fmt.Errorf("json encoding isn't supported for opentelemetry format. Use protobuf encoding")
}
}
return stream.ParseStream(req.Body, isGzipped, processBody, func(tss []prompbmarshal.TimeSeries) error {
return stream.ParseStream(req.Body, encoding, processBody, func(tss []prompbmarshal.TimeSeries) error {
return insertRows(at, tss, extraLabels)
})
}


@ -7,9 +7,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentsdbhttp/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/metrics"
)
@ -21,16 +21,16 @@ var (
// InsertHandler processes HTTP OpenTSDB put requests.
// See http://opentsdb.net/docs/build/html/api_http/put.html
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
return stream.Parse(req, func(rows []parser.Row) error {
return stream.Parse(req, func(rows []opentsdbhttp.Row) error {
return insertRows(at, rows, extraLabels)
})
}
func insertRows(at *auth.Token, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, rows []opentsdbhttp.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)


@ -8,9 +8,9 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@ -23,23 +23,23 @@ var (
// InsertHandler processes `/api/v1/import/prometheus` request.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
defaultTimestamp, err := parserCommon.GetTimestamp(req)
defaultTimestamp, err := protoparserutil.GetTimestamp(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
return stream.Parse(req.Body, defaultTimestamp, isGzipped, true, func(rows []parser.Row) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, defaultTimestamp, encoding, true, func(rows []prometheus.Row) error {
return insertRows(at, rows, extraLabels)
}, func(s string) {
httpserver.LogError(req, s)
})
}
func insertRows(at *auth.Token, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, rows []prometheus.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)


@ -12,7 +12,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
)
var (
@ -26,7 +26,7 @@ func TestInsertHandler(t *testing.T) {
req := httptest.NewRequest(http.MethodPost, "/insert/0/api/v1/import/prometheus", bytes.NewBufferString(`{"foo":"bar"}
go_memstats_alloc_bytes_total 1`))
if err := InsertHandler(nil, req); err != nil {
t.Fatalf("unxepected error %s", err)
t.Fatalf("unexpected error %s", err)
}
expectedMsg := "cannot unmarshal Prometheus line"
if !strings.Contains(testOutput.String(), expectedMsg) {
@ -44,14 +44,14 @@ func setUp() {
log.Fatalf("unable to set %q with value %q, err: %v", remoteWriteFlag, srv.URL, err)
}
logger.Init()
common.StartUnmarshalWorkers()
protoparserutil.StartUnmarshalWorkers()
remotewrite.Init()
testOutput = &bytes.Buffer{}
logger.SetOutputForTests(testOutput)
}
func tearDown() {
common.StopUnmarshalWorkers()
protoparserutil.StopUnmarshalWorkers()
srv.Close()
logger.ResetOutputForTest()
tmpDataDir := flag.Lookup("remoteWrite.tmpDataPath").Value.String()


@ -8,8 +8,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/auth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/promremotewrite/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
)
@ -22,7 +22,7 @@ var (
// InsertHandler processes remote write for prometheus.
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}


@ -10,26 +10,29 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/awsapi"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding/zstd"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timerpool"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
)
var (
forcePromProto = flagutil.NewArrayBool("remoteWrite.forcePromProto", "Whether to force Prometheus remote write protocol for sending data "+
"to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol")
"to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol")
forceVMProto = flagutil.NewArrayBool("remoteWrite.forceVMProto", "Whether to force VictoriaMetrics remote write protocol for sending data "+
"to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol")
"to the corresponding -remoteWrite.url . See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol")
rateLimit = flagutil.NewArrayInt("remoteWrite.rateLimit", 0, "Optional rate limit in bytes per second for data sent to the corresponding -remoteWrite.url. "+
"By default, the rate limit is disabled. It can be useful for limiting load on remote storage when big amounts of buffered data "+
@ -87,7 +90,8 @@ type client struct {
remoteWriteURL string
// Whether to use VictoriaMetrics remote write protocol for sending the data to remoteWriteURL
useVMProto bool
useVMProto atomic.Bool
canDowngradeVMProto atomic.Bool
fq *persistentqueue.FastQueue
hc *http.Client
@ -124,14 +128,14 @@ func newHTTPClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persiste
if err != nil {
logger.Fatalf("cannot initialize AWS Config for -remoteWrite.url=%q: %s", remoteWriteURL, err)
}
tr := &http.Transport{
DialContext: netutil.NewStatDialFunc("vmagent_remotewrite"),
TLSHandshakeTimeout: tlsHandshakeTimeout.GetOptionalArg(argIdx),
MaxConnsPerHost: 2 * concurrency,
MaxIdleConnsPerHost: 2 * concurrency,
IdleConnTimeout: time.Minute,
WriteBufferSize: 64 * 1024,
}
tr := httputil.NewTransport(false, "vmagent_remotewrite")
tr.TLSHandshakeTimeout = tlsHandshakeTimeout.GetOptionalArg(argIdx)
tr.MaxConnsPerHost = 2 * concurrency
tr.MaxIdleConnsPerHost = 2 * concurrency
tr.IdleConnTimeout = time.Minute
tr.WriteBufferSize = 64 * 1024
pURL := proxyURL.GetOptionalArg(argIdx)
if len(pURL) > 0 {
if !strings.Contains(pURL, "://") {
@ -166,17 +170,11 @@ func newHTTPClient(argIdx int, remoteWriteURL, sanitizedURL string, fq *persiste
logger.Fatalf("-remoteWrite.useVMProto and -remoteWrite.usePromProto cannot be set simultaneously for -remoteWrite.url=%s", sanitizedURL)
}
if !useVMProto && !usePromProto {
// Auto-detect whether the remote storage supports VictoriaMetrics remote write protocol.
doRequest := func(url string) (*http.Response, error) {
return c.doRequest(url, nil)
}
useVMProto = common.HandleVMProtoClientHandshake(c.remoteWriteURL, doRequest)
if !useVMProto {
logger.Infof("the remote storage at %q doesn't support VictoriaMetrics remote write protocol. Switching to Prometheus remote write protocol. "+
"See https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol", sanitizedURL)
}
// The VM protocol could be downgraded later at runtime if unsupported media type response status is received.
useVMProto = true
c.canDowngradeVMProto.Store(true)
}
c.useVMProto = useVMProto
c.useVMProto.Store(useVMProto)
return c
}
@ -384,7 +382,7 @@ func (c *client) newRequest(url string, body []byte) (*http.Request, error) {
h := req.Header
h.Set("User-Agent", "vmagent")
h.Set("Content-Type", "application/x-protobuf")
if c.useVMProto {
if encoding.IsZstd(body) {
h.Set("Content-Encoding", "zstd")
h.Set("X-VictoriaMetrics-Remote-Write-Version", "1")
} else {
@ -420,7 +418,7 @@ again:
if retryDuration > maxRetryDuration {
retryDuration = maxRetryDuration
}
logger.Warnf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %.3f seconds",
remoteWriteRetryLogger.Warnf("couldn't send a block with size %d bytes to %q: %s; re-sending the block in %.3f seconds",
len(block), c.sanitizedURL, err, retryDuration.Seconds())
t := timerpool.Get(retryDuration)
select {
@ -433,6 +431,7 @@ again:
c.retriesCount.Inc()
goto again
}
statusCode := resp.StatusCode
if statusCode/100 == 2 {
_ = resp.Body.Close()
@ -441,24 +440,46 @@ again:
c.blocksSent.Inc()
return true
}
metrics.GetOrCreateCounter(fmt.Sprintf(`vmagent_remotewrite_requests_total{url=%q, status_code="%d"}`, c.sanitizedURL, statusCode)).Inc()
if statusCode == 409 || statusCode == 400 {
body, err := io.ReadAll(resp.Body)
_ = resp.Body.Close()
if err != nil {
remoteWriteRejectedLogger.Errorf("sending a block with size %d bytes to %q was rejected (skipping the block): status code %d; "+
"failed to read response body: %s",
len(block), c.sanitizedURL, statusCode, err)
} else {
remoteWriteRejectedLogger.Errorf("sending a block with size %d bytes to %q was rejected (skipping the block): status code %d; response body: %s",
len(block), c.sanitizedURL, statusCode, string(body))
}
// Just drop block on 409 and 400 status codes like Prometheus does.
if statusCode == 409 {
logBlockRejected(block, c.sanitizedURL, resp)
// Just drop block on 409 status code like Prometheus does.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/873
// and https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1149
_ = resp.Body.Close()
c.packetsDropped.Inc()
return true
// - Remote Write v1 specification implicitly expects a `400 Bad Request` when the encoding is not supported.
// - Remote Write v2 specification explicitly specifies a `415 Unsupported Media Type` for unsupported encodings.
// - Real-world implementations of v1 use both 400 and 415 status codes.
// See more in research: https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8462#issuecomment-2786918054
} else if statusCode == 415 || statusCode == 400 {
if c.canDowngradeVMProto.Swap(false) {
logger.Infof("received unsupported media type or bad request from remote storage at %q. Downgrading protocol from VictoriaMetrics to Prometheus remote write for all future requests. "+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
c.useVMProto.Store(false)
}
if encoding.IsZstd(block) {
logger.Infof("received unsupported media type or bad request from remote storage at %q. Re-packing the block to Prometheus remote write and retrying."+
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol", c.sanitizedURL)
block = mustRepackBlockFromZstdToSnappy(block)
c.retriesCount.Inc()
_ = resp.Body.Close()
goto again
}
// Just drop snappy blocks on 400 or 415 status codes like Prometheus does.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/873
// and https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1149
logBlockRejected(block, c.sanitizedURL, resp)
_ = resp.Body.Close()
c.packetsDropped.Inc()
return true
}
// Unexpected status code returned
@ -488,6 +509,7 @@ again:
}
var remoteWriteRejectedLogger = logger.WithThrottler("remoteWriteRejected", 5*time.Second)
var remoteWriteRetryLogger = logger.WithThrottler("remoteWriteRetry", 5*time.Second)
// getRetryDuration returns retry duration.
// retryAfterDuration has the highest priority.
@ -510,6 +532,28 @@ func getRetryDuration(retryAfterDuration, retryDuration, maxRetryDuration time.D
return retryDuration
}
func mustRepackBlockFromZstdToSnappy(zstdBlock []byte) []byte {
plainBlock := make([]byte, 0, len(zstdBlock)*2)
plainBlock, err := zstd.Decompress(plainBlock, zstdBlock)
if err != nil {
logger.Panicf("FATAL: cannot re-pack block with size %d bytes from Zstd to Snappy: %s", len(zstdBlock), err)
}
return snappy.Encode(nil, plainBlock)
}
func logBlockRejected(block []byte, sanitizedURL string, resp *http.Response) {
body, err := io.ReadAll(resp.Body)
if err != nil {
remoteWriteRejectedLogger.Errorf("sending a block with size %d bytes to %q was rejected (skipping the block): status code %d; "+
"failed to read response body: %s",
len(block), sanitizedURL, resp.StatusCode, err)
} else {
remoteWriteRejectedLogger.Errorf("sending a block with size %d bytes to %q was rejected (skipping the block): status code %d; response body: %s",
len(block), sanitizedURL, resp.StatusCode, string(body))
}
}
// parseRetryAfterHeader parses `Retry-After` value retrieved from HTTP response header.
// retryAfterString should be in either HTTP-date or a number of seconds.
// It will return time.Duration(0) if `retryAfterString` does not follow RFC 7231.


@ -5,6 +5,9 @@ import (
"net/http"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/golang/snappy"
)
func TestCalculateRetryDuration(t *testing.T) {
@ -97,3 +100,19 @@ func helper(d time.Duration) time.Duration {
return d + dv
}
func TestRepackBlockFromZstdToSnappy(t *testing.T) {
expectedPlainBlock := []byte(`foobar`)
zstdBlock := encoding.CompressZSTDLevel(nil, expectedPlainBlock, 1)
snappyBlock := mustRepackBlockFromZstdToSnappy(zstdBlock)
actualPlainBlock, err := snappy.Decode(nil, snappyBlock)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if string(actualPlainBlock) != string(expectedPlainBlock) {
t.Fatalf("unexpected plain block; got %q; want %q", actualPlainBlock, expectedPlainBlock)
}
}


@ -16,6 +16,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/slicesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeutil"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
@ -28,7 +29,7 @@ var (
maxRowsPerBlock = flag.Int("remoteWrite.maxRowsPerBlock", 10000, "The maximum number of samples to send in each block to remote storage. Higher number may improve performance at the cost of the increased memory usage. See also -remoteWrite.maxBlockSize")
vmProtoCompressLevel = flag.Int("remoteWrite.vmProtoCompressLevel", 0, "The compression level for VictoriaMetrics remote write protocol. "+
"Higher values reduce network traffic at the cost of higher CPU usage. Negative values reduce CPU usage at the cost of increased network traffic. "+
"See https://docs.victoriametrics.com/vmagent/#victoriametrics-remote-write-protocol")
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#victoriametrics-remote-write-protocol")
)
type pendingSeries struct {
@ -39,7 +40,7 @@ type pendingSeries struct {
periodicFlusherWG sync.WaitGroup
}
func newPendingSeries(fq *persistentqueue.FastQueue, isVMRemoteWrite bool, significantFigures, roundDigits int) *pendingSeries {
func newPendingSeries(fq *persistentqueue.FastQueue, isVMRemoteWrite *atomic.Bool, significantFigures, roundDigits int) *pendingSeries {
var ps pendingSeries
ps.wr.fq = fq
ps.wr.isVMRemoteWrite = isVMRemoteWrite
@ -99,7 +100,7 @@ type writeRequest struct {
fq *persistentqueue.FastQueue
// Whether to encode the write request with VictoriaMetrics remote write protocol.
isVMRemoteWrite bool
isVMRemoteWrite *atomic.Bool
// How many significant figures must be left before sending the writeRequest to fq.
significantFigures int
@ -118,7 +119,7 @@ type writeRequest struct {
}
func (wr *writeRequest) reset() {
// Do not reset lastFlushTime, fq, isVMRemoteWrite, significantFigures and roundDigits, since they are re-used.
// Do not reset lastFlushTime, fq, isVMRemoteWrite, significantFigures and roundDigits, since they are reused.
wr.wr.Timeseries = nil
@ -137,7 +138,7 @@ func (wr *writeRequest) reset() {
// This is needed in order to properly save in-memory data to persistent queue on graceful shutdown.
func (wr *writeRequest) mustFlushOnStop() {
wr.wr.Timeseries = wr.tss
if !tryPushWriteRequest(&wr.wr, wr.mustWriteBlock, wr.isVMRemoteWrite) {
if !tryPushWriteRequest(&wr.wr, wr.mustWriteBlock, wr.isVMRemoteWrite.Load()) {
logger.Panicf("BUG: final flush must always return true")
}
wr.reset()
@ -151,7 +152,7 @@ func (wr *writeRequest) mustWriteBlock(block []byte) bool {
func (wr *writeRequest) tryFlush() bool {
wr.wr.Timeseries = wr.tss
wr.lastFlushTime.Store(fasttime.UnixTimestamp())
if !tryPushWriteRequest(&wr.wr, wr.fq.TryWriteBlock, wr.isVMRemoteWrite) {
if !tryPushWriteRequest(&wr.wr, wr.fq.TryWriteBlock, wr.isVMRemoteWrite.Load()) {
return false
}
wr.reset()
@ -197,28 +198,43 @@ func (wr *writeRequest) tryPush(src []prompbmarshal.TimeSeries) bool {
}
func (wr *writeRequest) copyTimeSeries(dst, src *prompbmarshal.TimeSeries) {
labelsDst := wr.labels
labelsSrc := src.Labels
// Pre-allocate memory for labels.
labelsLen := len(wr.labels)
samplesDst := wr.samples
buf := wr.buf
for i := range src.Labels {
labelsDst = append(labelsDst, prompbmarshal.Label{})
dstLabel := &labelsDst[len(labelsDst)-1]
srcLabel := &src.Labels[i]
wr.labels = slicesutil.SetLength(wr.labels, labelsLen+len(labelsSrc))
labelsDst := wr.labels[labelsLen:]
buf = append(buf, srcLabel.Name...)
dstLabel.Name = bytesutil.ToUnsafeString(buf[len(buf)-len(srcLabel.Name):])
buf = append(buf, srcLabel.Value...)
dstLabel.Value = bytesutil.ToUnsafeString(buf[len(buf)-len(srcLabel.Value):])
// Pre-allocate memory for byte slice needed for storing label names and values.
neededBufLen := 0
for i := range labelsSrc {
label := &labelsSrc[i]
neededBufLen += len(label.Name) + len(label.Value)
}
dst.Labels = labelsDst[labelsLen:]
bufLen := len(wr.buf)
wr.buf = slicesutil.SetLength(wr.buf, bufLen+neededBufLen)
buf := wr.buf[:bufLen]
samplesDst = append(samplesDst, src.Samples...)
dst.Samples = samplesDst[len(samplesDst)-len(src.Samples):]
// Copy labels
for i := range labelsSrc {
dstLabel := &labelsDst[i]
srcLabel := &labelsSrc[i]
wr.samples = samplesDst
wr.labels = labelsDst
bufLen := len(buf)
buf = append(buf, srcLabel.Name...)
dstLabel.Name = bytesutil.ToUnsafeString(buf[bufLen:])
bufLen = len(buf)
buf = append(buf, srcLabel.Value...)
dstLabel.Value = bytesutil.ToUnsafeString(buf[bufLen:])
}
wr.buf = buf
dst.Labels = labelsDst
// Copy samples
samplesLen := len(wr.samples)
wr.samples = append(wr.samples, src.Samples...)
dst.Samples = wr.samples[samplesLen:]
}
// marshalConcurrency limits the maximum number of concurrent workers, which marshal and compress WriteRequest.


@ -19,10 +19,10 @@ var (
relabelConfigPathGlobal = flag.String("remoteWrite.relabelConfig", "", "Optional path to file with relabeling configs, which are applied "+
"to all the metrics before sending them to -remoteWrite.url. See also -remoteWrite.urlRelabelConfig. "+
"The path can point either to local file or to http url. "+
"See https://docs.victoriametrics.com/vmagent/#relabeling")
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#relabeling")
relabelConfigPaths = flagutil.NewArrayString("remoteWrite.urlRelabelConfig", "Optional path to relabel configs for the corresponding -remoteWrite.url. "+
"See also -remoteWrite.relabelConfig. The path can point either to local file or to http url. "+
"See https://docs.victoriametrics.com/vmagent/#relabeling")
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#relabeling")
usePromCompatibleNaming = flag.Bool("usePromCompatibleNaming", false, "Whether to replace characters unsupported by Prometheus with underscores "+
"in the ingested metric names and label names. For example, foo.bar{a.b='c'} is transformed into foo_bar{a_b='c'} during data ingestion if this flag is set. "+


@ -6,7 +6,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
)
func TestApplyRelabeling(t *testing.T) {
@ -63,7 +63,7 @@ func TestAppendExtraLabels(t *testing.T) {
func parseSeries(data string) []prompbmarshal.TimeSeries {
var tss []prompbmarshal.TimeSeries
tss = append(tss, prompbmarshal.TimeSeries{
Labels: promutils.MustNewLabelsFromString(data).GetLabels(),
Labels: promutil.MustNewLabelsFromString(data).GetLabels(),
})
return tss
}


@ -15,6 +15,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bloomfilter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/consistenthash"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
@ -25,7 +26,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/ratelimiter"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/timeserieslimits"
@ -41,20 +42,19 @@ var (
enableMultitenantHandlers = flag.Bool("enableMultitenantHandlers", false, "Whether to process incoming data via multitenant insert handlers according to "+
"https://docs.victoriametrics.com/cluster-victoriametrics/#url-format . By default incoming data is processed via single-node insert handlers "+
"according to https://docs.victoriametrics.com/#how-to-import-time-series-data ."+
"See https://docs.victoriametrics.com/vmagent/#multitenancy for details")
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#multitenancy for details")
shardByURL = flag.Bool("remoteWrite.shardByURL", false, "Whether to shard outgoing series across all the remote storage systems enumerated via -remoteWrite.url . "+
"By default the data is replicated across all the -remoteWrite.url . See https://docs.victoriametrics.com/vmagent/#sharding-among-remote-storages . "+
shardByURL = flag.Bool("remoteWrite.shardByURL", false, "Whether to shard outgoing series across all the remote storage systems enumerated via -remoteWrite.url. "+
"By default the data is replicated across all the -remoteWrite.url . See https://docs.victoriametrics.com/victoriametrics/vmagent/#sharding-among-remote-storages . "+
"See also -remoteWrite.shardByURLReplicas")
shardByURLReplicas = flag.Int("remoteWrite.shardByURLReplicas", 1, "How many copies of data to make among remote storage systems enumerated via -remoteWrite.url "+
"when -remoteWrite.shardByURL is set. See https://docs.victoriametrics.com/vmagent/#sharding-among-remote-storages")
"when -remoteWrite.shardByURL is set. See https://docs.victoriametrics.com/victoriametrics/vmagent/#sharding-among-remote-storages")
shardByURLLabels = flagutil.NewArrayString("remoteWrite.shardByURL.labels", "Optional list of labels, which must be used for sharding outgoing samples "+
"among remote storage systems if -remoteWrite.shardByURL command-line flag is set. By default all the labels are used for sharding in order to gain "+
"even distribution of series over the specified -remoteWrite.url systems. See also -remoteWrite.shardByURL.ignoreLabels")
shardByURLIgnoreLabels = flagutil.NewArrayString("remoteWrite.shardByURL.ignoreLabels", "Optional list of labels, which must be ignored when sharding outgoing samples "+
"among remote storage systems if -remoteWrite.shardByURL command-line flag is set. By default all the labels are used for sharding in order to gain "+
"even distribution of series over the specified -remoteWrite.url systems. See also -remoteWrite.shardByURL.labels")
tmpDataPath = flag.String("remoteWrite.tmpDataPath", "vmagent-remotewrite-data", "Path to directory for storing pending data, which isn't sent to the configured -remoteWrite.url . "+
"See also -remoteWrite.maxDiskUsagePerURL and -remoteWrite.disableOnDiskQueue")
keepDanglingQueues = flag.Bool("remoteWrite.keepDanglingQueues", false, "Keep persistent queues contents at -remoteWrite.tmpDataPath in case there are no matching -remoteWrite.url. "+
@ -81,28 +81,30 @@ var (
`For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}`+
`Enabled sorting for labels can slow down ingestion performance a bit`)
maxHourlySeries = flag.Int("remoteWrite.maxHourlySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last hour. "+
"Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/vmagent/#cardinality-limiter")
"Excess series are logged and dropped. This can be useful for limiting series cardinality. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxDailySeries = flag.Int("remoteWrite.maxDailySeries", 0, "The maximum number of unique series vmagent can send to remote storage systems during the last 24 hours. "+
"Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/vmagent/#cardinality-limiter")
"Excess series are logged and dropped. This can be useful for limiting series churn rate. See https://docs.victoriametrics.com/victoriametrics/vmagent/#cardinality-limiter")
maxIngestionRate = flag.Int("maxIngestionRate", 0, "The maximum number of samples vmagent can receive per second. Data ingestion is paused when the limit is exceeded. "+
"By default there are no limits on samples ingestion rate. See also -remoteWrite.rateLimit")
disableOnDiskQueue = flagutil.NewArrayBool("remoteWrite.disableOnDiskQueue", "Whether to disable storing pending data to -remoteWrite.tmpDataPath "+
"when the remote storage system at the corresponding -remoteWrite.url cannot keep up with the data ingestion rate. "+
"See https://docs.victoriametrics.com/vmagent#disabling-on-disk-persistence . See also -remoteWrite.dropSamplesOnOverload")
"See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence . See also -remoteWrite.dropSamplesOnOverload")
dropSamplesOnOverload = flag.Bool("remoteWrite.dropSamplesOnOverload", false, "Whether to drop samples when -remoteWrite.disableOnDiskQueue is set and if the samples "+
"cannot be pushed into the configured -remoteWrite.url systems in a timely manner. See https://docs.victoriametrics.com/vmagent#disabling-on-disk-persistence")
"cannot be pushed into the configured -remoteWrite.url systems in a timely manner. See https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence")
)
var (
// rwctxsGlobal contains statically populated entries when -remoteWrite.url is specified.
rwctxsGlobal []*remoteWriteCtx
rwctxsGlobal []*remoteWriteCtx
rwctxsGlobalIdx []int
rwctxConsistentHashGlobal *consistenthash.ConsistentHash
// ErrQueueFullHTTPRetry must be returned when TryPush() returns false.
ErrQueueFullHTTPRetry = &httpserver.ErrorWithStatusCode{
Err: fmt.Errorf("remote storage systems cannot keep up with the data ingestion rate; retry the request later " +
"or remove -remoteWrite.disableOnDiskQueue from vmagent command-line flags, so it could save pending data to -remoteWrite.tmpDataPath; " +
"see https://docs.victoriametrics.com/vmagent/#disabling-on-disk-persistence"),
"see https://docs.victoriametrics.com/victoriametrics/vmagent/#disabling-on-disk-persistence"),
StatusCode: http.StatusTooManyRequests,
}
@ -183,7 +185,7 @@ func Init() {
if len(*shardByURLLabels) > 0 && len(*shardByURLIgnoreLabels) > 0 {
logger.Fatalf("-remoteWrite.shardByURL.labels and -remoteWrite.shardByURL.ignoreLabels cannot be set simultaneously; " +
"see https://docs.victoriametrics.com/vmagent/#sharding-among-remote-storages")
"see https://docs.victoriametrics.com/victoriametrics/vmagent/#sharding-among-remote-storages")
}
shardByURLLabelsMap = newMapFromStrings(*shardByURLLabels)
shardByURLIgnoreLabelsMap = newMapFromStrings(*shardByURLIgnoreLabels)
@ -205,7 +207,7 @@ func Init() {
initStreamAggrConfigGlobal()
rwctxsGlobal = newRemoteWriteCtxs(*remoteWriteURLs)
initRemoteWriteCtxs(*remoteWriteURLs)
disableOnDiskQueues := []bool(*disableOnDiskQueue)
disableOnDiskQueueAny = slices.Contains(disableOnDiskQueues, true)
@ -290,7 +292,7 @@ var (
relabelConfigTimestamp = metrics.NewCounter(`vmagent_relabel_config_last_reload_success_timestamp_seconds`)
)
func newRemoteWriteCtxs(urls []string) []*remoteWriteCtx {
func initRemoteWriteCtxs(urls []string) {
if len(urls) == 0 {
logger.Panicf("BUG: urls must be non-empty")
}
@ -306,6 +308,7 @@ func newRemoteWriteCtxs(urls []string) []*remoteWriteCtx {
maxInmemoryBlocks = 2
}
rwctxs := make([]*remoteWriteCtx, len(urls))
rwctxIdx := make([]int, len(urls))
for i, remoteWriteURLRaw := range urls {
remoteWriteURL, err := url.Parse(remoteWriteURLRaw)
if err != nil {
@ -316,8 +319,19 @@ func newRemoteWriteCtxs(urls []string) []*remoteWriteCtx {
sanitizedURL = fmt.Sprintf("%d:%s", i+1, remoteWriteURL)
}
rwctxs[i] = newRemoteWriteCtx(i, remoteWriteURL, maxInmemoryBlocks, sanitizedURL)
rwctxIdx[i] = i
}
return rwctxs
if *shardByURL {
consistentHashNodes := make([]string, 0, len(urls))
for i, url := range urls {
consistentHashNodes = append(consistentHashNodes, fmt.Sprintf("%d:%s", i+1, url))
}
rwctxConsistentHashGlobal = consistenthash.NewConsistentHash(consistentHashNodes, 0)
}
rwctxsGlobal = rwctxs
rwctxsGlobalIdx = rwctxIdx
}
var (
@ -501,6 +515,10 @@ func tryPush(at *auth.Token, wr *prompbmarshal.WriteRequest, forceDropSamplesOnF
return true
}
// getEligibleRemoteWriteCtxs checks whether writes to the configured remote storage systems are blocked and
// returns only the unblocked rwctxs.
//
// calculateHealthyRwctxIdx relies on the returned rwctxs being ordered by ascending index.
func getEligibleRemoteWriteCtxs(tss []prompbmarshal.TimeSeries, forceDropSamplesOnFailure bool) ([]*remoteWriteCtx, bool) {
if !disableOnDiskQueueAny {
return rwctxsGlobal, true
@ -517,6 +535,12 @@ func getEligibleRemoteWriteCtxs(tss []prompbmarshal.TimeSeries, forceDropSamples
return nil, false
}
rowsCount := getRowsCount(tss)
if *shardByURL {
// TODO: when shardByURL is enabled, the following metrics won't be 100% accurate, because vmagent doesn't know yet
// which rwctx the data will be pushed to. Assume the hashing algorithm is fair and distributes
// data evenly across all rwctxs.
rowsCount = rowsCount / len(rwctxsGlobal)
}
rwctx.rowsDroppedOnPushFailure.Add(rowsCount)
}
}
@ -528,6 +552,7 @@ func pushToRemoteStoragesTrackDropped(tss []prompbmarshal.TimeSeries) {
if len(rwctxs) == 0 {
return
}
if !tryPushBlockToRemoteStorages(rwctxs, tss, true) {
logger.Panicf("BUG: tryPushBlockToRemoteStorages() must return true when forceDropSamplesOnFailure=true")
}
@ -578,42 +603,7 @@ func tryShardingBlockAmongRemoteStorages(rwctxs []*remoteWriteCtx, tssBlock []pr
defer putTSSShards(x)
shards := x.shards
tmpLabels := promutils.GetLabels()
for _, ts := range tssBlock {
hashLabels := ts.Labels
if len(shardByURLLabelsMap) > 0 {
hashLabels = tmpLabels.Labels[:0]
for _, label := range ts.Labels {
if _, ok := shardByURLLabelsMap[label.Name]; ok {
hashLabels = append(hashLabels, label)
}
}
tmpLabels.Labels = hashLabels
} else if len(shardByURLIgnoreLabelsMap) > 0 {
hashLabels = tmpLabels.Labels[:0]
for _, label := range ts.Labels {
if _, ok := shardByURLIgnoreLabelsMap[label.Name]; !ok {
hashLabels = append(hashLabels, label)
}
}
tmpLabels.Labels = hashLabels
}
h := getLabelsHash(hashLabels)
idx := h % uint64(len(shards))
i := 0
for {
shards[idx] = append(shards[idx], ts)
i++
if i >= replicas {
break
}
idx++
if idx >= uint64(len(shards)) {
idx = 0
}
}
}
promutils.PutLabels(tmpLabels)
shardAmountRemoteWriteCtx(tssBlock, shards, rwctxs, replicas)
// Push sharded samples to remote storage systems in parallel in order to reduce
// the time needed for sending the data to multiple remote storage systems.
@ -636,6 +626,86 @@ func tryShardingBlockAmongRemoteStorages(rwctxs []*remoteWriteCtx, tssBlock []pr
return !anyPushFailed.Load()
}
// calculateHealthyRwctxIdx returns the indexes in rwctxsGlobal of the healthy rwctxs, plus the indexes of the unhealthy ones.
// It relies on the rwctxs in healthyRwctxs being in ascending order, as appended by getEligibleRemoteWriteCtxs.
func calculateHealthyRwctxIdx(healthyRwctxs []*remoteWriteCtx) ([]int, []int) {
// fast path: all rwctxs are healthy.
if len(healthyRwctxs) == len(rwctxsGlobal) {
return rwctxsGlobalIdx, nil
}
unhealthyIdx := make([]int, 0, len(rwctxsGlobal))
healthyIdx := make([]int, 0, len(rwctxsGlobal))
var i int
for j := range rwctxsGlobal {
if i < len(healthyRwctxs) && rwctxsGlobal[j].idx == healthyRwctxs[i].idx {
healthyIdx = append(healthyIdx, j)
i++
} else {
unhealthyIdx = append(unhealthyIdx, j)
}
}
return healthyIdx, unhealthyIdx
}
// shardAmountRemoteWriteCtx distributes time series among shards by consistent hashing.
func shardAmountRemoteWriteCtx(tssBlock []prompbmarshal.TimeSeries, shards [][]prompbmarshal.TimeSeries, rwctxs []*remoteWriteCtx, replicas int) {
tmpLabels := promutil.GetLabels()
defer promutil.PutLabels(tmpLabels)
healthyIdx, unhealthyIdx := calculateHealthyRwctxIdx(rwctxs)
// shardsIdxMap maps a rwctxs idx to its shard idx.
// rwctxConsistentHashGlobal tells which rwctxs idx a time series should be written to,
// and the time series is then appended to the shard with the matching shard idx.
shardsIdxMap := make(map[int]int, len(healthyIdx))
for idx, rwctxsIdx := range healthyIdx {
shardsIdxMap[rwctxsIdx] = idx
}
for _, ts := range tssBlock {
hashLabels := ts.Labels
if len(shardByURLLabelsMap) > 0 {
hashLabels = tmpLabels.Labels[:0]
for _, label := range ts.Labels {
if _, ok := shardByURLLabelsMap[label.Name]; ok {
hashLabels = append(hashLabels, label)
}
}
tmpLabels.Labels = hashLabels
} else if len(shardByURLIgnoreLabelsMap) > 0 {
hashLabels = tmpLabels.Labels[:0]
for _, label := range ts.Labels {
if _, ok := shardByURLIgnoreLabelsMap[label.Name]; !ok {
hashLabels = append(hashLabels, label)
}
}
tmpLabels.Labels = hashLabels
}
h := getLabelsHash(hashLabels)
// Get the rwctxIdx through consistent hashing and then map it to the index in shards.
// The rwctxIdx is not always equal to the shardIdx, for example, when some rwctxs are unavailable.
rwctxIdx := rwctxConsistentHashGlobal.GetNodeIdx(h, unhealthyIdx)
shardIdx := shardsIdxMap[rwctxIdx]
replicated := 0
for {
shards[shardIdx] = append(shards[shardIdx], ts)
replicated++
if replicated >= replicas {
break
}
shardIdx++
if shardIdx >= len(shards) {
shardIdx = 0
}
}
}
}
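
To make the consistent-hashing flow above easier to follow, here is a small, hedged usage sketch. The import path, the NewConsistentHash and GetNodeIdx calls, and the "<idx+1>:<url>" node-name format are taken from this diff; the URLs, the hash value and the main wrapper are invented for illustration:

package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/consistenthash"
)

func main() {
	// Node names follow the "<idx+1>:<url>" format built in initRemoteWriteCtxs above.
	nodes := []string{
		"1:http://vm-1:8428/api/v1/write",
		"2:http://vm-2:8428/api/v1/write",
		"3:http://vm-3:8428/api/v1/write",
	}
	ch := consistenthash.NewConsistentHash(nodes, 0)

	// Pretend a time series hashed to this value (vmagent derives it via getLabelsHash).
	h := uint64(0x9e3779b97f4a7c15)

	// All nodes healthy: the series maps to a stable node index.
	fmt.Println(ch.GetNodeIdx(h, nil))

	// Node with index 1 is unhealthy: only series that mapped to it are re-routed.
	fmt.Println(ch.GetNodeIdx(h, []int{1}))
}
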
type tssShards struct {
shards [][]prompbmarshal.TimeSeries
}
@ -807,7 +877,7 @@ func newRemoteWriteCtx(argIdx int, remoteWriteURL *url.URL, maxInmemoryBlocks in
}
pss := make([]*pendingSeries, pssLen)
for i := range pss {
pss[i] = newPendingSeries(fq, c.useVMProto, sf, rd)
pss[i] = newPendingSeries(fq, &c.useVMProto, sf, rd)
}
rwctx := &remoteWriteCtx{


@ -4,13 +4,17 @@ import (
"fmt"
"math"
"reflect"
"strconv"
"sync/atomic"
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/consistenthash"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/streamaggr"
"github.com/VictoriaMetrics/metrics"
)
@ -68,7 +72,9 @@ func TestRemoteWriteContext_TryPush_ImmutableTimeseries(t *testing.T) {
allRelabelConfigs.Store(rcs)
pss := make([]*pendingSeries, 1)
pss[0] = newPendingSeries(nil, true, 0, 100)
isVMProto := &atomic.Bool{}
isVMProto.Store(true)
pss[0] = newPendingSeries(nil, isVMProto, 0, 100)
rwctx := &remoteWriteCtx{
idx: 0,
streamAggrKeepInput: keepInput,
@ -171,3 +177,173 @@ metric{env="dev"} 15
metric{env="bar"} 25
`)
}
func TestShardAmountRemoteWriteCtx(t *testing.T) {
// 1. distribute 100000 series to n nodes.
// 2. remove the last node from the healthy list.
// 3. distribute the same 100000 series to the remaining (n-1) nodes again.
// 4. check the change rate of active time series:
//    the change rate must be < (3/total nodes), e.g. +30% if you have 10 nodes.
f := func(remoteWriteCount int, healthyIdx []int, replicas int) {
t.Helper()
defer func() {
rwctxsGlobal = nil
rwctxsGlobalIdx = nil
rwctxConsistentHashGlobal = nil
}()
rwctxsGlobal = make([]*remoteWriteCtx, remoteWriteCount)
rwctxsGlobalIdx = make([]int, remoteWriteCount)
rwctxs := make([]*remoteWriteCtx, 0, len(healthyIdx))
for i := range remoteWriteCount {
rwCtx := &remoteWriteCtx{
idx: i,
}
rwctxsGlobalIdx[i] = i
if i >= len(healthyIdx) {
rwctxsGlobal[i] = rwCtx
continue
}
hIdx := healthyIdx[i]
if hIdx != i {
rwctxs = append(rwctxs, &remoteWriteCtx{
idx: hIdx,
})
} else {
rwctxs = append(rwctxs, rwCtx)
}
rwctxsGlobal[i] = rwCtx
}
seriesCount := 100000
// build 100000 series
tssBlock := make([]prompbmarshal.TimeSeries, 0, seriesCount)
for i := 0; i < seriesCount; i++ {
tssBlock = append(tssBlock, prompbmarshal.TimeSeries{
Labels: []prompbmarshal.Label{
{
Name: "label",
Value: strconv.Itoa(i),
},
},
Samples: []prompbmarshal.Sample{
{
Timestamp: 0,
Value: 0,
},
},
})
}
// build consistent hash for the remote write contexts
// build active time series set
nodes := make([]string, 0, remoteWriteCount)
activeTimeSeriesByNodes := make([]map[string]struct{}, remoteWriteCount)
for i := 0; i < remoteWriteCount; i++ {
nodes = append(nodes, fmt.Sprintf("node%d", i))
activeTimeSeriesByNodes[i] = make(map[string]struct{})
}
rwctxConsistentHashGlobal = consistenthash.NewConsistentHash(nodes, 0)
// create shards
x := getTSSShards(len(rwctxs))
shards := x.shards
// execute
shardAmountRemoteWriteCtx(tssBlock, shards, rwctxs, replicas)
for i, nodeIdx := range healthyIdx {
for _, ts := range shards[i] {
// add it to node[nodeIdx]'s active time series
activeTimeSeriesByNodes[nodeIdx][prompbmarshal.LabelsToString(ts.Labels)] = struct{}{}
}
}
totalActiveTimeSeries := 0
for _, activeTimeSeries := range activeTimeSeriesByNodes {
totalActiveTimeSeries += len(activeTimeSeries)
}
avgActiveTimeSeries1 := totalActiveTimeSeries / remoteWriteCount
putTSSShards(x)
// removed last node
rwctxs = rwctxs[:len(rwctxs)-1]
healthyIdx = healthyIdx[:len(healthyIdx)-1]
x = getTSSShards(len(rwctxs))
shards = x.shards
// execute
shardAmountRemoteWriteCtx(tssBlock, shards, rwctxs, replicas)
for i, nodeIdx := range healthyIdx {
for _, ts := range shards[i] {
// add it to node[nodeIdx]'s active time series
activeTimeSeriesByNodes[nodeIdx][prompbmarshal.LabelsToString(ts.Labels)] = struct{}{}
}
}
totalActiveTimeSeries = 0
for _, activeTimeSeries := range activeTimeSeriesByNodes {
totalActiveTimeSeries += len(activeTimeSeries)
}
avgActiveTimeSeries2 := totalActiveTimeSeries / remoteWriteCount
changed := math.Abs(float64(avgActiveTimeSeries2-avgActiveTimeSeries1) / float64(avgActiveTimeSeries1))
threshold := 3 / float64(remoteWriteCount)
if changed >= threshold {
t.Fatalf("average active time series before: %d, after: %d, changed: %.2f. threshold: %.2f", avgActiveTimeSeries1, avgActiveTimeSeries2, changed, threshold)
}
}
f(5, []int{0, 1, 2, 3, 4}, 1)
f(5, []int{0, 1, 2, 3, 4}, 2)
f(10, []int{0, 1, 2, 3, 4, 5, 6, 7, 9}, 1)
f(10, []int{0, 1, 2, 3, 4, 5, 6, 7, 9}, 3)
}
func TestCalculateHealthyRwctxIdx(t *testing.T) {
f := func(total int, healthyIdx []int, unhealthyIdx []int) {
t.Helper()
healthyMap := make(map[int]bool)
for _, idx := range healthyIdx {
healthyMap[idx] = true
}
rwctxsGlobal = make([]*remoteWriteCtx, total)
rwctxsGlobalIdx = make([]int, total)
rwctxs := make([]*remoteWriteCtx, 0, len(healthyIdx))
for i := range rwctxsGlobal {
rwctx := &remoteWriteCtx{idx: i}
rwctxsGlobal[i] = rwctx
if healthyMap[i] {
rwctxs = append(rwctxs, rwctx)
}
rwctxsGlobalIdx[i] = i
}
gotHealthyIdx, gotUnhealthyIdx := calculateHealthyRwctxIdx(rwctxs)
if !reflect.DeepEqual(healthyIdx, gotHealthyIdx) {
t.Errorf("calculateHealthyRwctxIdx want healthyIdx = %v, got %v", healthyIdx, gotHealthyIdx)
}
if !reflect.DeepEqual(unhealthyIdx, gotUnhealthyIdx) {
t.Errorf("calculateHealthyRwctxIdx want unhealthyIdx = %v, got %v", unhealthyIdx, gotUnhealthyIdx)
}
}
f(5, []int{0, 1, 2, 3, 4}, nil)
f(5, []int{0, 1, 2, 4}, []int{3})
f(5, []int{2, 4}, []int{0, 1, 3})
f(5, []int{0, 2, 4}, []int{1, 3})
f(5, []int{}, []int{0, 1, 2, 3, 4})
f(5, []int{4}, []int{0, 1, 2, 3})
f(1, []int{0}, nil)
f(1, []int{}, []int{0})
}


@ -56,7 +56,7 @@ var (
"See https://docs.victoriametrics.com/stream-aggregation/#ignoring-old-samples")
streamAggrIgnoreFirstIntervals = flagutil.NewArrayInt("remoteWrite.streamAggr.ignoreFirstIntervals", 0, "Number of aggregation intervals to skip after the start "+
"for the corresponding -remoteWrite.streamAggr.config at the corresponding -remoteWrite.url. Increase this value if "+
"you observe incorrect aggregation results after vmagent restarts. It could be caused by receiving bufferred delayed data from clients pushing data into the vmagent. "+
"you observe incorrect aggregation results after vmagent restarts. It could be caused by receiving buffered delayed data from clients pushing data into the vmagent. "+
"See https://docs.victoriametrics.com/stream-aggregation/#ignore-aggregation-intervals-on-start")
streamAggrDropInputLabels = flagutil.NewArrayString("remoteWrite.streamAggr.dropInputLabels", "An optional list of labels to drop from samples "+
"before stream de-duplication and aggregation with -remoteWrite.streamAggr.config and -remoteWrite.streamAggr.dedupInterval at the corresponding -remoteWrite.url. "+
@ -131,7 +131,7 @@ func reloadStreamAggrConfigGlobal() {
func initStreamAggrConfigGlobal() {
sas, err := newStreamAggrConfigGlobal()
if err != nil {
logger.Fatalf("cannot initialize gloabl stream aggregators: %s", err)
logger.Fatalf("cannot initialize global stream aggregators: %s", err)
}
if sas != nil {
filePath := sas.FilePath()


@ -9,8 +9,8 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/protoparserutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport/stream"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/tenantmetrics"
"github.com/VictoriaMetrics/metrics"
@ -26,17 +26,17 @@ var (
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6
func InsertHandler(at *auth.Token, req *http.Request) error {
extraLabels, err := parserCommon.GetExtraLabels(req)
extraLabels, err := protoparserutil.GetExtraLabels(req)
if err != nil {
return err
}
isGzipped := req.Header.Get("Content-Encoding") == "gzip"
return stream.Parse(req.Body, isGzipped, func(rows []parser.Row) error {
encoding := req.Header.Get("Content-Encoding")
return stream.Parse(req.Body, encoding, func(rows []vmimport.Row) error {
return insertRows(at, rows, extraLabels)
})
}
func insertRows(at *auth.Token, rows []parser.Row, extraLabels []prompbmarshal.Label) error {
func insertRows(at *auth.Token, rows []vmimport.Row, extraLabels []prompbmarshal.Label) error {
ctx := common.GetPushCtx()
defer common.PutPushCtx(ctx)


@ -1,3 +1,3 @@
See vmalert-tool docs [here](https://docs.victoriametrics.com/vmalert-tool.html).
See vmalert-tool docs [here](https://docs.victoriametrics.com/victoriametrics/vmalert-tool/).
vmalert-tool docs can be edited at [docs/vmalert-tool.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/vmalert-tool.md).
vmalert-tool docs can be edited at [docs/vmalert-tool.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/victoriametrics/vmalert-tool.md).


@ -0,0 +1,8 @@
ARG base_image=non-existing
FROM $base_image
EXPOSE 8880
ENTRYPOINT ["/vmalert-tool-prod"]
ARG src_binary=non-existing
COPY $src_binary ./vmalert-tool-prod


@ -17,13 +17,13 @@ func main() {
app := &cli.App{
Name: "vmalert-tool",
Usage: "VMAlert command-line tool",
UsageText: "More info in https://docs.victoriametrics.com/vmalert-tool.html",
UsageText: "More info in https://docs.victoriametrics.com/victoriametrics/vmalert-tool/",
Version: buildinfo.Version,
Commands: []*cli.Command{
{
Name: "unittest",
Usage: "Run unittest for alerting and recording rules.",
UsageText: "More info in https://docs.victoriametrics.com/vmalert-tool.html#Unit-testing-for-rules",
UsageText: "More info in https://docs.victoriametrics.com/victoriametrics/vmalert-tool/#unit-testing-for-rules",
Flags: []cli.Flag{
&cli.StringSliceFlag{
Name: "files",


@ -1,15 +1,15 @@
package unittest
import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
)
// alertTestCase holds alert_rule_test cases defined in test file
type alertTestCase struct {
EvalTime *promutils.Duration `yaml:"eval_time"`
GroupName string `yaml:"groupname"`
Alertname string `yaml:"alertname"`
ExpAlerts []expAlert `yaml:"exp_alerts"`
EvalTime *promutil.Duration `yaml:"eval_time"`
GroupName string `yaml:"groupname"`
Alertname string `yaml:"alertname"`
ExpAlerts []expAlert `yaml:"exp_alerts"`
}
// expAlert holds exp_alerts defined in test file


@ -14,7 +14,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
"github.com/VictoriaMetrics/metricsql"
)
@ -41,7 +41,7 @@ func httpWrite(address string, r io.Reader) {
}
// writeInputSeries send input series to vmstorage and flush them
func writeInputSeries(input []series, interval *promutils.Duration, startStamp time.Time, dst string) error {
func writeInputSeries(input []series, interval *promutil.Duration, startStamp time.Time, dst string) error {
r := testutil.WriteRequest{}
var err error
r.Timeseries, err = parseInputSeries(input, interval, startStamp)
@ -56,7 +56,7 @@ func writeInputSeries(input []series, interval *promutils.Duration, startStamp t
return nil
}
func parseInputSeries(input []series, interval *promutils.Duration, startStamp time.Time) ([]testutil.TimeSeries, error) {
func parseInputSeries(input []series, interval *promutil.Duration, startStamp time.Time) ([]testutil.TimeSeries, error) {
var res []testutil.TimeSeries
for _, data := range input {
expr, err := metricsql.Parse(data.Series)


@ -5,7 +5,7 @@ import (
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
)
func TestParseInputValue_Failure(t *testing.T) {
@ -70,7 +70,7 @@ func TestParseInputValue_Success(t *testing.T) {
func TestParseInputSeries_Success(t *testing.T) {
f := func(input []series) {
t.Helper()
var interval promutils.Duration
var interval promutil.Duration
_, err := parseInputSeries(input, &interval, time.Now())
if err != nil {
t.Fatalf("expect to see no error: %v", err)
@ -86,7 +86,7 @@ func TestParseInputSeries_Success(t *testing.T) {
func TestParseInputSeries_Fail(t *testing.T) {
f := func(input []series) {
t.Helper()
var interval promutils.Duration
var interval promutil.Duration
_, err := parseInputSeries(input, &interval, time.Now())
if err == nil {
t.Fatalf("expect to see error: %v", err)


@ -10,15 +10,15 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/datasource"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
"github.com/VictoriaMetrics/metricsql"
)
// metricsqlTestCase holds metricsql_expr_test cases defined in test file
type metricsqlTestCase struct {
Expr string `yaml:"expr"`
EvalTime *promutils.Duration `yaml:"eval_time"`
ExpSamples []expSample `yaml:"exp_samples"`
Expr string `yaml:"expr"`
EvalTime *promutil.Duration `yaml:"eval_time"`
ExpSamples []expSample `yaml:"exp_samples"`
}
type expSample struct {
@ -95,7 +95,7 @@ Outer:
return
}
func durationToTime(pd *promutils.Duration) time.Time {
func durationToTime(pd *promutil.Duration) time.Time {
if pd == nil {
return time.Time{}
}


@ -36,7 +36,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
"github.com/VictoriaMetrics/metrics"
)
@ -182,7 +182,7 @@ func ruleUnitTest(filename string, content []byte, externalLabels map[string]str
if unitTestInp.EvaluationInterval.Duration() == 0 {
fmt.Println("evaluation_interval set to 1m by default")
unitTestInp.EvaluationInterval = &promutils.Duration{D: 1 * time.Minute}
unitTestInp.EvaluationInterval = &promutil.Duration{D: 1 * time.Minute}
}
groupOrderMap := make(map[string]int)
@ -312,7 +312,7 @@ func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]i
defer tearDown()
if tg.Interval == nil {
tg.Interval = promutils.NewDuration(evalInterval)
tg.Interval = promutil.NewDuration(evalInterval)
}
err := writeInputSeries(tg.InputSeries, tg.Interval, testStartTime, fmt.Sprintf("http://127.0.0.1:%s/api/v1/write", httpListenAddr))
if err != nil {
@ -472,15 +472,15 @@ func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]i
// unitTestFile holds the contents of a single unit test file
type unitTestFile struct {
RuleFiles []string `yaml:"rule_files"`
EvaluationInterval *promutils.Duration `yaml:"evaluation_interval"`
GroupEvalOrder []string `yaml:"group_eval_order"`
Tests []testGroup `yaml:"tests"`
RuleFiles []string `yaml:"rule_files"`
EvaluationInterval *promutil.Duration `yaml:"evaluation_interval"`
GroupEvalOrder []string `yaml:"group_eval_order"`
Tests []testGroup `yaml:"tests"`
}
// testGroup is a group of input series and test cases associated with it
type testGroup struct {
Interval *promutils.Duration `yaml:"interval"`
Interval *promutil.Duration `yaml:"interval"`
InputSeries []series `yaml:"input_series"`
AlertRuleTests []alertTestCase `yaml:"alert_rule_test"`
MetricsqlExprTests []metricsqlTestCase `yaml:"metricsql_expr_test"`


@ -74,7 +74,7 @@ test-vmalert:
go test -v -race -cover ./app/vmalert/notifier
go test -v -race -cover ./app/vmalert/config
go test -v -race -cover ./app/vmalert/remotewrite
go test -v -race -cover ./app/vmalert/utils
go test -v -race -cover ./app/vmalert/vmalertutil
run-vmalert: vmalert
./bin/vmalert -rule=app/vmalert/config/testdata/rules/rules2-good.rules \


@ -1,3 +1,3 @@
See vmalert docs [here](https://docs.victoriametrics.com/vmalert/).
See vmalert docs [here](https://docs.victoriametrics.com/victoriametrics/vmalert/).
vmalert docs can be edited at [docs/vmalert.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/vmalert.md).
vmalert docs can be edited at [docs/vmalert.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/victoriametrics/vmalert.md).


@ -2,7 +2,6 @@ package config
import (
"bytes"
"crypto/md5"
"flag"
"fmt"
"hash/fnv"
@ -11,15 +10,16 @@ import (
"sort"
"strings"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config/log"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envtemplate"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"gopkg.in/yaml.v2"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config/log"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/vmalertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envtemplate"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
)
var (
defaultRuleType = flag.String("rule.defaultRuleType", "prometheus", `Default type for rule expressions, can be overridden via "type" parameter on the group level, see https://docs.victoriametrics.com/vmalert/#groups. Supported values: "graphite", "prometheus" and "vlogs".`)
defaultRuleType = flag.String("rule.defaultRuleType", "prometheus", `Default type for rule expressions, can be overridden via "type" parameter on the group level, see https://docs.victoriametrics.com/victoriametrics/vmalert/#groups. Supported values: "graphite", "prometheus" and "vlogs".`)
)
// Group contains list of Rules grouped into
@ -27,15 +27,15 @@ var (
type Group struct {
Type Type `yaml:"type,omitempty"`
File string
Name string `yaml:"name"`
Interval *promutils.Duration `yaml:"interval,omitempty"`
EvalOffset *promutils.Duration `yaml:"eval_offset,omitempty"`
Name string `yaml:"name"`
Interval *promutil.Duration `yaml:"interval,omitempty"`
EvalOffset *promutil.Duration `yaml:"eval_offset,omitempty"`
// EvalDelay will adjust the `time` parameter of rule evaluation requests to compensate intentional query delay from datasource.
// see https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5155
EvalDelay *promutils.Duration `yaml:"eval_delay,omitempty"`
Limit int `yaml:"limit,omitempty"`
Rules []Rule `yaml:"rules"`
Concurrency int `yaml:"concurrency"`
EvalDelay *promutil.Duration `yaml:"eval_delay,omitempty"`
Limit int `yaml:"limit,omitempty"`
Rules []Rule `yaml:"rules"`
Concurrency int `yaml:"concurrency"`
// Labels is a set of label value pairs, that will be added to every rule.
// It has priority over the external labels.
Labels map[string]string `yaml:"labels"`
@ -67,7 +67,7 @@ func (g *Group) UnmarshalYAML(unmarshal func(any) error) error {
if g.Type.Get() == "" {
g.Type = NewRawType(*defaultRuleType)
}
h := md5.New()
h := fnv.New64a()
h.Write(b)
g.Checksum = fmt.Sprintf("%x", h.Sum(nil))
return nil
@ -135,15 +135,15 @@ func (g *Group) Validate(validateTplFn ValidateTplFn, validateExpressions bool)
// recording rule or alerting rule.
type Rule struct {
ID uint64
Record string `yaml:"record,omitempty"`
Alert string `yaml:"alert,omitempty"`
Expr string `yaml:"expr"`
For *promutils.Duration `yaml:"for,omitempty"`
Record string `yaml:"record,omitempty"`
Alert string `yaml:"alert,omitempty"`
Expr string `yaml:"expr"`
For *promutil.Duration `yaml:"for,omitempty"`
// Alert will continue firing for this long even when the alerting expression no longer has results.
KeepFiringFor *promutils.Duration `yaml:"keep_firing_for,omitempty"`
Labels map[string]string `yaml:"labels,omitempty"`
Annotations map[string]string `yaml:"annotations,omitempty"`
Debug bool `yaml:"debug,omitempty"`
KeepFiringFor *promutil.Duration `yaml:"keep_firing_for,omitempty"`
Labels map[string]string `yaml:"labels,omitempty"`
Annotations map[string]string `yaml:"annotations,omitempty"`
Debug bool `yaml:"debug,omitempty"`
// UpdateEntriesLimit defines max number of rule's state updates stored in memory.
// Overrides `-rule.updateEntriesLimit`.
UpdateEntriesLimit *int `yaml:"update_entries_limit,omitempty"`
@ -265,7 +265,7 @@ func Parse(pathPatterns []string, validateTplFn ValidateTplFn, validateExpressio
}
func parse(files map[string][]byte, validateTplFn ValidateTplFn, validateExpressions bool) ([]Group, error) {
errGroup := new(utils.ErrGroup)
errGroup := new(vmalertutil.ErrGroup)
var groups []Group
for file, data := range files {
uniqueGroups := map[string]struct{}{}


@ -11,7 +11,7 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/templates"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutil"
"gopkg.in/yaml.v2"
)
@ -162,7 +162,7 @@ func TestGroupValidate_Failure(t *testing.T) {
Rules: []Rule{
{
Expr: "sum(up == 0 ) by (host)",
For: promutils.NewDuration(10 * time.Millisecond),
For: promutil.NewDuration(10 * time.Millisecond),
},
{
Expr: "sumSeries(time('foo.bar',10))",
@ -172,13 +172,13 @@ func TestGroupValidate_Failure(t *testing.T) {
f(&Group{
Name: "negative interval",
Interval: promutils.NewDuration(-1),
Interval: promutil.NewDuration(-1),
}, false, "interval shouldn't be lower than 0")
f(&Group{
Name: "wrong eval_offset",
Interval: promutils.NewDuration(time.Minute),
EvalOffset: promutils.NewDuration(2 * time.Minute),
Interval: promutil.NewDuration(time.Minute),
EvalOffset: promutil.NewDuration(2 * time.Minute),
}, false, "eval_offset should be smaller than interval")
f(&Group{
@ -309,7 +309,7 @@ func TestGroupValidate_Failure(t *testing.T) {
Record: "r1",
ID: 1,
Expr: "sumSeries(time('foo.bar',10))",
For: promutils.NewDuration(10 * time.Millisecond),
For: promutil.NewDuration(10 * time.Millisecond),
},
{
Record: "r2",
@ -326,7 +326,7 @@ func TestGroupValidate_Failure(t *testing.T) {
{
Record: "r1",
Expr: "sum(up == 0 ) by (host)",
For: promutils.NewDuration(10 * time.Millisecond),
For: promutil.NewDuration(10 * time.Millisecond),
},
},
}, true, "bad LogsQL expr")
@ -338,7 +338,7 @@ func TestGroupValidate_Failure(t *testing.T) {
{
Record: "r1",
Expr: "* | stats by (path) count()",
For: promutils.NewDuration(10 * time.Millisecond),
For: promutil.NewDuration(10 * time.Millisecond),
},
},
}, true, "bad prometheus expr")
@ -488,7 +488,7 @@ func TestHashRule_Equal(t *testing.T) {
f(Rule{Alert: "record", Expr: "up == 1"}, Rule{Alert: "record", Expr: "up == 1"})
f(Rule{
Alert: "alert", Expr: "up == 1", For: promutils.NewDuration(time.Minute), KeepFiringFor: promutils.NewDuration(time.Minute),
Alert: "alert", Expr: "up == 1", For: promutil.NewDuration(time.Minute), KeepFiringFor: promutil.NewDuration(time.Minute),
}, Rule{Alert: "alert", Expr: "up == 1"})
}


@ -9,7 +9,7 @@ groups:
annotations:
summary: "{{ }}"
description: "{{$labels}}"
- alert: UnkownAnnotationsFunction
- alert: UnknownAnnotationsFunction
for: 5m
expr: vm_rows > 0
labels:


@ -1,7 +1,7 @@
groups:
- name: group
rules:
- alert: UnkownLabelFunction
- alert: UnknownLabelFunction
for: 5m
expr: vm_rows > 0
labels:


@ -35,6 +35,8 @@ type promResponse struct {
Stats struct {
SeriesFetched *string `json:"seriesFetched,omitempty"`
} `json:"stats,omitempty"`
// IsPartial is supported by VictoriaMetrics
IsPartial *bool `json:"isPartial,omitempty"`
}
// see https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries
@ -209,7 +211,7 @@ func parsePrometheusResponse(req *http.Request, resp *http.Response) (res Result
if err != nil {
return res, err
}
res = Result{Data: ms}
res = Result{Data: ms, IsPartial: r.IsPartial}
if r.Stats.SeriesFetched != nil {
intV, err := strconv.Atoi(*r.Stats.SeriesFetched)
if err != nil {


@ -12,7 +12,7 @@ import (
"testing"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/vmalertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
)
@ -72,12 +72,14 @@ func TestVMInstantQuery(t *testing.T) {
w.Write([]byte(`{"status":"success","data":{"resultType":"scalar","result":[1583786142, "1"]}}`))
case 7:
w.Write([]byte(`{"status":"success","data":{"resultType":"scalar","result":[1583786142, "1"]},"stats":{"seriesFetched": "42"}}`))
case 8:
w.Write([]byte(`{"status":"success", "isPartial":true, "data":{"resultType":"scalar","result":[1583786142, "1"]}}`))
}
})
mux.HandleFunc("/render", func(w http.ResponseWriter, _ *http.Request) {
c++
switch c {
case 8:
case 9:
w.Write([]byte(`[{"target":"constantLine(10)","tags":{"name":"constantLine(10)"},"datapoints":[[10,1611758343],[10,1611758373],[10,1611758403]]}]`))
}
})
@ -100,9 +102,9 @@ func TestVMInstantQuery(t *testing.T) {
t.Fatalf("failed to parse 'time' query param %q: %s", timeParam, err)
}
switch c {
case 9:
w.Write([]byte("[]"))
case 10:
w.Write([]byte("[]"))
case 11:
w.Write([]byte(`{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"total","foo":"bar"},"value":[1583786142,"13763"]},{"metric":{"__name__":"total","foo":"baz"},"value":[1583786140,"2000"]}]}}`))
}
})
@ -203,10 +205,18 @@ func TestVMInstantQuery(t *testing.T) {
*res.SeriesFetched)
}
res, _, err = pq.Query(ctx, vmQuery, ts) // 8
if err != nil {
t.Fatalf("unexpected %s", err)
}
if res.IsPartial != nil && !*res.IsPartial {
t.Fatalf("unexpected metric isPartial want %+v", true)
}
// test graphite
gq := s.BuildWithParams(QuerierParams{DataSourceType: string(datasourceGraphite)})
res, _, err = gq.Query(ctx, queryRender, ts) // 8 - graphite
res, _, err = gq.Query(ctx, queryRender, ts) // 9 - graphite
if err != nil {
t.Fatalf("unexpected %s", err)
}
@ -226,9 +236,9 @@ func TestVMInstantQuery(t *testing.T) {
vlogs := datasourceVLogs
pq = s.BuildWithParams(QuerierParams{DataSourceType: string(vlogs), EvaluationInterval: 15 * time.Second})
expErr(vlogsQuery, "error parsing response") // 9
expErr(vlogsQuery, "error parsing response") // 10
res, _, err = pq.Query(ctx, vlogsQuery, ts) // 10
res, _, err = pq.Query(ctx, vlogsQuery, ts) // 11
if err != nil {
t.Fatalf("unexpected %s", err)
}
@ -753,7 +763,7 @@ func TestHeaders(t *testing.T) {
// basic auth
f(func() *Client {
cfg, err := utils.AuthConfig(utils.WithBasicAuth("foo", "bar", ""))
cfg, err := vmalertutil.AuthConfig(vmalertutil.WithBasicAuth("foo", "bar", ""))
if err != nil {
t.Fatalf("Error get auth config: %s", err)
}
@ -766,7 +776,7 @@ func TestHeaders(t *testing.T) {
// bearer auth
f(func() *Client {
cfg, err := utils.AuthConfig(utils.WithBearer("foo", ""))
cfg, err := vmalertutil.AuthConfig(vmalertutil.WithBearer("foo", ""))
if err != nil {
t.Fatalf("Error get auth config: %s", err)
}
@ -798,7 +808,7 @@ func TestHeaders(t *testing.T) {
// custom header overrides basic auth
f(func() *Client {
cfg, err := utils.AuthConfig(utils.WithBasicAuth("foo", "bar", ""))
cfg, err := vmalertutil.AuthConfig(vmalertutil.WithBasicAuth("foo", "bar", ""))
if err != nil {
t.Fatalf("Error get auth config: %s", err)
}


@ -34,6 +34,9 @@ type Result struct {
// If nil, then this feature is not supported by the datasource.
// SeriesFetched is supported by VictoriaMetrics since v1.90.
SeriesFetched *int
// IsPartial is used by VictoriaMetrics to indicate
// whether response data is partial.
IsPartial *bool
}
// QuerierBuilder builds Querier with given params.


@ -10,8 +10,9 @@ import (
// FakeQuerier is a mock querier that returns predefined results and error messages
type FakeQuerier struct {
sync.Mutex
metrics []Metric
err error
metrics []Metric
err error
isPartial *bool
}
// SetErr sets query error message
@ -21,11 +22,19 @@ func (fq *FakeQuerier) SetErr(err error) {
fq.Unlock()
}
// SetPartialResponse marks query response as partial
func (fq *FakeQuerier) SetPartialResponse(partial bool) {
fq.Lock()
fq.isPartial = &partial
fq.Unlock()
}
// Reset resets the querier's error message and results
func (fq *FakeQuerier) Reset() {
fq.Lock()
fq.err = nil
fq.metrics = fq.metrics[:0]
fq.isPartial = nil
fq.Unlock()
}
@ -57,7 +66,7 @@ func (fq *FakeQuerier) Query(_ context.Context, _ string, _ time.Time) (Result,
cp := make([]Metric, len(fq.metrics))
copy(cp, fq.metrics)
req, _ := http.NewRequest(http.MethodPost, "foo.com", nil)
return Result{Data: cp}, req, nil
return Result{Data: cp, IsPartial: fq.isPartial}, req, nil
}
// FakeQuerierWithRegistry can store different results for different query expr


@ -8,10 +8,10 @@ import (
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/vmalertutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/netutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/httputil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promauth"
)
var (
@ -79,15 +79,13 @@ type Param struct {
// Provided extraParams will be added as GET params for
// each request.
func Init(extraParams url.Values) (QuerierBuilder, error) {
if *addr == "" {
return nil, fmt.Errorf("datasource.url is empty")
if err := httputil.CheckURL(*addr); err != nil {
return nil, fmt.Errorf("invalid -datasource.url: %w", err)
}
tr, err := httputils.Transport(*addr, *tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify)
tr, err := promauth.NewTLSTransport(*tlsCertFile, *tlsKeyFile, *tlsCAFile, *tlsServerName, *tlsInsecureSkipVerify, "vmalert_datasource")
if err != nil {
return nil, fmt.Errorf("failed to create transport for -datasource.url=%q: %w", *addr, err)
}
tr.DialContext = netutil.NewStatDialFunc("vmalert_datasource")
tr.DisableKeepAlives = *disableKeepAlive
tr.MaxIdleConnsPerHost = *maxIdleConnections
if tr.MaxIdleConns != 0 && tr.MaxIdleConns < tr.MaxIdleConnsPerHost {
@ -106,11 +104,11 @@ func Init(extraParams url.Values) (QuerierBuilder, error) {
if err != nil {
return nil, fmt.Errorf("cannot parse JSON for -datasource.oauth2.endpointParams=%s: %w", *oauth2EndpointParams, err)
}
authCfg, err := utils.AuthConfig(
utils.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile),
utils.WithBearer(*bearerToken, *bearerTokenFile),
utils.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes, endpointParams),
utils.WithHeaders(*headers))
authCfg, err := vmalertutil.AuthConfig(
vmalertutil.WithBasicAuth(*basicAuthUsername, *basicAuthPassword, *basicAuthPasswordFile),
vmalertutil.WithBearer(*bearerToken, *bearerTokenFile),
vmalertutil.WithOAuth(*oauth2ClientID, *oauth2ClientSecret, *oauth2ClientSecretFile, *oauth2TokenURL, *oauth2Scopes, endpointParams),
vmalertutil.WithHeaders(*headers))
if err != nil {
return nil, fmt.Errorf("failed to configure auth: %w", err)
}


@ -44,7 +44,7 @@ Enterprise version of vmalert supports S3 and GCS paths to rules.
For example: gs://bucket/path/to/rules, s3://bucket/path/to/rules
S3 and GCS paths support only matching by prefix, e.g. s3://bucket/dir/rule_ matches
all files with prefix rule_ in folder dir.
See https://docs.victoriametrics.com/vmalert/#reading-rules-from-object-storage
See https://docs.victoriametrics.com/victoriametrics/vmalert/#reading-rules-from-object-storage
`)
ruleTemplatesPath = flagutil.NewArrayString("rule.templates", `Path or glob pattern to location with go template definitions `+
@ -71,7 +71,7 @@ absolute path to all .tpl files in root.
externalURL = flag.String("external.url", "", "External URL is used as alert's source for sent alerts to the notifier. By default, hostname is used as address.")
externalAlertSource = flag.String("external.alert.source", "", `External Alert Source allows to override the Source link for alerts sent to AlertManager `+
`for cases where you want to build a custom link to Grafana, Prometheus or any other service. `+
`Supports templating - see https://docs.victoriametrics.com/vmalert/#templating . `+
`Supports templating - see https://docs.victoriametrics.com/victoriametrics/vmalert/#templating . `+
`For example, link to Grafana: -external.alert.source='explore?orgId=1&left={"datasource":"VictoriaMetrics","queries":[{"expr":{{.Expr|jsonEscape|queryEscape}},"refId":"A"}],"range":{"from":"now-1h","to":"now"}}'. `+
`Link to VMUI: -external.alert.source='vmui/#/?g0.expr={{.Expr|queryEscape}}'. `+
`If empty 'vmalert/alert?group_id={{.GroupID}}&alert_id={{.AlertID}}' is used.`)
@ -318,7 +318,7 @@ func usage() {
const s = `
vmalert processes alerts and recording rules.
See the docs at https://docs.victoriametrics.com/vmalert/ .
See the docs at https://docs.victoriametrics.com/victoriametrics/vmalert/ .
`
flagutil.Usage(s)
}


@ -169,7 +169,6 @@ groups:
checkCfg(nil)
groupsLen = lenLocked(m)
if groupsLen != 2 {
fmt.Println(m.groups)
t.Fatalf("expected to have exactly 2 groups loaded; got %d", groupsLen)
}


@ -83,7 +83,8 @@ func (m *manager) close() {
func (m *manager) startGroup(ctx context.Context, g *rule.Group, restore bool) error {
m.wg.Add(1)
id := g.ID()
id := g.GetID()
g.Init()
go func() {
defer m.wg.Done()
if restore {
@ -112,7 +113,7 @@ func (m *manager) update(ctx context.Context, groupsCfg []config.Group, restore
}
}
ng := rule.NewGroup(cfg, m.querierBuilder, *evaluationInterval, m.labels)
groupsRegistry[ng.ID()] = ng
groupsRegistry[ng.GetID()] = ng
}
if rrPresent && m.rw == nil {
@ -130,17 +131,17 @@ func (m *manager) update(ctx context.Context, groupsCfg []config.Group, restore
m.groupsMu.Lock()
for _, og := range m.groups {
ng, ok := groupsRegistry[og.ID()]
ng, ok := groupsRegistry[og.GetID()]
if !ok {
// old group is not present in new list,
// so must be stopped and deleted
og.Close()
delete(m.groups, og.ID())
delete(m.groups, og.GetID())
og = nil
continue
}
delete(groupsRegistry, ng.ID())
if og.Checksum != ng.Checksum {
delete(groupsRegistry, ng.GetID())
if og.GetCheckSum() != ng.GetCheckSum() {
toUpdate = append(toUpdate, updateItem{old: og, new: ng})
}
}
