Merge branch 'master' into tenant-from-headers-single

Commit 62904db42c by Andrii Chubatiuk, 2024-10-12 08:11:48 +03:00 (committed by GitHub)
229 changed files with 22419 additions and 1840 deletions

@@ -27,6 +27,7 @@ include package/release/Makefile
all: \
victoria-metrics-prod \
victoria-logs-prod \
vlogscli-prod \
vmagent-prod \
vmalert-prod \
vmalert-tool-prod \
@@ -51,6 +52,7 @@ publish: \
package: \
package-victoria-metrics \
package-victoria-logs \
package-vlogscli \
package-vmagent \
package-vmalert \
package-vmalert-tool \
@@ -263,6 +265,14 @@ release-victoria-metrics-windows-goarch: victoria-metrics-windows-$(GOARCH)-prod
cd bin && rm -rf \
victoria-metrics-windows-$(GOARCH)-prod.exe
release-victoria-logs-bundle: \
release-victoria-logs \
release-vlogscli
publish-victoria-logs-bundle: \
publish-victoria-logs \
publish-vlogscli
release-victoria-logs:
$(MAKE_PARALLEL) release-victoria-logs-linux-386 \
release-victoria-logs-linux-amd64 \
@@ -320,6 +330,63 @@ release-victoria-logs-windows-goarch: victoria-logs-windows-$(GOARCH)-prod
cd bin && rm -rf \
victoria-logs-windows-$(GOARCH)-prod.exe
release-vlogscli:
$(MAKE_PARALLEL) release-vlogscli-linux-386 \
release-vlogscli-linux-amd64 \
release-vlogscli-linux-arm \
release-vlogscli-linux-arm64 \
release-vlogscli-darwin-amd64 \
release-vlogscli-darwin-arm64 \
release-vlogscli-freebsd-amd64 \
release-vlogscli-openbsd-amd64 \
release-vlogscli-windows-amd64
release-vlogscli-linux-386:
GOOS=linux GOARCH=386 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-linux-amd64:
GOOS=linux GOARCH=amd64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-linux-arm:
GOOS=linux GOARCH=arm $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-linux-arm64:
GOOS=linux GOARCH=arm64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-darwin-amd64:
GOOS=darwin GOARCH=amd64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-darwin-arm64:
GOOS=darwin GOARCH=arm64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-freebsd-amd64:
GOOS=freebsd GOARCH=amd64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-openbsd-amd64:
GOOS=openbsd GOARCH=amd64 $(MAKE) release-vlogscli-goos-goarch
release-vlogscli-windows-amd64:
GOARCH=amd64 $(MAKE) release-vlogscli-windows-goarch
release-vlogscli-goos-goarch: vlogscli-$(GOOS)-$(GOARCH)-prod
cd bin && \
tar $(TAR_OWNERSHIP) --transform="flags=r;s|-$(GOOS)-$(GOARCH)||" -czf vlogscli-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
vlogscli-$(GOOS)-$(GOARCH)-prod \
&& sha256sum vlogscli-$(GOOS)-$(GOARCH)-$(PKG_TAG).tar.gz \
vlogscli-$(GOOS)-$(GOARCH)-prod \
| sed s/-$(GOOS)-$(GOARCH)-prod/-prod/ > vlogscli-$(GOOS)-$(GOARCH)-$(PKG_TAG)_checksums.txt
cd bin && rm -rf vlogscli-$(GOOS)-$(GOARCH)-prod
release-vlogscli-windows-goarch: vlogscli-windows-$(GOARCH)-prod
cd bin && \
zip vlogscli-windows-$(GOARCH)-$(PKG_TAG).zip \
vlogscli-windows-$(GOARCH)-prod.exe \
&& sha256sum vlogscli-windows-$(GOARCH)-$(PKG_TAG).zip \
vlogscli-windows-$(GOARCH)-prod.exe \
> vlogscli-windows-$(GOARCH)-$(PKG_TAG)_checksums.txt
cd bin && rm -rf \
vlogscli-windows-$(GOARCH)-prod.exe
release-vmutils: \
release-vmutils-linux-386 \
release-vmutils-linux-amd64 \

@@ -22,7 +22,7 @@ Here are some resources and information about VictoriaMetrics:
- Case studies: [Grammarly, Roblox, Wix,...](https://docs.victoriametrics.com/casestudies/).
- Available: [Binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest), [Docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/), [Source code](https://github.com/VictoriaMetrics/VictoriaMetrics)
- Deployment types: [Single-node version](https://docs.victoriametrics.com/), [Cluster version](https://docs.victoriametrics.com/cluster-victoriametrics/), and [Enterprise version](https://docs.victoriametrics.com/enterprise/)
- Changelog: [CHANGELOG](https://docs.victoriametrics.com/changelog/), and [How to upgrade](#how-to-upgrade-victoriametrics)
- Changelog: [CHANGELOG](https://docs.victoriametrics.com/changelog/), and [How to upgrade](https://docs.victoriametrics.com/#how-to-upgrade-victoriametrics)
- Community: [Slack](https://slack.victoriametrics.com/), [Twitter](https://twitter.com/VictoriaMetrics), [LinkedIn](https://www.linkedin.com/company/victoriametrics/), [YouTube](https://www.youtube.com/@VictoriaMetrics)
Yes, we open-source both the single-node VictoriaMetrics and the cluster version.
@@ -38,17 +38,17 @@ VictoriaMetrics is optimized for timeseries data, even when old time series are
* **Easy to set up**: No dependencies, single [small binary](https://medium.com/@valyala/stripping-dependency-bloat-in-victoriametrics-docker-image-983fb5912b0d), configuration through command-line flags, but the default is also fine-tuned; backup and restore with [instant snapshots](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
* **Global query view**: Multiple Prometheus instances or any other data sources may ingest data into VictoriaMetrics and be queried via a single query.
* **Various Protocols**: Supports metric scraping, ingestion and backfilling in various protocols.
* [Prometheus exporters](#how-to-scrape-prometheus-exporters-such-as-node-exporter), [Prometheus remote write API](#prometheus-setup), [Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format).
* [InfluxDB line protocol](#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) over HTTP, TCP and UDP.
* [Graphite plaintext protocol](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) with [tags](https://graphite.readthedocs.io/en/latest/tags.html#carbon).
* [OpenTSDB put message](#sending-data-via-telnet-put-protocol).
* [HTTP OpenTSDB /api/put requests](#sending-opentsdb-data-via-http-apiput-requests).
* [JSON line format](#how-to-import-data-in-json-line-format).
* [Arbitrary CSV data](#how-to-import-csv-data).
* [Native binary format](#how-to-import-data-in-native-format).
* [DataDog agent or DogStatsD](#how-to-send-data-from-datadog-agent).
* [NewRelic infrastructure agent](#how-to-send-data-from-newrelic-agent).
* [OpenTelemetry metrics format](#sending-data-via-opentelemetry).
* [Prometheus exporters](https://docs.victoriametrics.com/#how-to-scrape-prometheus-exporters-such-as-node-exporter), [Prometheus remote write API](https://docs.victoriametrics.com/#prometheus-setup), [Prometheus exposition format](https://docs.victoriametrics.com/#how-to-import-data-in-prometheus-exposition-format).
* [InfluxDB line protocol](https://docs.victoriametrics.com/#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) over HTTP, TCP and UDP.
* [Graphite plaintext protocol](https://docs.victoriametrics.com/#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) with [tags](https://graphite.readthedocs.io/en/latest/tags.html#carbon).
* [OpenTSDB put message](https://docs.victoriametrics.com/#sending-data-via-telnet-put-protocol).
* [HTTP OpenTSDB /api/put requests](https://docs.victoriametrics.com/#sending-opentsdb-data-via-http-apiput-requests).
* [JSON line format](https://docs.victoriametrics.com/#how-to-import-data-in-json-line-format).
* [Arbitrary CSV data](https://docs.victoriametrics.com/#how-to-import-csv-data).
* [Native binary format](https://docs.victoriametrics.com/#how-to-import-data-in-native-format).
* [DataDog agent or DogStatsD](https://docs.victoriametrics.com/#how-to-send-data-from-datadog-agent).
* [NewRelic infrastructure agent](https://docs.victoriametrics.com/#how-to-send-data-from-newrelic-agent).
* [OpenTelemetry metrics format](https://docs.victoriametrics.com/#sending-data-via-opentelemetry).
* **NFS-based storages**: Supports storing data on NFS-based storages such as Amazon EFS, Google Filestore.
* And many other features such as metrics relabeling, cardinality limiter, etc.

app/vlogscli/Makefile (new file)

@@ -0,0 +1,109 @@
# All these commands must run from repository root.
vlogscli:
APP_NAME=vlogscli $(MAKE) app-local
vlogscli-race:
APP_NAME=vlogscli RACE=-race $(MAKE) app-local
vlogscli-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker
vlogscli-pure-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-pure
vlogscli-linux-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-amd64
vlogscli-linux-arm-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-arm
vlogscli-linux-arm64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-arm64
vlogscli-linux-ppc64le-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-ppc64le
vlogscli-linux-386-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-linux-386
vlogscli-darwin-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-darwin-amd64
vlogscli-darwin-arm64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-darwin-arm64
vlogscli-freebsd-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-freebsd-amd64
vlogscli-openbsd-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-openbsd-amd64
vlogscli-windows-amd64-prod:
APP_NAME=vlogscli $(MAKE) app-via-docker-windows-amd64
package-vlogscli:
APP_NAME=vlogscli $(MAKE) package-via-docker
package-vlogscli-pure:
APP_NAME=vlogscli $(MAKE) package-via-docker-pure
package-vlogscli-amd64:
APP_NAME=vlogscli $(MAKE) package-via-docker-amd64
package-vlogscli-arm:
APP_NAME=vlogscli $(MAKE) package-via-docker-arm
package-vlogscli-arm64:
APP_NAME=vlogscli $(MAKE) package-via-docker-arm64
package-vlogscli-ppc64le:
APP_NAME=vlogscli $(MAKE) package-via-docker-ppc64le
package-vlogscli-386:
APP_NAME=vlogscli $(MAKE) package-via-docker-386
publish-vlogscli:
APP_NAME=vlogscli $(MAKE) publish-via-docker
vlogscli-linux-amd64:
APP_NAME=vlogscli CGO_ENABLED=1 GOOS=linux GOARCH=amd64 $(MAKE) app-local-goos-goarch
vlogscli-linux-arm:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=arm $(MAKE) app-local-goos-goarch
vlogscli-linux-arm64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=arm64 $(MAKE) app-local-goos-goarch
vlogscli-linux-ppc64le:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le $(MAKE) app-local-goos-goarch
vlogscli-linux-s390x:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=s390x $(MAKE) app-local-goos-goarch
vlogscli-linux-loong64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=loong64 $(MAKE) app-local-goos-goarch
vlogscli-linux-386:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=linux GOARCH=386 $(MAKE) app-local-goos-goarch
vlogscli-darwin-amd64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 $(MAKE) app-local-goos-goarch
vlogscli-darwin-arm64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 $(MAKE) app-local-goos-goarch
vlogscli-freebsd-amd64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=freebsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
vlogscli-openbsd-amd64:
APP_NAME=vlogscli CGO_ENABLED=0 GOOS=openbsd GOARCH=amd64 $(MAKE) app-local-goos-goarch
vlogscli-windows-amd64:
GOARCH=amd64 APP_NAME=vlogscli $(MAKE) app-local-windows-goarch
vlogscli-pure:
APP_NAME=vlogscli $(MAKE) app-local-pure
run-vlogscli:
APP_NAME=vlogscli $(MAKE) run-via-docker

app/vlogscli/README.md (new file)

@@ -0,0 +1,5 @@
# vlogscli
Command-line utility for querying [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/).
See [these docs](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/).
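A minimal interactive session can be started by pointing `vlogscli` at a VictoriaLogs instance (a sketch, assuming a locally built binary; the `-datasource.url` shown below is the flag's default from `main.go`):
```
# connect to a local VictoriaLogs instance and start the interactive prompt
./bin/vlogscli -datasource.url=http://localhost:9428/select/logsql/query
```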

@@ -0,0 +1,6 @@
ARG base_image=non-existing
FROM $base_image
ENTRYPOINT ["/vlogscli-prod"]
ARG src_binary=non-existing
COPY $src_binary ./vlogscli-prod

@@ -0,0 +1,238 @@
package main
import (
"bufio"
"encoding/json"
"fmt"
"io"
"sort"
"sync"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
type outputMode int
const (
outputModeJSONMultiline = outputMode(0)
outputModeJSONSingleline = outputMode(1)
outputModeLogfmt = outputMode(2)
outputModeCompact = outputMode(3)
)
func getOutputFormatter(outputMode outputMode) func(w io.Writer, fields []logstorage.Field) error {
switch outputMode {
case outputModeJSONMultiline:
return func(w io.Writer, fields []logstorage.Field) error {
return writeJSONObject(w, fields, true)
}
case outputModeJSONSingleline:
return func(w io.Writer, fields []logstorage.Field) error {
return writeJSONObject(w, fields, false)
}
case outputModeLogfmt:
return writeLogfmtObject
case outputModeCompact:
return writeCompactObject
default:
panic(fmt.Errorf("BUG: unexpected outputMode=%d", outputMode))
}
}
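// jsonPrettifier reads a stream of JSON objects from r, formats every object
// with formatter and exposes the formatted output via Read through an io.Pipe.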
type jsonPrettifier struct {
r io.ReadCloser
formatter func(w io.Writer, fields []logstorage.Field) error
d *json.Decoder
pr *io.PipeReader
pw *io.PipeWriter
bw *bufio.Writer
wg sync.WaitGroup
}
func newJSONPrettifier(r io.ReadCloser, outputMode outputMode) *jsonPrettifier {
d := json.NewDecoder(r)
pr, pw := io.Pipe()
bw := bufio.NewWriter(pw)
formatter := getOutputFormatter(outputMode)
jp := &jsonPrettifier{
r: r,
formatter: formatter,
d: d,
pr: pr,
pw: pw,
bw: bw,
}
jp.wg.Add(1)
go func() {
defer jp.wg.Done()
err := jp.prettifyJSONLines()
jp.closePipesWithError(err)
}()
return jp
}
func (jp *jsonPrettifier) closePipesWithError(err error) {
_ = jp.pr.CloseWithError(err)
_ = jp.pw.CloseWithError(err)
}
func (jp *jsonPrettifier) prettifyJSONLines() error {
for jp.d.More() {
fields, err := readNextJSONObject(jp.d)
if err != nil {
return err
}
sort.Slice(fields, func(i, j int) bool {
return fields[i].Name < fields[j].Name
})
if err := jp.formatter(jp.bw, fields); err != nil {
return err
}
// Flush bw after every output line in order to show results as soon as they appear.
if err := jp.bw.Flush(); err != nil {
return err
}
}
return nil
}
func (jp *jsonPrettifier) Close() error {
jp.closePipesWithError(io.ErrUnexpectedEOF)
err := jp.r.Close()
jp.wg.Wait()
return err
}
func (jp *jsonPrettifier) Read(p []byte) (int, error) {
return jp.pr.Read(p)
}
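// readNextJSONObject reads a single flat JSON object with string values from d
// and returns its key-value pairs as logstorage fields.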
func readNextJSONObject(d *json.Decoder) ([]logstorage.Field, error) {
t, err := d.Token()
if err != nil {
return nil, fmt.Errorf("cannot read '{': %w", err)
}
delim, ok := t.(json.Delim)
if !ok || delim.String() != "{" {
return nil, fmt.Errorf("unexpected token read; got %q; want '{'", delim)
}
var fields []logstorage.Field
for {
// Read object key
t, err := d.Token()
if err != nil {
return nil, fmt.Errorf("cannot read JSON object key or closing brace: %w", err)
}
delim, ok := t.(json.Delim)
if ok {
if delim.String() == "}" {
return fields, nil
}
return nil, fmt.Errorf("unexpected delimiter read; got %q; want '}'", delim)
}
key, ok := t.(string)
if !ok {
return nil, fmt.Errorf("unexpected token read for object key: %v; want string or '}'", t)
}
// read object value
t, err = d.Token()
if err != nil {
return nil, fmt.Errorf("cannot read JSON object value: %w", err)
}
value, ok := t.(string)
if !ok {
return nil, fmt.Errorf("unexpected token read for oject value: %v; want string", t)
}
fields = append(fields, logstorage.Field{
Name: key,
Value: value,
})
}
}
func writeLogfmtObject(w io.Writer, fields []logstorage.Field) error {
data := logstorage.MarshalFieldsToLogfmt(nil, fields)
_, err := fmt.Fprintf(w, "%s\n", data)
return err
}
func writeCompactObject(w io.Writer, fields []logstorage.Field) error {
if len(fields) == 1 {
// Just write field value as is without name
_, err := fmt.Fprintf(w, "%s\n", fields[0].Value)
return err
}
if len(fields) == 2 && fields[0].Name == "_time" || fields[1].Name == "_time" {
// Write _time\tfieldValue as is
if fields[0].Name == "_time" {
_, err := fmt.Fprintf(w, "%s\t%s\n", fields[0].Value, fields[1].Value)
return err
}
_, err := fmt.Fprintf(w, "%s\t%s\n", fields[1].Value, fields[0].Value)
return err
}
// Fall back to logfmt
return writeLogfmtObject(w, fields)
}
func writeJSONObject(w io.Writer, fields []logstorage.Field, isMultiline bool) error {
if len(fields) == 0 {
fmt.Fprintf(w, "{}\n")
return nil
}
fmt.Fprintf(w, "{")
writeNewlineIfNeeded(w, isMultiline)
if err := writeJSONObjectKeyValue(w, fields[0], isMultiline); err != nil {
return err
}
for _, f := range fields[1:] {
fmt.Fprintf(w, ",")
writeNewlineIfNeeded(w, isMultiline)
if err := writeJSONObjectKeyValue(w, f, isMultiline); err != nil {
return err
}
}
writeNewlineIfNeeded(w, isMultiline)
fmt.Fprintf(w, "}\n")
return nil
}
func writeNewlineIfNeeded(w io.Writer, isMultiline bool) {
if isMultiline {
fmt.Fprintf(w, "\n")
}
}
func writeJSONObjectKeyValue(w io.Writer, f logstorage.Field, isMultiline bool) error {
key := getJSONString(f.Name)
value := getJSONString(f.Value)
if isMultiline {
_, err := fmt.Fprintf(w, " %s: %s", key, value)
return err
}
_, err := fmt.Fprintf(w, "%s:%s", key, value)
return err
}
func getJSONString(s string) string {
data, err := json.Marshal(s)
if err != nil {
panic(fmt.Errorf("unexpected error when marshaling string to JSON: %w", err))
}
return string(data)
}

@@ -0,0 +1,116 @@
package main
import (
"errors"
"fmt"
"io"
"os"
"os/exec"
"os/signal"
"sync"
"syscall"
"github.com/mattn/go-isatty"
)
func isTerminal() bool {
return isatty.IsTerminal(os.Stdout.Fd()) && isatty.IsTerminal(os.Stderr.Fd())
}
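// readWithLess pages the data from r via the 'less' utility.
// If stdout isn't a terminal, the data is copied to stdout directly.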
func readWithLess(r io.Reader) error {
if !isTerminal() {
// Just write everything to stdout if no terminal is available.
_, err := io.Copy(os.Stdout, r)
if err != nil && !isErrPipe(err) {
return fmt.Errorf("error when forwarding data to stdout: %w", err)
}
if err := os.Stdout.Sync(); err != nil {
return fmt.Errorf("cannot sync data to stdout: %w", err)
}
return nil
}
pr, pw, err := os.Pipe()
if err != nil {
return fmt.Errorf("cannot create pipe: %w", err)
}
defer func() {
_ = pr.Close()
_ = pw.Close()
}()
// Ignore Ctrl+C in the current process, so 'less' could handle it properly
cancel := ignoreSignals(os.Interrupt)
defer cancel()
// Start 'less' process
path, err := exec.LookPath("less")
if err != nil {
return fmt.Errorf("cannot find 'less' command: %w", err)
}
p, err := os.StartProcess(path, []string{"less", "-F", "-X"}, &os.ProcAttr{
Env: append(os.Environ(), "LESSCHARSET=utf-8"),
Files: []*os.File{pr, os.Stdout, os.Stderr},
})
if err != nil {
return fmt.Errorf("cannot start 'less' process: %w", err)
}
// Close pr after 'less' finishes in a parallel goroutine
// in order to unblock forwarding data to stopped 'less' below.
waitch := make(chan *os.ProcessState)
go func() {
// Wait for 'less' process to finish.
ps, err := p.Wait()
if err != nil {
fatalf("unexpected error when waiting for 'less' process: %w", err)
}
_ = pr.Close()
waitch <- ps
}()
// Forward data from r to 'less'
_, err = io.Copy(pw, r)
_ = pw.Sync()
_ = pw.Close()
// Wait until 'less' finished
ps := <-waitch
// Verify 'less' status.
if !ps.Success() {
return fmt.Errorf("'less' finished with unexpected code %d", ps.ExitCode())
}
if err != nil && !isErrPipe(err) {
return fmt.Errorf("error when forwarding data to 'less': %w", err)
}
return nil
}
func isErrPipe(err error) bool {
return errors.Is(err, syscall.EPIPE) || errors.Is(err, io.ErrClosedPipe)
}
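// ignoreSignals discards the given signals until the returned cancel function is called.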
func ignoreSignals(sigs ...os.Signal) func() {
ch := make(chan os.Signal, 1)
signal.Notify(ch, sigs...)
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
for {
_, ok := <-ch
if !ok {
return
}
}
}()
return func() {
signal.Stop(ch)
close(ch)
wg.Wait()
}
}

app/vlogscli/main.go (new file)

@@ -0,0 +1,420 @@
package main
import (
"context"
"errors"
"flag"
"fmt"
"io"
"io/fs"
"net/http"
"net/url"
"os"
"os/signal"
"strconv"
"strings"
"syscall"
"time"
"github.com/ergochat/readline"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/buildinfo"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envflag"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logstorage"
)
var (
datasourceURL = flag.String("datasource.url", "http://localhost:9428/select/logsql/query", "URL for querying VictoriaLogs; "+
"see https://docs.victoriametrics.com/victorialogs/querying/#querying-logs . See also -tail.url")
tailURL = flag.String("tail.url", "", "URL for live tailing queries to VictoriaLogs; see https://docs.victoriametrics.com/victorialogs/querying/#live-tailing . "+
"The URL is automatically detected from -datasource.url by replacing /query with /tail at the end if -tail.url is empty")
historyFile = flag.String("historyFile", "vlogscli-history", "Path to file with command history")
header = flagutil.NewArrayString("header", "Optional header to pass in requests to -datasource.url in the form 'HeaderName: value'")
accountID = flag.Int("accountID", 0, "Account ID to query; see https://docs.victoriametrics.com/victorialogs/#multitenancy")
projectID = flag.Int("projectID", 0, "Project ID to query; see https://docs.victoriametrics.com/victorialogs/#multitenancy")
)
const (
firstLinePrompt = ";> "
nextLinePrompt = ""
)
func main() {
// Write flags and help message to stdout, since it is easier to grep or pipe.
flag.CommandLine.SetOutput(os.Stdout)
flag.Usage = usage
envflag.Parse()
buildinfo.Init()
logger.InitNoLogFlags()
hes, err := parseHeaders(*header)
if err != nil {
fatalf("cannot parse -header command-line flag: %s", err)
}
headers = hes
incompleteLine := ""
cfg := &readline.Config{
Prompt: firstLinePrompt,
DisableAutoSaveHistory: true,
Listener: func(line []rune, pos int, _ rune) ([]rune, int, bool) {
incompleteLine = string(line)
return line, pos, false
},
}
rl, err := readline.NewFromConfig(cfg)
if err != nil {
fatalf("cannot initialize readline: %s", err)
}
fmt.Fprintf(rl, "sending queries to %s\n", *datasourceURL)
runReadlineLoop(rl, &incompleteLine)
if err := rl.Close(); err != nil {
fatalf("cannot close readline: %s", err)
}
}
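// runReadlineLoop reads queries line by line until a trailing ';' completes the query,
// handles backslash commands for switching output modes and maintains the query history.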
func runReadlineLoop(rl *readline.Instance, incompleteLine *string) {
historyLines, err := loadFromHistory(*historyFile)
if err != nil {
fatalf("cannot load query history: %s", err)
}
for _, line := range historyLines {
if err := rl.SaveToHistory(line); err != nil {
fatalf("cannot initialize query history: %s", err)
}
}
outputMode := outputModeJSONMultiline
s := ""
for {
line, err := rl.ReadLine()
if err != nil {
switch err {
case io.EOF:
if s != "" {
// This is non-interactive query execution.
executeQuery(context.Background(), rl, s, outputMode)
}
return
case readline.ErrInterrupt:
if s == "" && *incompleteLine == "" {
fmt.Fprintf(rl, "interrupted\n")
os.Exit(128 + int(syscall.SIGINT))
}
// Default value for Ctrl+C - clear the prompt and store the incompletely entered line into history
s += *incompleteLine
historyLines = pushToHistory(rl, historyLines, s)
s = ""
rl.SetPrompt(firstLinePrompt)
continue
default:
fatalf("unexpected error in readline: %s", err)
}
}
s += line
if s == "" {
// Skip empty lines
continue
}
if isQuitCommand(s) {
fmt.Fprintf(rl, "bye!\n")
_ = pushToHistory(rl, historyLines, s)
return
}
if isHelpCommand(s) {
printCommandsHelp(rl)
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\s` {
fmt.Fprintf(rl, "singleline json output mode\n")
outputMode = outputModeJSONSingleline
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\m` {
fmt.Fprintf(rl, "multiline json output mode\n")
outputMode = outputModeJSONMultiline
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\c` {
fmt.Fprintf(rl, "compact output mode\n")
outputMode = outputModeCompact
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if s == `\logfmt` {
fmt.Fprintf(rl, "logfmt output mode\n")
outputMode = outputModeLogfmt
historyLines = pushToHistory(rl, historyLines, s)
s = ""
continue
}
if line != "" && !strings.HasSuffix(line, ";") {
// Assume the query is incomplete and allow the user finishing the query on the next line
s += "\n"
rl.SetPrompt(nextLinePrompt)
continue
}
// Execute the query
ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
executeQuery(ctx, rl, s, outputMode)
cancel()
historyLines = pushToHistory(rl, historyLines, s)
s = ""
rl.SetPrompt(firstLinePrompt)
}
}
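// pushToHistory appends s to the readline history and to the on-disk history file,
// skipping consecutive duplicates and keeping at most 500 entries.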
func pushToHistory(rl *readline.Instance, historyLines []string, s string) []string {
s = strings.TrimSpace(s)
if len(historyLines) == 0 || historyLines[len(historyLines)-1] != s {
historyLines = append(historyLines, s)
if len(historyLines) > 500 {
historyLines = historyLines[len(historyLines)-500:]
}
if err := saveToHistory(*historyFile, historyLines); err != nil {
fatalf("cannot save query history: %s", err)
}
}
if err := rl.SaveToHistory(s); err != nil {
fatalf("cannot update query history: %s", err)
}
return historyLines
}
func loadFromHistory(filePath string) ([]string, error) {
data, err := os.ReadFile(filePath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) {
return nil, nil
}
return nil, err
}
linesQuoted := strings.Split(string(data), "\n")
lines := make([]string, 0, len(linesQuoted))
i := 0
for _, lineQuoted := range linesQuoted {
i++
if lineQuoted == "" {
continue
}
line, err := strconv.Unquote(lineQuoted)
if err != nil {
return nil, fmt.Errorf("cannot parse line #%d at %s: %w; line: [%s]", i, filePath, err, line)
}
lines = append(lines, line)
}
return lines, nil
}
func saveToHistory(filePath string, lines []string) error {
linesQuoted := make([]string, len(lines))
for i, line := range lines {
lineQuoted := strconv.Quote(line)
linesQuoted[i] = lineQuoted
}
data := strings.Join(linesQuoted, "\n")
return os.WriteFile(filePath, []byte(data), 0600)
}
func isQuitCommand(s string) bool {
switch s {
case `\q`, "q", "quit", "exit":
return true
default:
return false
}
}
func isHelpCommand(s string) bool {
switch s {
case `\h`, "h", "help", "?":
return true
default:
return false
}
}
func printCommandsHelp(w io.Writer) {
fmt.Fprintf(w, "%s", `List of available commands:
\q - quit
\h - show this help
\s - singleline json output mode
\m - multiline json output mode
\c - compact output
\logfmt - logfmt output mode
\tail <query> - live tail <query> results
`)
}
func executeQuery(ctx context.Context, output io.Writer, qStr string, outputMode outputMode) {
if strings.HasPrefix(qStr, `\tail `) {
tailQuery(ctx, output, qStr, outputMode)
return
}
respBody := getQueryResponse(ctx, output, qStr, outputMode, *datasourceURL)
if respBody == nil {
return
}
defer func() {
_ = respBody.Close()
}()
if err := readWithLess(respBody); err != nil {
fmt.Fprintf(output, "error when reading query response: %s\n", err)
return
}
}
func tailQuery(ctx context.Context, output io.Writer, qStr string, outputMode outputMode) {
qStr = strings.TrimPrefix(qStr, `\tail `)
qURL, err := getTailURL()
if err != nil {
fmt.Fprintf(output, "%s\n", err)
return
}
respBody := getQueryResponse(ctx, output, qStr, outputMode, qURL)
if respBody == nil {
return
}
defer func() {
_ = respBody.Close()
}()
if _, err := io.Copy(output, respBody); err != nil {
if !errors.Is(err, context.Canceled) && !isErrPipe(err) {
fmt.Fprintf(output, "error when live tailing query response: %s\n", err)
}
fmt.Fprintf(output, "\n")
return
}
}
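// getTailURL returns -tail.url when it is set; otherwise it derives the live tailing URL
// from -datasource.url by replacing the trailing /query with /tail.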
func getTailURL() (string, error) {
if *tailURL != "" {
return *tailURL, nil
}
u, err := url.Parse(*datasourceURL)
if err != nil {
return "", fmt.Errorf("cannot parse -datasource.url=%q: %w", *datasourceURL, err)
}
if !strings.HasSuffix(u.Path, "/query") {
return "", fmt.Errorf("cannot find /query suffix in -datasource.url=%q", *datasourceURL)
}
u.Path = u.Path[:len(u.Path)-len("/query")] + "/tail"
return u.String(), nil
}
func getQueryResponse(ctx context.Context, output io.Writer, qStr string, outputMode outputMode, qURL string) io.ReadCloser {
// Parse the query and convert it to canonical view.
qStr = strings.TrimSuffix(qStr, ";")
q, err := logstorage.ParseQuery(qStr)
if err != nil {
fmt.Fprintf(output, "cannot parse query: %s\n", err)
return nil
}
qStr = q.String()
fmt.Fprintf(output, "executing [%s]...", qStr)
// Prepare HTTP request for qURL
args := make(url.Values)
args.Set("query", qStr)
data := strings.NewReader(args.Encode())
req, err := http.NewRequestWithContext(ctx, "POST", qURL, data)
if err != nil {
panic(fmt.Errorf("BUG: cannot prepare request to server: %w", err))
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
for _, h := range headers {
req.Header.Set(h.Name, h.Value)
}
req.Header.Set("AccountID", strconv.Itoa(*accountID))
req.Header.Set("ProjectID", strconv.Itoa(*projectID))
// Execute HTTP request at qURL
startTime := time.Now()
resp, err := httpClient.Do(req)
queryDuration := time.Since(startTime)
fmt.Fprintf(output, "; duration: %.3fs\n", queryDuration.Seconds())
if err != nil {
if errors.Is(err, context.Canceled) {
fmt.Fprintf(output, "\n")
} else {
fmt.Fprintf(output, "cannot execute query: %s\n", err)
}
return nil
}
// Verify response code
if resp.StatusCode != http.StatusOK {
body, err := io.ReadAll(resp.Body)
if err != nil {
body = []byte(fmt.Sprintf("cannot read response body: %s", err))
}
fmt.Fprintf(output, "unexpected status code: %d; response body:\n%s\n", resp.StatusCode, body)
return nil
}
// Prettify the response body
jp := newJSONPrettifier(resp.Body, outputMode)
return jp
}
var httpClient = &http.Client{}
var headers []headerEntry
type headerEntry struct {
Name string
Value string
}
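// parseHeaders parses -header flag values of the form 'HeaderName: value' into headerEntry pairs.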
func parseHeaders(a []string) ([]headerEntry, error) {
hes := make([]headerEntry, len(a))
for i, s := range a {
a := strings.SplitN(s, ":", 2)
if len(a) != 2 {
return nil, fmt.Errorf("cannot parse header=%q; it must contain at least one ':'; for example, 'Cookie: foo'", s)
}
hes[i] = headerEntry{
Name: strings.TrimSpace(a[0]),
Value: strings.TrimSpace(a[1]),
}
}
return hes, nil
}
func fatalf(format string, args ...any) {
fmt.Fprintf(os.Stderr, format+"\n", args...)
os.Exit(1)
}
func usage() {
const s = `
vlogscli is a command-line tool for querying VictoriaLogs.
See the docs at https://docs.victoriametrics.com/victorialogs/querying/vlogscli/
`
flagutil.Usage(s)
}

@@ -0,0 +1,11 @@
# See https://medium.com/on-docker/use-multi-stage-builds-to-inject-ca-certs-ad1e8f01de1b
ARG certs_image=non-existing
ARG root_image=non-existing
FROM $certs_image AS certs
RUN apk update && apk upgrade && apk --update --no-cache add ca-certificates
FROM $root_image
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ENTRYPOINT ["/vlogscli-prod"]
ARG TARGETARCH
COPY vlogscli-linux-${TARGETARCH}-prod ./vlogscli-prod

@@ -394,7 +394,9 @@ func ProcessLiveTailRequest(ctx context.Context, w http.ResponseWriter, r *http.
return
}
if !q.CanLiveTail() {
httpserver.Errorf(w, r, "the query [%s] cannot be used in live tailing; see https://docs.victoriametrics.com/victorialogs/querying/#live-tailing for details", q)
httpserver.Errorf(w, r, "the query [%s] cannot be used in live tailing; "+
"see https://docs.victoriametrics.com/victorialogs/querying/#live-tailing for details", q)
return
}
q.Optimize()

@@ -35,7 +35,7 @@ var (
"By default, the rate limit is disabled. It can be useful for limiting load on remote storage when big amounts of buffered data "+
"is sent after temporary unavailability of the remote storage. See also -maxIngestionRate")
sendTimeout = flagutil.NewArrayDuration("remoteWrite.sendTimeout", time.Minute, "Timeout for sending a single block of data to the corresponding -remoteWrite.url")
retryMinInterval = flagutil.NewArrayDuration("remoteWrite.retryMinInterval", time.Second, "The minimum delay between retry attempts to send a block of data to the corresponding -remoteWrite.url. Every next retry attempt will double the delay to prevent hammering of remote database. See also -remoteWrite.retryMaxInterval")
retryMinInterval = flagutil.NewArrayDuration("remoteWrite.retryMinInterval", time.Second, "The minimum delay between retry attempts to send a block of data to the corresponding -remoteWrite.url. Every next retry attempt will double the delay to prevent hammering of remote database. See also -remoteWrite.retryMaxTime")
retryMaxTime = flagutil.NewArrayDuration("remoteWrite.retryMaxTime", time.Minute, "The max time spent on retry attempts to send a block of data to the corresponding -remoteWrite.url. Change this value if it is expected for -remoteWrite.url to be unreachable for more than -remoteWrite.retryMaxTime. See also -remoteWrite.retryMinInterval")
proxyURL = flagutil.NewArrayString("remoteWrite.proxyURL", "Optional proxy URL for writing data to the corresponding -remoteWrite.url. "+
"Supported proxies: http, https, socks5. Example: -remoteWrite.proxyURL=socks5://proxy:1234")

@@ -1,19 +1,20 @@
package config
import (
"bytes"
"crypto/md5"
"fmt"
"hash/fnv"
"io"
"net/url"
"sort"
"strings"
"gopkg.in/yaml.v2"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/config/log"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/utils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/envtemplate"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"gopkg.in/yaml.v2"
)
// Group contains list of Rules grouped into
@@ -298,16 +299,30 @@ func parseConfig(data []byte) ([]Group, error) {
if err != nil {
return nil, fmt.Errorf("cannot expand environment vars: %w", err)
}
g := struct {
var result []Group
type cfgFile struct {
Groups []Group `yaml:"groups"`
// Catches all undefined fields and must be empty after parsing.
XXX map[string]any `yaml:",inline"`
}{}
err = yaml.Unmarshal(data, &g)
if err != nil {
}
decoder := yaml.NewDecoder(bytes.NewReader(data))
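// A rules file may contain multiple YAML documents separated by '---';
// decode and validate each document in turn.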
for {
var cf cfgFile
if err = decoder.Decode(&cf); err != nil {
if err == io.EOF { // EOF indicates no more documents to read
break
}
return nil, err
}
return g.Groups, checkOverflow(g.XXX, "config")
if err = checkOverflow(cf.XXX, "config"); err != nil {
return nil, err
}
result = append(result, cf.Groups...)
}
return result, nil
}
func checkOverflow(m map[string]any, ctx string) error {

@@ -9,11 +9,10 @@ import (
"testing"
"time"
"gopkg.in/yaml.v2"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/notifier"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmalert/templates"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/promutils"
"gopkg.in/yaml.v2"
)
func TestMain(m *testing.M) {
@@ -40,6 +39,34 @@ groups:
w.Write([]byte(`
groups:
- name: TestGroup
rules:
- record: conns
expr: max(vm_tcplistener_conns)`))
})
mux.HandleFunc("/good-multi-doc", func(w http.ResponseWriter, _ *http.Request) {
w.Write([]byte(`
groups:
- name: foo
rules:
- record: conns
expr: max(vm_tcplistener_conns)
---
groups:
- name: bar
rules:
- record: conns
expr: max(vm_tcplistener_conns)`))
})
mux.HandleFunc("/bad-multi-doc", func(w http.ResponseWriter, _ *http.Request) {
w.Write([]byte(`
bad_field:
- name: foo
rules:
- record: conns
expr: max(vm_tcplistener_conns)
---
groups:
- name: bar
rules:
- record: conns
expr: max(vm_tcplistener_conns)`))
@@ -48,13 +75,23 @@ groups:
srv := httptest.NewServer(mux)
defer srv.Close()
if _, err := Parse([]string{srv.URL + "/good-alert", srv.URL + "/good-rr"}, notifier.ValidateTemplates, true); err != nil {
f := func(urls []string, expErr bool) {
for i, u := range urls {
urls[i] = srv.URL + u
}
_, err := Parse(urls, notifier.ValidateTemplates, true)
if err != nil && !expErr {
t.Fatalf("error parsing URLs %s", err)
}
if _, err := Parse([]string{srv.URL + "/bad"}, notifier.ValidateTemplates, true); err == nil {
t.Fatalf("expected parsing error: %s", err)
if err == nil && expErr {
t.Fatalf("expecting error parsing URLs but got none")
}
}
f([]string{"/good-alert", "/good-rr", "/good-multi-doc"}, false)
f([]string{"/bad"}, true)
f([]string{"/bad-multi-doc"}, true)
f([]string{"/good-alert", "/bad"}, true)
}
func TestParse_Success(t *testing.T) {
@@ -86,6 +123,8 @@ func TestParse_Failure(t *testing.T) {
f([]string{"testdata/dir/rules4-bad.rules"}, "either `record` or `alert` must be set")
f([]string{"testdata/rules/rules1-bad.rules"}, "bad graphite expr")
f([]string{"testdata/dir/rules6-bad.rules"}, "missing ':' in header")
f([]string{"testdata/rules/rules-multi-doc-bad.rules"}, "unknown fields")
f([]string{"testdata/rules/rules-multi-doc-duplicates-bad.rules"}, "duplicate")
f([]string{"http://unreachable-url"}, "failed to")
}

@@ -0,0 +1,29 @@
groups:
- name: groupTest
rules:
- alert: VMRows
for: 1ms
expr: vm_rows > 0
labels:
label: bar
host: "{{ $labels.instance }}"
annotations:
summary: "{{ $value }}"
invalid-field-1: invalid-value-1
invalid-field-2: invalid-value-2
---
groups:
- name: TestGroup
interval: 2s
concurrency: 2
type: graphite
rules:
- alert: Conns
expr: filterSeries(sumSeries(host.receiver.interface.cons),'last','>', 500)
for: 3m
annotations:
summary: Too high connection number for {{$labels.instance}}
description: "It is {{ $value }} connections for {{$labels.instance}}"
invalid-field-2: invalid-value-2
invalid-field-3: invalid-value-3

@@ -0,0 +1,11 @@
groups:
- name: foo
rules:
- alert: VMRows
expr: vm_rows > 0
---
groups:
- name: foo
rules:
- alert: VMRows
expr: vm_rows > 0

@@ -0,0 +1,15 @@
---
groups:
- name: groupTest
rules:
- alert: VMRows
for: 1ms
expr: vm_rows > 0
labels:
label: bar
host: "{{ $labels.instance }}"
annotations:
summary: "{{ $value }}"
---
groups:

@@ -0,0 +1,46 @@
---
groups:
- name: groupTest
rules:
- alert: VMRows
for: 1ms
expr: vm_rows > 0
labels:
label: bar
host: "{{ $labels.instance }}"
annotations:
summary: "{{ $value }}"
- name: groupTest-2
rules:
- alert: VMRows-2
for: 1ms
expr: vm_rows_2 > 0
labels:
label: bar2
host: "{{ $labels.instance }}"
annotations:
summary: "\n markdown result is : \n---\n # header\n body: \n text \n----\n"
---
groups:
- name: groupTest-3
rules:
- alert: VMRows-3
for: 1ms
expr: vm_rows_3 > 0
labels:
label: bar_3
host: "{{ $labels.instance }}"
annotations:
summary: "{{ $value }}"
- name: groupTest-4
rules:
- alert: VMRows-4
for: 1ms
expr: vm_rows_4 > 0
labels:
label: bar4
host: "{{ $labels.instance }}"
annotations:
summary: "{{ $value }}"
---
groups:

@@ -31,14 +31,14 @@ import (
)
var (
rulePath = flagutil.NewArrayString("rule", `Path to the files or http url with alerting and/or recording rules.
rulePath = flagutil.NewArrayString("rule", `Path to the files or http url with alerting and/or recording rules in YAML format.
Supports hierarchical patterns and regexps.
Examples:
-rule="/path/to/file". Path to a single file with alerting rules.
-rule="http://<some-server-addr>/path/to/rules". HTTP URL to a page with alerting rules.
-rule="dir/*.yaml" -rule="/*.yaml" -rule="gcs://vmalert-rules/tenant_%{TENANT_ID}/prod".
-rule="dir/**/*.yaml". Includes all the .yaml files in "dir" subfolders recursively.
Rule files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.
Rule files support multi-document YAML. Files may contain %{ENV_VAR} placeholders, which are substituted by the corresponding env vars.
Enterprise version of vmalert supports S3 and GCS paths to rules.
For example: gs://bucket/path/to/rules, s3://bucket/path/to/rules

@@ -33,7 +33,7 @@ const (
var (
disablePathAppend = flag.Bool("remoteWrite.disablePathAppend", false, "Whether to disable automatic appending of '/api/v1/write' path to the configured -remoteWrite.url.")
sendTimeout = flag.Duration("remoteWrite.sendTimeout", 30*time.Second, "Timeout for sending data to the configured -remoteWrite.url.")
retryMinInterval = flag.Duration("remoteWrite.retryMinInterval", time.Second, "The minimum delay between retry attempts. Every next retry attempt will double the delay to prevent hammering of remote database. See also -remoteWrite.retryMaxInterval")
retryMinInterval = flag.Duration("remoteWrite.retryMinInterval", time.Second, "The minimum delay between retry attempts. Every next retry attempt will double the delay to prevent hammering of remote database. See also -remoteWrite.retryMaxTime")
retryMaxTime = flag.Duration("remoteWrite.retryMaxTime", time.Second*30, "The max time spent on retry attempts for the failed remote-write request. Change this value if it is expected for remoteWrite.url to be unreachable for more than -remoteWrite.retryMaxTime. See also -remoteWrite.retryMinInterval")
)

@@ -1092,7 +1092,6 @@ func evalInstantRollup(qt *querytracer.Tracer, ec *EvalConfig, funcName string,
again:
offset := int64(0)
tssCached := rollupResultCacheV.GetInstantValues(qt, expr, window, ec.Step, ec.EnforcedTagFilterss)
ec.QueryStats.addSeriesFetched(len(tssCached))
if len(tssCached) == 0 {
// Cache miss. Re-populate the missing data.
start := int64(fasttime.UnixTimestamp()*1000) - cacheTimestampOffset.Milliseconds()
@@ -1136,6 +1135,7 @@ func evalInstantRollup(qt *querytracer.Tracer, ec *EvalConfig, funcName string,
deleteCachedSeries(qt)
goto again
}
ec.QueryStats.addSeriesFetched(len(tssCached))
return tssCached, offset, nil
}
@@ -1647,6 +1647,10 @@ func evalRollupFuncWithMetricExpr(qt *querytracer.Tracer, ec *EvalConfig, funcNa
ecNew = copyEvalConfig(ec)
ecNew.Start = start
}
// The call to evalWithConfig below also updates QueryStats.addSeriesFetched
// without checking whether tss intersects with tssCached,
// so the final number could be bigger than the actual number of unique series.
// This discrepancy is acceptable, since the seriesFetched stat is informational only.
tss, err := evalWithConfig(ecNew)
if err != nil {
return nil, err

@@ -526,7 +526,6 @@ func writeStorageMetrics(w io.Writer, strg *storage.Storage) {
metrics.WriteCounterUint64(w, `vm_deduplicated_samples_total{type="merge"}`, m.DedupsDuringMerge)
metrics.WriteGaugeUint64(w, `vm_snapshots`, m.SnapshotsCount)
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="nan_value"}`, m.NaNValueRows)
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="big_timestamp"}`, m.TooBigTimestampRows)
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="small_timestamp"}`, m.TooSmallTimestampRows)
metrics.WriteCounterUint64(w, `vm_rows_ignored_total{reason="invalid_raw_metric_name"}`, m.InvalidRawMetricNames)

File diff suppressed because it is too large.

@@ -137,14 +137,14 @@ export const barDisp = (stroke: Stroke, fill: Fill): Disp => {
export const delSeries = (u: uPlot) => {
for (let i = u.series.length - 1; i >= 0; i--) {
u.delSeries(i);
i && u.delSeries(i);
}
};
export const addSeries = (u: uPlot, series: uPlotSeries[], spanGaps = false) => {
series.forEach((s) => {
series.forEach((s,i) => {
if (s.label) s.spanGaps = spanGaps;
u.addSeries(s);
i && u.addSeries(s);
});
};

@@ -6200,7 +6200,7 @@
"type": "prometheus",
"uid": "$ds"
},
"description": "How many datapoints are in RAM queue waiting to be written into storage. The number of pending data points should be in the range from 0 to `2*<ingestion_rate>`, since VictoriaMetrics pushes pending data to persistent storage every second. The index datapoints value in general is much lower.",
"description": "How many datapoints are in RAM queue waiting to be written into storage. The number of pending data points should be in the range from 0 to `3*<ingestion_rate>`, since VictoriaMetrics pushes pending data to persistent storage every two seconds. The index datapoints value in general is much lower.",
"fieldConfig": {
"defaults": {
"color": {

@@ -4607,7 +4607,7 @@
"type": "prometheus",
"uid": "$ds"
},
"description": "How many datapoints are in RAM queue waiting to be written into storage. The number of pending data points should be in the range from 0 to `2*<ingestion_rate>`, since VictoriaMetrics pushes pending data to persistent storage every second.",
"description": "How many datapoints are in RAM queue waiting to be written into storage. The number of pending data points should be in the range from 0 to `3*<ingestion_rate>`, since VictoriaMetrics pushes pending data to persistent storage every two seconds.",
"fieldConfig": {
"defaults": {
"color": {

@@ -6201,7 +6201,7 @@
"type": "victoriametrics-datasource",
"uid": "$ds"
},
"description": "How many datapoints are in RAM queue waiting to be written into storage. The number of pending data points should be in the range from 0 to `2*<ingestion_rate>`, since VictoriaMetrics pushes pending data to persistent storage every second. The index datapoints value in general is much lower.",
"description": "How many datapoints are in RAM queue waiting to be written into storage. The number of pending data points should be in the range from 0 to `3*<ingestion_rate>`, since VictoriaMetrics pushes pending data to persistent storage every two seconds. The index datapoints value in general is much lower.",
"fieldConfig": {
"defaults": {
"color": {

@@ -4608,7 +4608,7 @@
"type": "victoriametrics-datasource",
"uid": "$ds"
},
"description": "How many datapoints are in RAM queue waiting to be written into storage. The number of pending data points should be in the range from 0 to `2*<ingestion_rate>`, since VictoriaMetrics pushes pending data to persistent storage every second.",
"description": "How many datapoints are in RAM queue waiting to be written into storage. The number of pending data points should be in the range from 0 to `3*<ingestion_rate>`, since VictoriaMetrics pushes pending data to persistent storage every two seconds.",
"fieldConfig": {
"defaults": {
"color": {

@@ -42,7 +42,7 @@ app-via-docker: package-builder
$(BUILDER_IMAGE) \
go build $(RACE) -trimpath -buildvcs=false \
-ldflags "-extldflags '-static' $(GO_BUILDINFO)" \
-tags 'netgo osusergo nethttpomithttp2 musl' \
-tags 'netgo osusergo musl' \
-o bin/$(APP_NAME)$(APP_SUFFIX)-prod $(PKG_PREFIX)/app/$(APP_NAME)
app-via-docker-windows: package-builder
@@ -57,7 +57,7 @@ app-via-docker-windows: package-builder
$(BUILDER_IMAGE) \
go build $(RACE) -trimpath -buildvcs=false \
-ldflags "-s -w -extldflags '-static' $(GO_BUILDINFO)" \
-tags 'netgo osusergo nethttpomithttp2' \
-tags 'netgo osusergo' \
-o bin/$(APP_NAME)-windows$(APP_SUFFIX)-prod.exe $(PKG_PREFIX)/app/$(APP_NAME)
package-via-docker: package-base

@@ -40,7 +40,7 @@ groups:
labels:
severity: critical
annotations:
dashboard: http://localhost:3000/d/oS7Bi_0Wz?viewPanel=200&var-instance={{ $labels.instance }}"
dashboard: "http://localhost:3000/d/oS7Bi_0Wz?viewPanel=200&var-instance={{ $labels.instance }}"
summary: "Instance {{ $labels.instance }} (job={{ $labels.job }}) will run out of disk space soon"
description: "Disk utilisation on instance {{ $labels.instance }} is more than 80%.\n
Having less than 20% of free disk space could cripple merges processes and overall performance.

@@ -4,7 +4,7 @@ services:
# And forward them to --remoteWrite.url
vmagent:
container_name: vmagent
image: victoriametrics/vmagent:v1.103.0
image: victoriametrics/vmagent:v1.104.0
depends_on:
- "vminsert"
ports:
@@ -39,7 +39,7 @@ services:
# where N is number of vmstorages (2 in this case).
vmstorage-1:
container_name: vmstorage-1
image: victoriametrics/vmstorage:v1.103.0-cluster
image: victoriametrics/vmstorage:v1.104.0-cluster
ports:
- 8482
- 8400
@@ -51,7 +51,7 @@ services:
restart: always
vmstorage-2:
container_name: vmstorage-2
image: victoriametrics/vmstorage:v1.103.0-cluster
image: victoriametrics/vmstorage:v1.104.0-cluster
ports:
- 8482
- 8400
@@ -66,7 +66,7 @@ services:
# pre-process them and distributes across configured vmstorage shards.
vminsert:
container_name: vminsert
image: victoriametrics/vminsert:v1.103.0-cluster
image: victoriametrics/vminsert:v1.104.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -81,7 +81,7 @@ services:
# vmselect collects results from configured `--storageNode` shards.
vmselect-1:
container_name: vmselect-1
image: victoriametrics/vmselect:v1.103.0-cluster
image: victoriametrics/vmselect:v1.104.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -94,7 +94,7 @@ services:
restart: always
vmselect-2:
container_name: vmselect-2
image: victoriametrics/vmselect:v1.103.0-cluster
image: victoriametrics/vmselect:v1.104.0-cluster
depends_on:
- "vmstorage-1"
- "vmstorage-2"
@@ -112,7 +112,7 @@ services:
# It can be used as an authentication proxy.
vmauth:
container_name: vmauth
image: victoriametrics/vmauth:v1.103.0
image: victoriametrics/vmauth:v1.104.0
depends_on:
- "vmselect-1"
- "vmselect-2"
@@ -127,7 +127,7 @@ services:
# vmalert executes alerting and recording rules
vmalert:
container_name: vmalert
image: victoriametrics/vmalert:v1.103.0
image: victoriametrics/vmalert:v1.104.0
depends_on:
- "vmauth"
ports:

@@ -40,7 +40,7 @@ services:
# storing logs and serving read queries.
victorialogs:
container_name: victorialogs
image: docker.io/victoriametrics/victoria-logs:v0.32.1-victorialogs
image: docker.io/victoriametrics/victoria-logs:v0.35.0-victorialogs
command:
- "--storageDataPath=/vlogs"
- "--httpListenAddr=:9428"
@@ -55,7 +55,7 @@ services:
# scraping, storing metrics and serve read requests.
victoriametrics:
container_name: victoriametrics
image: victoriametrics/victoria-metrics:v1.103.0
image: victoriametrics/victoria-metrics:v1.104.0
ports:
- 8428:8428
volumes:

@@ -4,7 +4,7 @@ services:
# And forward them to --remoteWrite.url
vmagent:
container_name: vmagent
image: victoriametrics/vmagent:v1.103.0
image: victoriametrics/vmagent:v1.104.0
depends_on:
- "victoriametrics"
ports:
@@ -22,7 +22,7 @@ services:
# storing metrics and serve read requests.
victoriametrics:
container_name: victoriametrics
image: victoriametrics/victoria-metrics:v1.103.0
image: victoriametrics/victoria-metrics:v1.104.0
ports:
- 8428:8428
- 8089:8089
@@ -65,7 +65,7 @@ services:
# vmalert executes alerting and recording rules
vmalert:
container_name: vmalert
image: victoriametrics/vmalert:v1.103.0
image: victoriametrics/vmalert:v1.104.0
depends_on:
- "victoriametrics"
- "alertmanager"

@@ -1,7 +1,7 @@
services:
# meta service will be ignored by compose
.victorialogs:
image: docker.io/victoriametrics/victoria-logs:v0.32.1-victorialogs
image: docker.io/victoriametrics/victoria-logs:v0.35.0-victorialogs
command:
- -storageDataPath=/vlogs
- -loggerFormat=json

@@ -1,7 +1,7 @@
services:
vmagent:
container_name: vmagent
image: victoriametrics/vmagent:v1.103.0
image: victoriametrics/vmagent:v1.104.0
depends_on:
- "victoriametrics"
ports:
@@ -18,7 +18,7 @@ services:
victoriametrics:
container_name: victoriametrics
image: victoriametrics/victoria-metrics:v1.103.0
image: victoriametrics/victoria-metrics:v1.104.0
ports:
- 8428:8428
volumes:
@@ -51,7 +51,7 @@ services:
vmalert:
container_name: vmalert
image: victoriametrics/vmalert:v1.103.0
image: victoriametrics/vmalert:v1.104.0
depends_on:
- "victoriametrics"
ports:
@@ -73,7 +73,7 @@ services:
restart: always
vmanomaly:
container_name: vmanomaly
image: victoriametrics/vmanomaly:v1.11.0
image: victoriametrics/vmanomaly:v1.16.3
depends_on:
- "victoriametrics"
ports:
@@ -87,7 +87,7 @@ services:
platform: "linux/amd64"
command:
- "/config.yaml"
- "--license-file=/license"
- "--licenseFile=/license"
alertmanager:
container_name: alertmanager
image: prom/alertmanager:v0.27.0

@@ -3,7 +3,7 @@ version: '3'
services:
# Run `make package-victoria-logs` to build victoria-logs image
vlogs:
image: docker.io/victoriametrics/victoria-logs:v0.32.1-victorialogs
image: docker.io/victoriametrics/victoria-logs:v0.35.0-victorialogs
volumes:
- vlogs:/vlogs
ports:
@@ -46,7 +46,7 @@ services:
- '--config=/config.yml'
vmsingle:
image: victoriametrics/victoria-metrics:v1.103.0
image: victoriametrics/victoria-metrics:v1.104.0
ports:
- '8428:8428'
command:

@@ -19,8 +19,8 @@ On the server:
* VictoriaMetrics is running on ports: 8428, 8089, 4242, 2003 and they are bound to the local interface.
********************************************************************************
# This image includes v1.103.0 release of VictoriaMetrics.
# See Release notes https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.103.0
# This image includes v1.104.0 release of VictoriaMetrics.
# See Release notes https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0
# Welcome to VictoriaMetrics droplet!

@@ -61,7 +61,7 @@ It increases cluster availability, and simplifies cluster maintenance as well as
## Multitenancy
VictoriaMetrics cluster supports multiple isolated tenants (aka namespaces).
Tenants are identified by `accountID` or `accountID:projectID`, which are either put inside request urls or inside headers.
Tenants are identified by `accountID` or `accountID:projectID`, which are either put inside request URLs or inside headers for writes and reads.
It's also possible to accept data from multiple tenants via special `multitenant` endpoints `http://vminsert:8480/insert/multitenant/<suffix>`,
where `<suffix>` can be replaced with any supported suffix for data ingestion from [this list](#url-format). In this case tenants can be passed via:
- [`headers`](#multitenancy-via-headers)
@@ -82,8 +82,6 @@ when different tenants have different amounts of data and different query load.
- The database performance and resource usage doesn't depend on the number of tenants. It depends mostly on the total number of active time series in all the tenants. A time series is considered active if it received at least a single sample during the last hour or it has been touched by queries during the last hour.
- VictoriaMetrics doesn't support querying multiple tenants in a single request.
- The list of registered tenants can be obtained via `http://<vmselect>:8481/admin/tenants` url. See [these docs](#url-format).
- VictoriaMetrics exposes various per-tenant statistics via metrics - see [these docs](https://docs.victoriametrics.com/pertenantstatistic/).
@@ -103,9 +101,9 @@ If no `TenantID` header is set, tenant information will be obtained from [specia
In this case the account id and project id are obtained from optional `vm_account_id` and `vm_project_id` labels of the incoming samples.
If `vm_account_id` or `vm_project_id` labels are missing or invalid, then the corresponding `accountID` or `projectID` is set to 0.
These labels are automatically removed from samples before forwarding them to `vmstorage`.
For example, if the following samples are written into `http://vminsert:8480/insert/multitenant/prometheus/api/v1/write`:
```
http_requests_total{path="/foo",vm_account_id="42"} 12
http_requests_total{path="/bar",vm_account_id="7",vm_project_id="9"} 34
@@ -122,8 +120,54 @@ such as [Graphite](https://docs.victoriametrics.com/#how-to-send-data-from-graph
[InfluxDB line protocol via TCP and UDP](https://docs.victoriametrics.com/#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) and
[OpenTSDB telnet put protocol](https://docs.victoriametrics.com/#sending-data-via-telnet-put-protocol).
**Security considerations:** it is recommended to restrict access to `multitenant` endpoints to trusted sources only,
since an untrusted source may corrupt per-tenant data by writing unwanted samples to arbitrary tenants.
**Reads**
_Available from [v1.104.0](https://docs.victoriametrics.com/changelog/#v11040)._
_For better performance prefer specifying [tenants in read URL](https://docs.victoriametrics.com/cluster-victoriametrics/#url-format)._
`vmselect` can execute queries over multiple [tenants](#multitenancy) via special `multitenant` endpoints `http://vmselect:8481/select/multitenant/<suffix>`.
Currently supported endpoints for `<suffix>` are:
- `/prometheus/api/v1/query`
- `/prometheus/api/v1/query_range`
- `/prometheus/api/v1/series`
- `/prometheus/api/v1/labels`
- `/prometheus/api/v1/label/<label_name>/values`
- `/prometheus/api/v1/status/active_queries`
- `/prometheus/api/v1/status/top_queries`
- `/prometheus/api/v1/status/tsdb`
- `/prometheus/api/v1/export`
- `/prometheus/api/v1/export/csv`
- `/vmui`
It is allowed to explicitly specify tenant IDs via `vm_account_id` and `vm_project_id` labels in the query.
For example, the following query fetches metric `up` for the tenants `accountID=42` and `accountID=7, projectID=9`:
```
up{vm_account_id="7", vm_project_id="9" or vm_account_id="42"}
```
`vm_account_id` and `vm_project_id` labels support all operators for label matching. For example:
```
up{vm_account_id!="42"} # selects all the time series except those belonging to accountID=42
up{vm_account_id=~"4.*"} # selects all the time series belonging to accountIDs starting with 4
```
Alternatively, it is possible to use [`extra_filters[]` and `extra_label`](https://docs.victoriametrics.com/#prometheus-querying-api-enhancements)
query args to apply additional filters for the query:
```
curl 'http://vmselect:8481/select/multitenant/prometheus/api/v1/query' \
-d 'query=up' \
-d 'extra_filters[]={vm_account_id="7",vm_project_id="9"}' \
-d 'extra_filters[]={vm_account_id="42"}'
```
The precedence for applying filters for tenants follows this order:
1. Filter tenants by `extra_label` and `extra_filters` filters.
These filters have the highest priority and are applied first when provided through the query arguments.
2. Filter tenants by label selectors defined in the MetricsQL query expression.
**Security considerations**
It is recommended to restrict access to `multitenant` endpoints to trusted sources only,
since an untrusted source may corrupt per-tenant data by writing unwanted samples or gain access to data of arbitrary tenants.
## Binaries
@@ -715,6 +759,13 @@ Some workloads may need fine-grained resource usage limits. In these cases the f
Queries to this endpoint may take big amounts of CPU time and memory at `vmstorage` and `vmselect` when the database contains
big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/faq/#what-is-high-churn-rate).
In this case it might be useful to set `-search.maxSeries` to a quite low value in order to limit CPU and memory usage.
- `-search.maxDeleteSeries` at `vmselect` limits the number of unique time
series that can be deleted by a single
[/api/v1/admin/tsdb/delete_series](https://docs.victoriametrics.com/url-examples/#apiv1admintsdbdelete_series)
call. Deleting too many time series may require large amounts of CPU and memory
at `vmstorage`, and this limit guards against unplanned resource usage spikes.
Also see the [How to delete time series](#how-to-delete-time-series) section to
learn about different ways of deleting series.
- `-search.maxTagKeys` at `vmstorage` limits the number of items that may be returned from
[/api/v1/labels](https://docs.victoriametrics.com/url-examples/#apiv1labels). This endpoint is used mostly by Grafana
for auto-completion of label names. Queries to this endpoint may take big amounts of CPU time and memory at `vmstorage` and `vmselect`
@ -1519,6 +1570,8 @@ Below is the output for `/path/to/vmselect -help`:
Log queries with execution time exceeding this value. Zero disables slow query logging. See also -search.logQueryMemoryUsage (default 5s)
-search.maxConcurrentRequests int
The maximum number of concurrent search requests. It shouldn't be high, since a single request can saturate all the CPU cores, while many concurrently executed requests may require high amounts of memory. See also -search.maxQueueDuration and -search.maxMemoryPerQuery (default 16)
-search.maxDeleteSeries int
The maximum number of time series, which can be deleted using /api/v1/admin/tsdb/delete_series. This option allows limiting memory usage (default 1000000)
-search.maxExportDuration duration
The maximum duration for /api/v1/export call (default 720h0m0s)
-search.maxExportSeries int
@ -1598,6 +1651,8 @@ Below is the output for `/path/to/vmselect -help`:
-search.inmemoryBufSizeBytes size
Size for in-memory data blocks used during processing search requests. By default, the size is automatically calculated based on available memory. Adjust this flag value if you observe that vm_tmp_blocks_max_inmemory_file_size_bytes metric constantly shows much higher values than vm_tmp_blocks_inmemory_file_size_bytes. See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6851
Supports the following optional suffixes for size values: KB, MB, GB, TB, KiB, MiB, GiB, TiB (default 0)
-search.tenantCacheExpireDuration duration
The expiry duration for list of tenants for multi-tenant queries. (default 5m0s)
-search.treatDotsAsIsInRegexps
Whether to treat dots as is in regexp label filters used in queries. For example, foo{bar=~"a.b.c"} will be automatically converted to foo{bar=~"a\\.b\\.c"}, i.e. all the dots in regexp filters will be automatically escaped in order to match only dot char instead of matching any char. Dots in ".+", ".*" and ".{n}" regexps aren't escaped. This option is DEPRECATED in favor of {__graphite__="a.*.c"} syntax for selecting metrics matching the given Graphite metrics filter
-selectNode array

View file

@ -214,7 +214,7 @@ We provide commercial support for both versions. [Contact us](mailto:info@victor
The following commercial versions of VictoriaMetrics are available:
* [VictoriaMetrics Cloud](https://cloud.victoriametrics.com/signUp?utm_source=website&utm_campaign=docs_vm_faq) the most cost-efficient hosted monitoring platform, operated by VictoriaMetrics core team.
* [VictoriaMetrics Cloud](https://console.victoriametrics.cloud/signUp?utm_source=website&utm_campaign=docs_vm_faq) - the most cost-efficient hosted monitoring platform, operated by VictoriaMetrics core team.
The following commercial versions of VictoriaMetrics are planned:

View file

@ -22,5 +22,5 @@ to [the latest available releases](https://docs.victoriametrics.com/changelog/).
## Currently supported LTS release lines
- v1.102.x - the latest one is [v1.102.2 LTS release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.102.2)
- v1.97.x - the latest one is [v1.97.7 LTS release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.7)
- v1.102.x - the latest one is [v1.102.4 LTS release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.102.4)
- v1.97.x - the latest one is [v1.97.9 LTS release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.9)

View file

@ -18,7 +18,7 @@ VictoriaMetrics is distributed in the following forms:
Single-server-VictoriaMetrics perfectly scales vertically and easily handles millions of metrics/s;
* [VictoriaMetrics Cluster](https://docs.victoriametrics.com/cluster-victoriametrics/) - set of components
for building horizontally scalable clusters.
* [VictoriaMetrics Cloud](https://cloud.victoriametrics.com/signUp?utm_source=website&utm_campaign=docs_vm_quickstart_guide) - allows
* [VictoriaMetrics Cloud](https://console.victoriametrics.cloud/signUp?utm_source=website&utm_campaign=docs_vm_quickstart_guide) - allows
users to run VictoriaMetrics, hosted on AWS, without the need to perform typical DevOps tasks such as proper configuration, monitoring, logs collection, access protection, software updates, backups, etc.
VictoriaMetrics is available as:
@ -42,7 +42,7 @@ VictoriaMetrics is developed at a fast pace, so it is recommended periodically c
### Starting VictoriaMetrics Single Node or Cluster on VictoriaMetrics Cloud {anchor="starting-vm-on-cloud"}
The following steps will guide you through starting VictoriaMetrics on VictoriaMetrics Cloud:
1. Go to [VictoriaMetrics Cloud](https://cloud.victoriametrics.com/signUp?utm_source=website&utm_campaign=docs_vm_quickstart_guide) and sign up (it's free).
1. Go to [VictoriaMetrics Cloud](https://console.victoriametrics.cloud/signUp?utm_source=website&utm_campaign=docs_vm_quickstart_guide) and sign up (it's free).
1. After signing up, you will be immediately granted $200 of trial credits you can spend on running Single node or Cluster.
1. Navigate to the VictoriaMetrics Cloud [quick start](https://docs.victoriametrics.com/victoriametrics-cloud/quickstart/#creating-deployments) guide for detailed instructions.

View file

@ -1359,7 +1359,7 @@ Additionally, VictoriaMetrics can accept metrics via the following popular data
* `/api/v1/import/prometheus` for importing data in Prometheus exposition format and in [Pushgateway format](https://github.com/prometheus/pushgateway#url).
See [these docs](#how-to-import-data-in-prometheus-exposition-format) for details.
Please note, most of the ingestion APIs (except [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write))
Please note, most of the ingestion APIs (except [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) and [OpenTelemetry](#sending-data-via-opentelemetry))
are optimized for performance and process data in a streaming fashion.
It means that a client can transfer an unlimited amount of data through the open connection. Because of this, import APIs
may not return parsing errors to the client, since the data stream is expected not to be interrupted.
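For example (a minimal sketch assuming a single-node VictoriaMetrics instance on the default port),
data in Prometheus exposition format can be pushed in a streaming fashion via `curl`:

```sh
curl -X POST 'http://localhost:8428/api/v1/import/prometheus' \
    -d 'foo_bar{instance="host1"} 123'
```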
@ -1538,10 +1538,15 @@ VictoriaMetrics supports data ingestion via [OpenTelemetry protocol for metrics]
VictoriaMetrics expects `protobuf`-encoded requests at `/opentelemetry/v1/metrics`.
Set HTTP request header `Content-Encoding: gzip` when sending gzip-compressed data to `/opentelemetry/v1/metrics`.
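For illustration, here is a sketch assuming a single-node instance on the default port and a hypothetical
`metrics.pb` file containing a `protobuf`-encoded OTLP metrics request:

```sh
gzip -c metrics.pb | curl -X POST 'http://localhost:8428/opentelemetry/v1/metrics' \
    -H 'Content-Type: application/x-protobuf' \
    -H 'Content-Encoding: gzip' \
    --data-binary @-
```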
VictoriaMetrics supports only [cumulative temporality](https://opentelemetry.io/docs/specs/otel/metrics/data-model/#temporality)
for received measurements. The number of dropped unsupported samples is exposed via the `vm_protoparser_rows_dropped_total{type="opentelemetry"}` metric.
VictoriaMetrics stores the ingested OpenTelemetry [raw samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples) as is without any transformations.
Pass `-opentelemetry.usePrometheusNaming` command-line flag to VictoriaMetrics for automatic conversion of metric names and labels into Prometheus-compatible format.
OpenTelemetry [exponential histogram](https://opentelemetry.io/docs/specs/otel/metrics/data-model/#exponentialhistogram) is automatically converted
to [VictoriaMetrics histogram format](https://valyala.medium.com/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350).
Using the following exporter configuration in the opentelemetry collector will allow you to send metrics into VictoriaMetrics:
Using the following exporter configuration in the OpenTelemetry collector will allow you to send metrics into VictoriaMetrics:
```yaml
exporters:
@ -3038,6 +3043,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
How frequently to reload the full state from Kubernetes API server (default 30m0s)
-promscrape.kubernetes.attachNodeMetadataAll
Whether to set attach_metadata.node=true for all the kubernetes_sd_configs at -promscrape.config . It is possible to set attach_metadata.node=false individually per each kubernetes_sd_configs . See https://docs.victoriametrics.com/sd_configs/#kubernetes_sd_configs
-promscrape.kubernetes.useHTTP2Client
Whether to use HTTP/2 client for connection to Kubernetes API server. This may reduce amount of concurrent connections to API server when watching for a big number of Kubernetes objects.
-promscrape.kubernetesSDCheckInterval duration
Interval for checking for changes in Kubernetes API server. This works only if kubernetes_sd_configs is configured in '-promscrape.config' file. See https://docs.victoriametrics.com/sd_configs/#kubernetes_sd_configs for details (default 30s)
-promscrape.kumaSDCheckInterval duration

View file

@ -88,11 +88,12 @@ Bumping the limits may significantly improve build speed.
file created at the step `a`.
- To run the command `TAG=v1.xx.y make github-create-release github-upload-assets`, so a new release is created
and all the needed assets are re-uploaded to it.
1. Test new images on [sandbox](https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/blob/master/Release-Guide.md#testing-releases).
1. Go to <https://github.com/VictoriaMetrics/VictoriaMetrics/releases> and verify that draft release with the name `TAG` has been created
and this release contains all the needed binaries and checksums.
1. Update the release description with the content of [CHANGELOG](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/CHANGELOG.md) for this release.
1. Publish release by pressing "Publish release" green button in GitHub's UI.
1. Bump version of the VictoriaMetrics cluster in the sandbox environment by opening and merging PR.
1. Update GitHub tickets related to the new release. Usually, such tickets have label [waiting for release](https://github.com/VictoriaMetrics/VictoriaMetrics/issues?q=is%3Aopen+is%3Aissue+label%3A%22waiting+for+release%22). Close such tickets by mentioning which release they were included into, and remove the label. See example [here](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6637#issuecomment-2390729511).
1. Bump VictoriaMetrics version at `deployment/docker/docker-compose.yml` and at `deployment/docker/docker-compose-cluster.yml`.
1. Follow the instructions in [release follow-up](https://github.com/VictoriaMetrics/VictoriaMetrics-enterprise/blob/master/Release-Guide.md).

View file

@ -15,6 +15,28 @@ according to [these docs](https://docs.victoriametrics.com/victorialogs/quicksta
## tip
## [v0.35.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v0.35.0-victorialogs)
* FEATURE: [vlogscli](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/): add ability to live tail query results - see [these docs](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/#live-tailing).
* FEATURE: [vlogscli](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/): add compact output mode for query results. It can be enabled by typing `\c` and then pressing `enter`. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/#output-modes).
* FEATURE: [vlogscli](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/): add `-accountID` and `-projectID` command-line flags for setting `AccountID` and `ProjectID` values when querying the specific [tenants](https://docs.victoriametrics.com/victorialogs/#multitenancy).
## [v0.34.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v0.34.0-victorialogs)
Released at 2024-10-08
* FEATURE: [vlogscli](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/): add ability to display results in `logfmt` mode, single-line and multi-line JSON modes according to [these docs](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/#output-modes).
* FEATURE: [vlogscli](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/): preserve `less` output after the exit from scrolling mode. This should help with reusing previous query results in subsequent queries.
* FEATURE: add [`len` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#len-pipe) for calculating the length for the given [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) value in bytes.
## [v0.33.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v0.33.0-victorialogs)
Released at 2024-10-01
* FEATURE: add interactive command-line tool for querying VictoriaLogs - [`vlogscli`](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/).
* BUGFIX: [`count_uniq` stats function](https://docs.victoriametrics.com/victorialogs/logsql/#count_uniq-stats): do not count field values, which aren't matched by the used filters. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7152).
## [v0.32.1](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v0.32.1-victorialogs)
Released at 2024-09-30

View file

@ -22,16 +22,17 @@ They aren't optimized specifically for logs. This results in the following issue
- High RAM usage
- High disk space usage
- Non-trivial index setup
- Inability to select more than 10K matching log lines in a single query
- Inability to select more than 10K matching log lines in a single query with default configs
VictoriaLogs is optimized specifically for logs. So it provides the following features useful for logs, which are missing in Elasticsearch:
- Easy to set up and operate. There is no need to tune the configuration for optimal performance or to create indexes for various log types.
Just run VictoriaLogs on the most suitable hardware - and it automatically provides the best performance.
Just run VictoriaLogs on the most suitable hardware, ingest logs into it via [supported data ingestion protocols](https://docs.victoriametrics.com/victorialogs/data-ingestion/)
and get the best available performance out of the box.
- Up to 30x less RAM usage than Elasticsearch for the same workload.
- Up to 15x less disk space usage than Elasticsearch for the same amounts of stored logs.
- Ability to work with hundreds of terabytes of logs on a single node.
- Very easy to use query language optimized for typical log analysis tasks - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
- Ability to work efficiently with hundreds of terabytes of logs on a single node.
- Easy to use query language optimized for typical log analysis tasks - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
- Fast full-text search over all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) out of the box.
- Good integration with traditional command-line tools for log analysis. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line).
@ -43,23 +44,25 @@ Both systems support [log stream](https://docs.victoriametrics.com/victorialogs/
VictoriaLogs and Grafana Loki have the following differences:
- Grafana Loki doesn't support high-cardinality log fields (aka labels) such as `user_id`, `trace_id` or `ip`.
It starts consuming huge amounts of RAM and working very slow when logs with high-cardinality fields are ingested into it.
- VictoriaLogs is much easier to setup and operate than Grafana Loki. There is no need in non-trivial tuning -
it works great with default configuration.
- VictoriaLogs performs typical full-text search queries up to 1000x faster than Grafana Loki.
- Grafana Loki doesn't support log fields with many unique values (aka high cardinality labels) such as `user_id`, `trace_id` or `ip`.
It consumes huge amounts of RAM and slows down significantly when logs with high-cardinality fields are ingested into it.
See [these docs](https://grafana.com/docs/loki/latest/best-practices/) for details.
VictoriaLogs supports high-cardinality [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
It automatically indexes all the ingested log fields and allows performing fast full-text search over any field.
VictoriaLogs supports high-cardinality [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
out of the box without any additional configuration. It automatically indexes all the ingested log fields,
so fast full-text search over any log field works without issues.
- Grafana Loki provides a very inconvenient query language - [LogQL](https://grafana.com/docs/loki/latest/logql/).
This query language is hard to use for typical log analysis tasks.
VictoriaLogs provides an easy-to-use query language for typical log analysis tasks - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
- VictoriaLogs performs typical full-text queries up to 1000x faster than Grafana Loki.
- VictoriaLogs needs less storage space than Grafana Loki for the same amounts of logs.
- VictoriaLogs is much easier to setup and operate than Grafana Loki.
- VictoriaLogs usually needs less RAM and storage space than Grafana Loki for the same amounts of logs.
## What is the difference between VictoriaLogs and ClickHouse?
@ -67,14 +70,12 @@ VictoriaLogs and Grafana Loki have the following differences:
ClickHouse is an extremely fast and efficient analytical database. It can be used for logs storage, analysis and processing.
VictoriaLogs is designed solely for logs. VictoriaLogs uses [similar design ideas as ClickHouse](#how-does-victorialogs-work) for achieving high performance.
- ClickHouse is good for logs if you know the set of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) beforehand.
Then you can create a table with a column per each log field and achieve the maximum possible query performance.
- ClickHouse is good for logs if you know the set of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
and the expected query types beforehand. Then you can create a table with a column per each log field, and use the most optimal settings for the table -
sort order, partitioning and indexing - for achieving the maximum possible storage efficiency and query performance.
If the set of log fields isn't known beforehand, or if it can change at any time, then ClickHouse can still be used,
but its' efficiency may suffer significantly depending on how you design the database schema for log storage.
ClickHouse efficiency highly depends on the used database schema. It must be optimized for the particular workload
for achieving high efficiency and query performance.
If the expected log fields or the expected query types aren't known beforehand, or if they may change over time,
then ClickHouse can still be used, but its efficiency may suffer significantly depending on how you design the database schema for log storage.
VictoriaLogs works optimally with any log types out of the box - structured, unstructured and mixed.
It works optimally with any sets of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model),
@ -85,7 +86,7 @@ VictoriaLogs is designed solely for logs. VictoriaLogs uses [similar design idea
VictoriaLogs provides easy to use query language with full-text search specifically optimized
for log analysis - [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/).
LogsQL is usually much easier to use than SQL for typical log analysis tasks, while some
LogsQL is usually easier to use than SQL for typical log analysis tasks, while some
non-trivial analytics may require SQL power.
- VictoriaLogs accepts logs from popular log shippers out of the box - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
@ -97,8 +98,8 @@ VictoriaLogs is designed solely for logs. VictoriaLogs uses [similar design idea
## How does VictoriaLogs work?
VictoriaLogs accepts logs as [JSON entries](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
It then stores every field value into a distinct data block. E.g. values for the same field across multiple log entries
are stored in a single data block. This allow reading data blocks only for the needed fields during querying.
Then it stores every log field into a distinct data block, e.g. values for the same log field across multiple log entries
are stored in a single data block. This allows reading data blocks only for the needed fields during querying.
Data blocks are compressed before being saved to persistent storage. This allows saving disk space and improving query performance
when it is limited by disk read IO bandwidth.
@ -117,9 +118,9 @@ On top of this, VictoriaLogs employs additional optimizations for achieving high
- It uses [bloom filters](https://en.wikipedia.org/wiki/Bloom_filter) for skipping blocks without the given
[word](https://docs.victoriametrics.com/victorialogs/logsql/#word-filter) or [phrase](https://docs.victoriametrics.com/victorialogs/logsql/#phrase-filter).
- It uses custom encoding and compression for fields with different data types.
For example, it encodes IP addresses as 4-byte tuples. Custom fields' encoding reduces data size on disk and improves query performance.
For example, it encodes IP addresses into 4 bytes. Custom fields' encoding reduces data size on disk and improves query performance.
- It physically groups logs for the same [log stream](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)
close to each other. This improves compression ratio, which helps reducing disk space usage. This also improves query performance
close to each other in the storage. This improves compression ratio, which helps reducing disk space usage. This also improves query performance
by skipping blocks for unneeded streams when [stream filter](https://docs.victoriametrics.com/victorialogs/logsql/#stream-filter) is used.
- It maintains a sparse index for [log timestamps](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field),
which allows improving query performance when [time filter](https://docs.victoriametrics.com/victorialogs/logsql/#time-filter) is used, as illustrated below.
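For example (a sketch assuming logs with an `app` [stream field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields)),
the following query benefits from both optimizations - the stream filter skips blocks for other streams,
while the time filter uses the sparse timestamp index:

```logsql
_time:15m {app="nginx"} error
```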

View file

@ -36,7 +36,7 @@ For example, the following query finds all the logs with `error` word:
error
```
See [how to send queries to VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
It is recommended to use [vlogscli](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/) for querying VictoriaLogs.
If the queried [word](#word) clashes with LogsQL keywords, then just wrap it into quotes.
For example, the following query finds all the log messages with `and` [word](#word):
@ -1304,6 +1304,7 @@ LogsQL supports the following pipes:
- [`fields`](#fields-pipe) selects the given set of [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
- [`filter`](#filter-pipe) applies additional [filters](#filters) to results.
- [`format`](#format-pipe) formats output field from input [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
- [`len`](#len-pipe) calculates byte length of the given [log field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) value.
- [`limit`](#limit-pipe) limits the number of selected logs.
- [`math`](#math-pipe) performs mathematical calculations over [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
- [`offset`](#offset-pipe) skips the given number of selected logs.
@ -1758,6 +1759,22 @@ only if `ip` and `host` [fields](https://docs.victoriametrics.com/victorialogs/k
_time:5m | format if (ip:* and host:*) "request from <ip>:<host>" as message
```
### len pipe
The `| len(field) as result` pipe stores the byte length of the given `field` value into the `result` field.
For example, the following query shows the log entry with the maximum byte length of the `_msg` field across
logs for the last 5 minutes:
```logsql
_time:5m | len(_msg) as msg_len | sort by (msg_len desc) | limit 1
```
See also:
- [`sum_len` stats function](#sum_len-stats)
- [`sort` pipe](#sort-pipe)
- [`limit` pipe](#limit-pipe)
### limit pipe
If only a subset of selected logs must be processed, then `| limit N` [pipe](#pipes) can be used, where `N` can contain any [supported integer numeric value](#numeric-values).
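For example, the following query returns at most 10 logs with the `error` [word](#word) over the last 5 minutes:

```logsql
_time:5m error | limit 10
```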
@ -2024,7 +2041,7 @@ The `<filters>` can contain arbitrary [filters](#filters). For example, the foll
only if `user_type` field equals to `admin`:
```logsql
_time:5m | replace if (user_type:=admin) replace ("secret", "***") at password
_time:5m | replace if (user_type:=admin) ("secret", "***") at password
```
### replace_regexp pipe
@ -2076,7 +2093,7 @@ The `<filters>` can contain arbitrary [filters](#filters). For example, the foll
with `***` in the `foo` field only if `user_type` field equals to `admin`:
```logsql
_time:5m | replace_regexp if (user_type:=admin) replace ("password: [^ ]+", "") at foo
_time:5m | replace_regexp if (user_type:=admin) ("password: [^ ]+", "") at foo
```
### sort pipe
@ -3027,10 +3044,10 @@ See also:
### sum_len stats
`sum_len(field1, ..., fieldN)` [stats pipe function](#stats-pipe-functions) calculates the sum of lengths of all the values
`sum_len(field1, ..., fieldN)` [stats pipe function](#stats-pipe-functions) calculates the sum of byte lengths of all the values
for the given [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
For example, the following query returns the sum of lengths of [`_msg` fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field)
For example, the following query returns the sum of byte lengths of [`_msg` fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field)
across all the logs for the last 5 minutes:
```logsql
@ -3040,6 +3057,7 @@ _time:5m | stats sum_len(_msg) messages_len
See also:
- [`count`](#count-stats)
- [`len` pipe](#len-pipe)
### uniq_values stats

View file

@ -33,8 +33,8 @@ Just download archive for the needed Operating system and architecture, unpack i
For example, the following commands download VictoriaLogs archive for Linux/amd64, unpack and run it:
```sh
curl -L -O https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v0.32.1-victorialogs/victoria-logs-linux-amd64-v0.32.1-victorialogs.tar.gz
tar xzf victoria-logs-linux-amd64-v0.32.1-victorialogs.tar.gz
curl -L -O https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v0.35.0-victorialogs/victoria-logs-linux-amd64-v0.35.0-victorialogs.tar.gz
tar xzf victoria-logs-linux-amd64-v0.35.0-victorialogs.tar.gz
./victoria-logs-prod
```
@ -58,7 +58,7 @@ Here is the command to run VictoriaLogs in a Docker container:
```sh
docker run --rm -it -p 9428:9428 -v ./victoria-logs-data:/victoria-logs-data \
docker.io/victoriametrics/victoria-logs:v0.32.1-victorialogs
docker.io/victoriametrics/victoria-logs:v0.35.0-victorialogs
```
See also:

View file

@ -3,25 +3,27 @@ from [VictoriaMetrics](https://github.com/VictoriaMetrics/VictoriaMetrics/).
VictoriaLogs provides the following features:
- VictoriaLogs can accept logs from popular log collectors. See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
- VictoriaLogs is much easier to set up and operate compared to Elasticsearch and Grafana Loki.
- It can accept logs from popular log collectors. See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/).
- It is much easier to set up and operate compared to Elasticsearch and Grafana Loki.
See [these docs](https://docs.victoriametrics.com/victorialogs/quickstart/).
- VictoriaLogs provides easy yet powerful query language with full-text search across
- It provides easy yet powerful query language with full-text search across
all the [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model).
See [LogsQL docs](https://docs.victoriametrics.com/victorialogs/logsql/).
- VictoriaLogs can be seamlessly combined with good old Unix tools for log analysis such as `grep`, `less`, `sort`, `jq`, etc.
- It provides interactive command-line tool for querying VictoriaLogs - [vlogscli](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/).
- It can be seamlessly combined with good old Unix tools for log analysis such as `grep`, `less`, `sort`, `jq`, etc.
See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line) for details.
- VictoriaLogs capacity and performance scales linearly with the available resources (CPU, RAM, disk IO, disk space).
It runs smoothly on both Raspberry PI and a server with hundreds of CPU cores and terabytes of RAM.
- VictoriaLogs can handle up to 30x bigger data volumes than Elasticsearch and Grafana Loki when running on the same hardware.
- VictoriaLogs' capacity and performance scale linearly with the available resources (CPU, RAM, disk IO, disk space).
It runs smoothly on Raspberry PI and on servers with hundreds of CPU cores and terabytes of RAM.
- It can handle up to 30x bigger data volumes than Elasticsearch and Grafana Loki when running on the same hardware.
See [these docs](#benchmarks).
- VictoriaLogs supports fast full-text search over high-cardinality [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
such as `trace_id`, `user_id` and `ip`.
- VictoriaLogs supports multitenancy - see [these docs](#multitenancy).
- VictoriaLogs supports out-of-order logs' ingestion aka backfilling.
- VictoriaLogs supports live tailing for newly ingested logs. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#live-tailing).
- VictoriaLogs supports selecting surrounding logs in front and after the selected logs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#stream_context-pipe).
- VictoriaLogs provides web UI for querying logs - see [these docs](https://docs.victoriametrics.com/victorialogs/querying/#web-ui).
- It provides fast full-text search out of the box for [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
with high cardinality (i.e. a large number of unique values) such as `trace_id`, `user_id` and `ip`.
- It supports multitenancy - see [these docs](#multitenancy).
- It supports out-of-order logs' ingestion aka backfilling.
- It supports live tailing for newly ingested logs. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/#live-tailing).
- It supports selecting surrounding logs before and after the selected logs. See [these docs](https://docs.victoriametrics.com/victorialogs/logsql/#stream_context-pipe).
- It provides web UI for querying logs - see [these docs](https://docs.victoriametrics.com/victorialogs/querying/#web-ui).
- It provides [Grafana plugin for querying logs](https://docs.victoriametrics.com/victorialogs/victorialogs-datasource/).
If you have questions about VictoriaLogs, then read [this FAQ](https://docs.victoriametrics.com/victorialogs/faq/).
Also feel free asking any questions at [VictoriaMetrics community Slack chat](https://victoriametrics.slack.com/),
@ -115,7 +117,7 @@ Set the `-retentionPeriod` to some big value (e.g. `100y` - 100 years) if logs s
For example:
```sh
/path/to/victoria-logs -retention.maxDiskSpaceUsageBytes=10TiB -retention=100y
/path/to/victoria-logs -retention.maxDiskSpaceUsageBytes=10TiB -retentionPeriod=100y
```
## Storage

View file

@ -48,7 +48,7 @@ for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.co
sinks:
vlogs:
type: "loki"
endpoint = "http://localhost:9428/insert/loki/"
endpoint: "http://localhost:9428/insert/loki/"
inputs:
- your_input
compression: gzip
@ -132,7 +132,7 @@ sinks:
_msg_field: message
_time_field: timestamp
_stream_fields: host,container_name
batch]
batch:
max_events: 1000
```

View file

@ -1,10 +1,10 @@
[VictoriaLogs](https://docs.victoriametrics.com/victorialogs/) can be queried with [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/)
via the following ways:
- [Web UI](#web-ui) - a web-based UI for querying logs
- [Visualization in Grafana](#visualization-in-grafana)
- [HTTP API](#http-api)
- [Command-line interface](#command-line)
- [HTTP API](#http-api)
- [Web UI](#web-ui) - a web-based UI for querying logs
- [Grafana plugin](#visualization-in-grafana)
## HTTP API
@ -800,7 +800,10 @@ See also [command line interface](#command-line).
## Command-line
VictoriaLogs integrates well with `curl` and other command-line tools during querying because of the following features:
VictoriaLogs provides the `vlogscli` interactive command-line tool for querying logs. See [these docs](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/).
VictoriaLogs [querying API](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs) integrates well with `curl`
and other Unix command-line tools because of the following features:
- Matching log entries are sent to the response stream as soon as they are found.
This allows forwarding the response stream to arbitrary [Unix pipes](https://en.wikipedia.org/wiki/Pipeline_(Unix))
@ -825,7 +828,7 @@ curl http://localhost:9428/select/logsql/query -d 'query=error'
If the command above returns "never-ending" response, then just press `ctrl+C` at any time in order to cancel the query.
VictoriaLogs notices that the response stream is closed, so it cancels the query and stops consuming CPU, RAM and disk IO for this query.
Then just use `head` command for investigating the returned log messages and narrowing down the query:
Then use `head` command for investigating the returned log messages and narrowing down the query:
```sh
curl http://localhost:9428/select/logsql/query -d 'query=error' | head -10
@ -862,7 +865,8 @@ curl http://localhost:9428/select/logsql/query -d 'query=error AND "cannot open
```
Note that the `query` arg must be properly encoded with [percent encoding](https://en.wikipedia.org/wiki/URL_encoding) when passing it to `curl`
or similar tools.
or similar tools. It is highly recommended to use [vlogscli](https://docs.victoriametrics.com/victorialogs/querying/vlogscli/) -
it automatically performs all the needed encoding.
The `pipe the query to "head" or "less" -> investigate the results -> refine the query` iteration
can be repeated multiple times until the needed log messages are found.

View file

@ -0,0 +1,161 @@
---
weight:
title: vlogscli
disableToc: true
menu:
docs:
parent: "victorialogs-querying"
weight: 1
---
`vlogscli` is an interactive command-line tool for querying [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/).
It has the following features:
- It supports scrolling and searching over query results in the same way as `less` command does - see [these docs](#scrolling-query-results).
- It supports canceling long-running queries at any time via `Ctrl+C`.
- It supports query history - see [these docs](#query-history).
- It supports different formats for query results (JSON, logfmt, compact, etc.) - see [these docs](#output-modes).
- It supports live tailing - see [these docs](#live-tailing).
This tool can be obtained from the release pages linked in the [changelog](https://docs.victoriametrics.com/victorialogs/changelog/)
or from [docker images](https://hub.docker.com/r/victoriametrics/vlogscli/tags):
### Running `vlogscli` from release binary
```sh
curl -L -O https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v0.35.0-victorialogs/vlogscli-linux-amd64-v0.35.0-victorialogs.tar.gz
tar xzf vlogscli-linux-amd64-v0.35.0-victorialogs.tar.gz
./vlogscli-prod
```
### Running `vlogscli` from Docker image
```sh
docker run --rm -it docker.io/victoriametrics/vlogscli:v0.35.0-victorialogs
```
## Configuration
By default `vlogscli` sends queries to [`http://localhost:9428/select/logsql/query`](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs).
The URL to query can be changed via the `-datasource.url` command-line flag. For example, the following command instructs
`vlogscli` to send queries to `https://victoria-logs.some-domain.com/select/logsql/query`:
```sh
./vlogscli -datasource.url='https://victoria-logs.some-domain.com/select/logsql/query'
```
If some HTTP request headers must be passed to the querying API, then set the `-header` command-line flag.
For example, the following command starts `vlogscli`,
which queries the `(AccountID=123, ProjectID=456)` [tenant](https://docs.victoriametrics.com/victorialogs/#multitenancy):
```sh
./vlogscli -header='AccountID: 123' -header='ProjectID: 456'
```
## Multitenancy
`AccountID` and `ProjectID` [values](https://docs.victoriametrics.com/victorialogs/#multitenancy)
can be set via `-accountID` and `-projectID` command-line flags:
```sh
./vlogscli -accountID=123 -projectID=456
```
## Querying
After the start `vlogscli` provides a prompt for writing [LogsQL](https://docs.victoriametrics.com/victorialogs/logsql/) queries.
The query can be multi-line. It is sent to VictoriaLogs as soon as it ends with `;` or when a blank line follows the query.
For example:
```sh
;> _time:1y | count();
executing [_time:1y | stats count(*) as "count(*)"]...; duration: 0.688s
{
"count(*)": "1923019991"
}
```
`vlogscli` shows the actually executed query on the next line after the query input prompt.
This helps debugging issues related to incorrectly written queries.
The next line after the query input prompt also shows the query duration. This helps debugging
and optimizing slow queries.
Query execution can be interrupted at any time by pressing `Ctrl+C`.
Type `q` and then press `Enter` to exit from `vlogscli` (if you want to search for `q` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word),
then just wrap it into quotes: `"q"` or `'q'`).
See also:
- [output modes](#output-modes)
- [query history](#query-history)
- [scrolling query results](#scrolling-query-results)
- [live tailing](#live-tailing)
## Scrolling query results
If the query response exceeds vertical screen space, `vlogscli` pipes the query response to the `less` utility,
so you can scroll the response as needed. This allows executing queries that potentially
may return billions of rows without any problems at both VictoriaLogs and `vlogscli` sides,
thanks to the way `less` interacts with [`/select/logsql/query`](https://docs.victoriametrics.com/victorialogs/querying/#querying-logs):
- `less` reads the response when needed, e.g. when you scroll it down.
`less` pauses reading the response when you stop scrolling. VictoriaLogs pauses processing the query
when `less` stops reading the response, and automatically resumes processing the response
when `less` continues reading it.
- `less` closes the response stream after exit from scroll mode (e.g. by typing `q`).
VictoriaLogs stops query processing and frees up all the associated resources
after the response stream is closed.
See also [`less` docs](https://man7.org/linux/man-pages/man1/less.1.html) and
[command-line integration docs for VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/#command-line).
## Live tailing
`vlogscli` enters live tailing mode when the query is prepended with the `\tail ` command. For example,
the following query shows all the newly ingested logs with `error` [word](https://docs.victoriametrics.com/victorialogs/logsql/#word)
in real time:
```
;> \tail error;
```
By default `vlogscli` derives [the URL for live tailing](https://docs.victoriametrics.com/victorialogs/querying/#live-tailing) from the `-datasource.url` command-line flag
by replacing `/query` with `/tail` at the end of `-datasource.url`. The URL for live tailing can be specified explicitly via `-tail.url` command-line flag.
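For example (a sketch with a hypothetical hostname, following the `/query` -> `/tail` rule above):

```sh
./vlogscli -datasource.url='https://victoria-logs.some-domain.com/select/logsql/query' \
    -tail.url='https://victoria-logs.some-domain.com/select/logsql/tail'
```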
Live tailing can show query results in different formats - see [these docs](#output-modes).
## Query history
`vlogscli` supports query history - press `up` and `down` keys for navigating the history.
By default the history is stored in the `vlogscli-history` file in the directory where `vlogscli` runs,
so the history is available between `vlogscli` runs.
The path to the file can be changed via the `-historyFile` command-line flag, as shown below.
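For example (a sketch; the path is hypothetical):

```sh
./vlogscli -historyFile=$HOME/.vlogscli-history
```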
Quick tip: type some text and then press `Ctrl+R` for searching queries with the given text in the history.
Press `Ctrl+R` multiple times for searching other matching queries in the history.
Press `Enter` when the needed query is found in order to execute it.
Press `Ctrl+C` for exit from the `search history` mode.
See also [other available shortcuts](https://github.com/chzyer/readline/blob/f533ef1caae91a1fcc90875ff9a5a030f0237c6a/doc/shortcut.md).
## Output modes
By default `vlogscli` displays query results as a prettified JSON object with every field on a separate line.
Fields in every JSON object are sorted in alphabetical order. This simplifies locating the needed fields.
`vlogscli` supports the following output modes:
* A single JSON line per every result. Type `\s` and press `enter` for this mode.
* Multi-line JSON per every result. Type `\m` and press `enter` for this mode.
* Compact output. Type `\c` and press `enter` for this mode.
This mode shows field values as is if the response contains a single field
(for example if [`fields _msg` pipe](https://docs.victoriametrics.com/victorialogs/logsql/#fields-pipe) is used)
plus optional [`_time` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#time-field).
* [Logfmt output](https://brandur.org/logfmt). Type `\logfmt` and press `enter` for this mode.
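For example, the following session sketch switches to compact output and queries only the
[`_msg` field](https://docs.victoriametrics.com/victorialogs/keyconcepts/#message-field) (the shown log lines are hypothetical):

```sh
;> \c
;> _time:5m | fields _msg | limit 2;
cannot open file: permission denied
connection reset by peer
```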

View file

@ -11,7 +11,38 @@ aliases:
---
Please find the changelog for VictoriaMetrics Anomaly Detection below.
> **Important note: Users are strongly encouraged to upgrade to `vmanomaly` [v1.9.2](https://hub.docker.com/repository/docker/victoriametrics/vmanomaly/tags?page=1&ordering=name) or newer for optimal performance and accuracy. <br><br> This recommendation is crucial for configurations with a low `infer_every` parameter [in your scheduler](https://docs.victoriametrics.com/anomaly-detection/components/scheduler/#parameters-1), and in scenarios where data exhibits significant high-order seasonality patterns (such as hourly or daily cycles). Previous versions from v1.5.1 to v1.8.0 were identified to contain a critical issue impacting model training, where models were inadvertently trained on limited data subsets, leading to suboptimal fits, affecting the accuracy of anomaly detection. <br><br> Upgrading to v1.9.2 addresses this issue, ensuring proper model training and enhanced reliability. For users utilizing Helm charts, it is recommended to upgrade to version [1.0.0](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-anomaly/CHANGELOG.md#100) or newer.**
## v1.16.3
Released: 2024-10-08
- IMPROVEMENT: Added `tls_cert_file` and `tls_key_file` arguments to support mTLS (mutual TLS) in `vmanomaly` components. This enhancement applies to the following components: [VmReader](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vm-reader), [VmWriter](https://docs.victoriametrics.com/anomaly-detection/components/writer/#vm-writer), and [Monitoring/Push](https://docs.victoriametrics.com/anomaly-detection/components/monitoring/#push-config-parameters). You can also use these arguments in conjunction with `verify_tls` when it is set as a path to a custom CA certificate file.
## v1.16.2
Released: 2024-10-06
- FEATURE: Added support for `multitenant` value in `tenant_id` arg to enable querying across multiple tenants in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/) (option available from [v1.104.0](https://docs.victoriametrics.com/cluster-victoriametrics/#multitenancy-via-labels)):
- Applied when reading input data from `vmselect` via the [VmReader](https://docs.victoriametrics.com/anomaly-detection/components/reader#vm-reader).
- Applied when writing generated results through `vminsert` via the [VmWriter](https://docs.victoriametrics.com/anomaly-detection/components/writer#vm-writer).
- For more details, refer to the `tenant_id` arg description in the documentation of the mentioned components.
- FIX: Resolved an issue with handling an empty `preset` value (e.g., `preset: ""`) that was preventing the [default helm chart](https://github.com/VictoriaMetrics/helm-charts/blob/7f5a2c00b14c2c088d7d8d8bcee7a440a5ff11c6/charts/victoria-metrics-anomaly/values.yaml#L139) from being deployed.
## v1.16.1
Released: 2024-10-02
- FIX: This patch release prevents the service from crashing by rolling back the version of a third-party dependency. Affected releases: [v1.16.0](#v1160).
## v1.16.0
Released: 2024-10-01
> **Note**: A bug was discovered in this release that causes the service to crash. Please use the patch [v1.16.1](#v1161) to resolve this issue.
- FEATURE: Introduced data dumps to a host filesystem for [VmReader](https://docs.victoriametrics.com/anomaly-detection/components/reader#vm-reader). Resource-intensive setups (multiple queries returning many metrics, bigger `fit_window` arg) will have RAM consumption reduced during fit calls.
- IMPROVEMENT: Added a `groupby` argument for logical grouping in [multivariate models](https://docs.victoriametrics.com/anomaly-detection/components/models#multivariate-models). When specified, a separate multivariate model is trained for each unique combination of label values in the `groupby` columns. For example, to perform multivariate anomaly detection on metrics at the machine level without cross-entity interference, you can use `groupby: [host]` or `groupby: [instance]`, ensuring one model per entity being trained (e.g., per host). Please find more details [here](https://docs.victoriametrics.com/anomaly-detection/components/models/#group-by).
- IMPROVEMENT: Improved performance of [VmReader](https://docs.victoriametrics.com/anomaly-detection/components/reader#vm-reader) on multicore instances for reading and data processing.
- IMPROVEMENT: Introduced new CLI argument aliases to enhance compatibility with [Helm charts](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-anomaly/README.md) (i.e. using secrets) and better align with [VictoriaMetrics flags](https://docs.victoriametrics.com/#list-of-command-line-flags):
- `--licenseFile` as an alias for `--license-file`
- `--license.forceOffline` as an alias for `--license-verify-offline`
- `--loggerLevel` as an alias for `--log-level`
- The previous argument format is retained for backward compatibility.
- FIX: The `provide_series` [common argument](https://docs.victoriametrics.com/anomaly-detection/components/models/#provide-series) now correctly filters the written time series in the [IsolationForestMultivariate](https://docs.victoriametrics.com/anomaly-detection/components/models/#isolation-forest-multivariate) model.
## v1.15.9
Released: 2024-08-27
@ -45,7 +76,7 @@ Released: 2024-08-14
## v1.15.2
Released: 2024-08-13
- IMPROVEMENT: Enhanced [online models](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-models) (e.g., [`OnlineQuantileModel`](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-seasonal-quantile)) to automatically create model instances for unseen time series during `infer` calls, eliminating the need to wait for the next `fit` call. This ensures no inferences are skipped **when using online models**.
- FIX: Corrected an issue with the [`OnlineMADModel`](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-mad) to ensure proper functionality when used in combination with [on-disk model dump mode](https://docs.victoriametrics.com/anomaly-detection/faq/#resource-consumption-of-vmanomaly).
- FIX: Corrected an issue with the [`OnlineMADModel`](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-mad) to ensure proper functionality when used in combination with [on-disk model dump mode](https://docs.victoriametrics.com/anomaly-detection/faq/#on-disk-mode).
- FIX: Addressed numerical instability in the [`OnlineQuantileModel`](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-seasonal-quantile) when `use_transform` is set to `True`.
- FIX: Resolved a logging issue that could cause a `RuntimeError: reentrant call inside <_io.BufferedWriter name='<stderr>'>` when a termination event was received.
@ -93,7 +124,7 @@ Released: 2024-07-17
Released: 2024-07-15
- IMPROVEMENT: update `node-exporter` [preset](https://docs.victoriametrics.com/anomaly-detection/presets/#node-exporter) to reduce [false positives](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-1/#false-positive)
- FIX: add `verify_tls` arg for [`push`](https://docs.victoriametrics.com/anomaly-detection/components/monitoring/#push-config-parameters) monitoring section. Also, `verify_tls` is now correctly used in [VmWriter](https://docs.victoriametrics.com/anomaly-detection/components/writer/#vm-writer).
- FIX: now [`AutoTuned`](https://docs.victoriametrics.com/anomaly-detection/components/models/#autotuned) model wrapper works correctly in [on-disk model storage mode](https://docs.victoriametrics.com/anomaly-detection/faq/#resource-consumption-of-vmanomaly).
- FIX: now [`AutoTuned`](https://docs.victoriametrics.com/anomaly-detection/components/models/#autotuned) model wrapper works correctly in [on-disk model storage mode](https://docs.victoriametrics.com/anomaly-detection/faq/#on-disk-mode).
- FIX: now [rolling models](https://docs.victoriametrics.com/anomaly-detection/components/models/#rolling-models), like [`RollingQuantile`](https://docs.victoriametrics.com/anomaly-detection/components/models/#rolling-quantile) are properly handled in [One-off scheduler](https://docs.victoriametrics.com/anomaly-detection/components/scheduler/#oneoff-scheduler), when wrapped in [`AutoTuned`](https://docs.victoriametrics.com/anomaly-detection/components/models/#autotuned)
## v1.13.0

View file

@ -118,9 +118,13 @@ writer:
The configuration above will produce N intervals of full length (`fit_window`=14d + `fit_every`=1h) until the `to_iso` timestamp is reached, running N consecutive `fit` calls to train models. Then these models will be used to produce `M = [fit_every / sampling_frequency]` infer datapoints for the `fit_every` range at the end of each such interval, imitating M consecutive calls of `infer_every` in the `PeriodicScheduler` [config](https://docs.victoriametrics.com/anomaly-detection/components/scheduler#periodic-scheduler). These datapoints will then be written back to the VictoriaMetrics TSDB defined in the `writer` [section](https://docs.victoriametrics.com/anomaly-detection/components/writer#vm-writer) for further visualization (e.g. in VMUI or Grafana)
## Resource consumption of vmanomaly
`vmanomaly` itself is a lightweight service, resource usage is primarily dependent on [scheduling](https://docs.victoriametrics.com/anomaly-detection/components/scheduler) (how often and on what data to fit/infer your models), [# and size of timeseries returned by your queries](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vm-reader), and the complexity of the employed [models](https://docs.victoriametrics.com/anomaly-detection/components/models). Its resource usage is directly related to these factors, making it adaptable to various operational scales.
`vmanomaly` itself is a lightweight service, resource usage is primarily dependent on [scheduling](https://docs.victoriametrics.com/anomaly-detection/components/scheduler) (how often and on what data to fit/infer your models), [# and size of timeseries returned by your queries](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vm-reader), and the complexity of the employed [models](https://docs.victoriametrics.com/anomaly-detection/components/models). Its resource usage is directly related to these factors, making it adaptable to various operational scales. Various optimizations are available to balance between RAM usage, processing speed, and model capacity. These options are described in the sections below.
> **Note**: Starting from [v1.13.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1130), there is a mode to save anomaly detection models on host filesystem after `fit` stage (instead of keeping them in-memory by default). **Resource-intensive setups** (many models, many metrics, bigger [`fit_window` arg](https://docs.victoriametrics.com/anomaly-detection/components/scheduler#periodic-scheduler-config-example)) and/or 3rd-party models that store fit data (like [ProphetModel](https://docs.victoriametrics.com/anomaly-detection/components/models#prophet) or [HoltWinters](https://docs.victoriametrics.com/anomaly-detection/components/models#holt-winters)) will have RAM consumption greatly reduced at a cost of slightly slower `infer` stage. To enable it, you need to set environment variable `VMANOMALY_MODEL_DUMPS_DIR` to desired location. [Helm charts](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-anomaly/README.md) are being updated accordingly ([`StatefulSet`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for persistent storage starting from chart version `1.3.0`).
### On-disk mode
> **Note**: Starting from [v1.13.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1130), there is an option to save anomaly detection models to the host filesystem after the `fit` stage (instead of keeping them in memory by default). This is particularly useful for **resource-intensive setups** (e.g., many models, many metrics, or larger [`fit_window` argument](https://docs.victoriametrics.com/anomaly-detection/components/scheduler#periodic-scheduler-config-example)) and for 3rd-party models that store fit data (such as [ProphetModel](https://docs.victoriametrics.com/anomaly-detection/components/models#prophet) or [HoltWinters](https://docs.victoriametrics.com/anomaly-detection/components/models#holt-winters)). This reduces RAM consumption significantly, though at the cost of slightly slower `infer` stages. To enable this, set the environment variable `VMANOMALY_MODEL_DUMPS_DIR` to the desired location. If using [Helm charts](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-anomaly/README.md), starting from chart version `1.3.0` `.persistentVolume.enabled` should be set to `true` in [values.yaml](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-anomaly/values.yaml).
> **Note**: Starting from [v1.16.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1160), a similar optimization is available for data read from VictoriaMetrics TSDB. To use this, set the environment variable `VMANOMALY_DATA_DUMPS_DIR` to the desired location.
Here's an example of how to set it up in docker-compose using volumes:
```yaml
@ -128,7 +132,7 @@ services:
# ...
vmanomaly:
container_name: vmanomaly
image: victoriametrics/vmanomaly:latest
image: victoriametrics/vmanomaly:v1.16.3
# ...
ports:
- "8490:8490"
@ -138,22 +142,27 @@ services:
- ./vmanomaly_license:/license
# map the host directory to the container directory
- vmanomaly_model_dump_dir:/vmanomaly/tmp/models
- vmanomaly_data_dump_dir:/vmanomaly/tmp/data
environment:
# set the environment variable for the model dump directory
- VMANOMALY_MODEL_DUMPS_DIR=/vmanomaly/tmp/models/
- VMANOMALY_DATA_DUMPS_DIR=/vmanomaly/tmp/data/
platform: "linux/amd64"
command:
- "/config.yaml"
- "--license-file=/license"
- "--licenseFile=/license"
volumes:
# ...
vmanomaly_model_dump_dir: {}
vmanomaly_data_dump_dir: {}
```
> **Note**: Starting from [v1.15.0](https://docs.victoriametrics.com/anomaly-detection/changelog#v1150) with the introduction of [online models](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-models), you can additionally reduce resource consumption (e.g., flatten `fit` stage peaks by querying less data from VictoriaMetrics at once).
For Helm chart users, refer to the `persistentVolume` [section](https://github.com/VictoriaMetrics/helm-charts/blob/7f5a2c00b14c2c088d7d8d8bcee7a440a5ff11c6/charts/victoria-metrics-anomaly/values.yaml#L183) in the [`values.yaml`](https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-anomaly/values.yaml) file. Ensure that the boolean flags `dumpModels` and `dumpData` are set as needed (both are *enabled* by default).
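For illustration, a minimal `values.yaml` fragment; the exact nesting of the flags may differ between chart versions, so treat this as a sketch and consult the linked `values.yaml` for the authoritative layout:
```yaml
# victoria-metrics-anomaly chart, version >= 1.3.0
persistentVolume:
  enabled: true      # provision persistent storage via the chart's StatefulSet
  dumpModels: true   # persist model dumps between restarts
  dumpData: true     # persist data dumps between restarts
```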
### Online models
> **Note**: Starting from [v1.15.0](https://docs.victoriametrics.com/anomaly-detection/changelog#v1150) with the introduction of [online models](https://docs.victoriametrics.com/anomaly-detection/components/models/#online-models), you can additionally reduce resource consumption (e.g., flatten `fit` stage peaks by querying less data from VictoriaMetrics at once).

**Additional Benefits of Switching to Online Models**:
- **Reduced Latency**: Online models update incrementally, which can lead to faster response times for anomaly detection since the model continuously adapts to new data without waiting for a batch `fit`.
- **Scalability**: Handling smaller data chunks at a time reduces memory and computational overhead, making it easier to scale the anomaly detection system.


This will expose metrics at the `http://0.0.0.0:8080/metrics` page.
To use *vmanomaly*, you need to pull the Docker image:
```sh
docker pull victoriametrics/vmanomaly:v1.16.3
```
> **Note**: please check the [CHANGELOG](https://docs.victoriametrics.com/anomaly-detection/changelog/) for the latest available release.
You can put a tag on it for your convenience:
```sh
docker image tag victoriametrics/vmanomaly:v1.16.3 vmanomaly
```
Here is an example of how to run the *vmanomaly* Docker container with a [license file](#licensing):
```sh
docker run -it --net [YOUR_NETWORK] \
-v YOUR_LICENSE_FILE_PATH:/license \
-v YOUR_CONFIG_FILE_PATH:/config.yml \
vmanomaly /config.yml \
--licenseFile=/license
```
### Licensing
The license key can be passed via the following command-line flags:
```
--license STRING License key for VictoriaMetrics Enterprise.
See https://victoriametrics.com/products/enterprise/trial/ to obtain a trial license.
--licenseFile STRING Path to file with license key for VictoriaMetrics Enterprise.
See https://victoriametrics.com/products/enterprise/trial/ to obtain a trial license.
--license.forceOffline
Whether to force offline verification for VictoriaMetrics Enterprise license key,
which has been passed either via -license or via -licenseFile command-line flag.
The issued license key must support offline verification feature.
Contact info@victoriametrics.com if you need offline license verification.
```
To make it easier to monitor the license expiration date, the following metrics are exposed (see the
[Monitoring](#monitoring) section for details on how to scrape them):


The following options are available:
- [To run Docker image](#docker)
- [To run in Kubernetes with Helm charts](#kubernetes-with-helm-charts)
> **Note**: Starting from [v1.13.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1130) there is a mode to keep anomaly detection models on the host filesystem after the `fit` stage (instead of keeping them in memory by default). This may lead to a **noticeable reduction of RAM used** on bigger setups. See instructions [here](https://docs.victoriametrics.com/anomaly-detection/faq/#on-disk-mode).
> **Note**: Starting from [v1.16.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1160), a similar optimization is available for data read from VictoriaMetrics TSDB. See instructions [here](https://docs.victoriametrics.com/anomaly-detection/faq/#on-disk-mode).
### Docker
Below are the steps to get `vmanomaly` up and running inside a Docker container:
1. Pull the Docker image:
```sh
docker pull victoriametrics/vmanomaly:v1.16.3
```
2. (Optional step) tag the `vmanomaly` Docker image:
```sh
docker image tag victoriametrics/vmanomaly:v1.16.3 vmanomaly
```
3. To start the `vmanomaly` Docker container with a *license file*, use the command below.
```sh
export YOUR_LICENSE_FILE_PATH=path/to/license/file
export YOUR_CONFIG_FILE_PATH=path/to/config/file
docker run -it -v $YOUR_LICENSE_FILE_PATH:/license \
-v $YOUR_CONFIG_FILE_PATH:/config.yml \
vmanomaly /config.yml \
--licenseFile=/license
```
If you find `PermissionError: [Errno 13] Permission denied:` in the `vmanomaly` logs, set the user/user group to 1000 in the run command above or in the docker-compose file:
```sh
docker run -it --user 1000:1000 \
-v $YOUR_LICENSE_FILE_PATH:/license \
-v $YOUR_CONFIG_FILE_PATH:/config.yml \
vmanomaly /config.yml \
--licenseFile=/license
```
```yaml
services:
  # ...
  vmanomaly:
    image: victoriametrics/vmanomaly:v1.16.3
    volumes:
      - $YOUR_LICENSE_FILE_PATH:/license
      - $YOUR_CONFIG_FILE_PATH:/config.yml
    command:
      - "/config.yml"
      - "--licenseFile=/license"
  # ...
```


This section covers the `Models` component of VictoriaMetrics Anomaly Detection.
- You can also integrate a **custom model**—see the [custom model guide](#custom-model-guide) for more details.
- Models have **different types and properties**—refer to the [model types section](#model-types) for more information.
> **Note:** Starting from [v1.13.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1130), models can be dumped to disk instead of being stored in RAM. This option **slightly reduces inference speed but significantly decreases RAM usage**, particularly useful for larger setups. For more details, see the [relevant FAQ section](https://docs.victoriametrics.com/anomaly-detection/faq/#on-disk-mode).
> **Note:** Starting from [v1.10.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1100) model section in config supports multiple models via aliasing. <br>Also, `vmanomaly` expects model section to be named `models`. Using old (flat) format with `model` key is deprecated and will be removed in future versions. Having `model` and `models` sections simultaneously in a config will result in only `models` being used:
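For illustration, a minimal sketch of the `models` section with two aliases; the alias names and query names are hypothetical, while `prophet` and `zscore` are short class aliases available since [v1.13.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1130):
```yaml
models:
  prophet_example:  # alias, can be any unique name
    class: 'prophet'
    queries: ['query_alias_1']
  zscore_example:   # a second model, configured independently
    class: 'zscore'
    queries: ['query_alias_2']
```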
```yaml
models:
  some_model_alias:  # placeholder alias
    # ... class and other model params ...
    provide_series: ['anomaly_score'] # only `anomaly_score` metric will be available for writing back to the database
```
> **Note**: If `provide_series` is not specified in model config, the model will produce its default [model-dependent output](#vmanomaly-output). The output can't be less than `['anomaly_score']`. Even if `timestamp` column is omitted, it will be implicitly added to `provide_series` list, as it's required for metrics to be properly written.
### Detection direction
Introduced in [1.13.0](https://docs.victoriametrics.com/anomaly-detection/changelog/#1130), the `detection_direction` arg can help reduce the number of [false positives](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-1/#false-positive) and increase accuracy when domain knowledge suggests identifying anomalies that occur when actual values (`y`) are *above, below, or in both directions* relative to the expected values (`yhat`). Available choices are: `both`, `above_expected`, `below_expected`.
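A minimal sketch of the argument in a model config; the alias and class are illustrative:
```yaml
models:
  prophet_above:
    class: 'prophet'
    # report anomalies only when actual values exceed the expected ones
    detection_direction: above_expected
```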
```yaml
models:
  some_model_alias:  # placeholder alias
    # ... class and other model params ...
    queries: ['normal_behavior'] # use the default where it's not needed
```
### Group By
> **Note**: The `groupby` argument works only in combination with [multivariate models](#multivariate-models).
Introduced in [v1.16.0](https://docs.victoriametrics.com/anomaly-detection/changelog#v1160), the `groupby` argument (`list[string]`) enables logical grouping within [multivariate models](#multivariate-models). When specified, **a separate multivariate model is trained for each unique combination of label values present in the `groupby` columns**.
For example, to perform multivariate anomaly detection at the machine level while avoiding interference between different entities, you can set `groupby: [host]` or `groupby: [instance]`. This ensures that a **separate multivariate** model is trained for each individual entity (e.g., per host). Below is a simplified example illustrating how to track multivariate anomalies using CPU, RAM, and network data for each host.
```yaml
# other config sections ...
reader:
  # other reader params ...
  # assume there are M unique hosts identified by the `host` label
  queries:
    # return one timeseries for each CPU mode per host, total = N*M timeseries
    cpu: sum(rate(node_cpu_seconds_total[5m])) by (host, mode)
    # return one timeseries per host, total = 1*M timeseries
    ram: |
      (
        (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes)
        / node_memory_MemTotal_bytes
      ) * 100
    # return one timeseries per host for both network receive and transmit data, total = 1*M timeseries
    network: |
      sum(rate(node_network_receive_bytes_total[5m])) by (host)
      + sum(rate(node_network_transmit_bytes_total[5m])) by (host)

models:
  iforest: # alias for the model
    class: isolation_forest_multivariate
    contamination: 0.01
    # the multivariate model can be trained on 2+ timeseries returned by 1+ queries
    queries: [cpu, ram, network]
    # train a distinct multivariate model for each unique value found in the `host` label
    # a single multivariate model will be trained on (N + 1 + 1) timeseries, total = M models
    groupby: [host]
```
## Model types
There are **2 model types** supported in `vmanomaly`, resulting in **4 possible combinations**:
For a multivariate type, **one shared model** is fit/used for inference on **all** the time series simultaneously.
For example, if you have some **multivariate** model using 3 [MetricsQL queries](https://docs.victoriametrics.com/metricsql/), each returning 5 time series, there will be one shared model created in total. Once fit, this model will expect **exactly 15 time series with exactly the same labelsets as input**. This model will produce **one shared [output](#vmanomaly-output)**.
> **Note:** Starting from [v1.16.0](https://docs.victoriametrics.com/anomaly-detection/changelog#v1160), N models — one for each unique combination of label values specified in the `groupby` [common argument](#group-by) — can be trained. This allows for context separation (e.g., one model per host, region, or other relevant grouping label), leading to improved accuracy and faster training. See an example [here](#group-by).
If, during inference, you get a **different number of series**, or some series have a **new labelset** (not present in any of the fitted models), the inference will be skipped until a model is trained for that particular labelset during the upcoming re-fit step.
**Implications:** Multivariate models are the go-to default when your queries return a **fixed** number of **individual** time series (say, some aggregations), to be used for adding cross-series (and cross-query) context, which is useful for catching [collective anomalies](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-2/#collective-anomalies) or [novelties](https://victoriametrics.com/blog/victoriametrics-anomaly-detection-handbook-chapter-2/#novelties) (expanded to a multi-input scenario). For example, you may set it up for anomaly detection of CPU usage in different modes (`idle`, `user`, `system`, etc.) and use its cross-dependencies to detect **unseen (in fit data)** behavior.
- The ability to distribute the data load evenly between the initial `fit` and subsequent `infer` calls. For example, an online model can be fit on 10 `1m` datapoints during the initial `fit` stage once per month and then be gradually updated on the same 10 `1m` datapoints during each `infer` call every 10 minutes (see the scheduler sketch after this list).
- The model can adapt to new data patterns (gradually updating itself during each `infer` call) without needing to wait for the next `fit` call and one big re-training.
- Slightly faster training/updating times compared to similar offline models.
- Please refer to additional benefits for data-intensive setups in the corresponding [FAQ](https://docs.victoriametrics.com/anomaly-detection/faq#online-models) section.
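As a sketch, a [periodic scheduler](https://docs.victoriametrics.com/anomaly-detection/components/scheduler/#periodic-scheduler) config matching the example above; the exact durations are illustrative:
```yaml
schedulers:
  periodic_online:
    class: 'periodic'
    fit_window: '10m'   # initial fit on 10 1m datapoints
    fit_every: '30d'    # full re-fit can happen rarely
    infer_every: '10m'  # online models also update on each infer call
```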
**Limitations**:
> **Note**: There are some expected limitations of Autotune mode:
> - It can't be used with your [custom model](#custom-model-guide).
> - It can't be applied to itself (like `tuned_class_name: 'model.auto.AutoTunedModel'`)
> - `AutoTunedModel` can't be used on [rolling models](https://docs.victoriametrics.com/anomaly-detection/components/models/#rolling-models) like [`RollingQuantile`](https://docs.victoriametrics.com/anomaly-detection/components/models/#rolling-quantile) in combination with [on-disk model storage mode](https://docs.victoriametrics.com/anomaly-detection/faq/#on-disk-mode), as rolling models exist only during `infer` calls and aren't persisted either in RAM or on disk.
### [Prophet](https://facebook.github.io/prophet/)
Let's pull the Docker image for `vmanomaly`:
```sh
docker pull victoriametrics/vmanomaly:v1.16.3
```
Now we can run the Docker container, mounting both the config and the custom model file as volumes:
```sh
docker run -it \
-v $(PWD)/license:/license \
-v $(PWD)/custom_model.py:/vmanomaly/model/custom.py \
-v $(PWD)/custom.yaml:/config.yaml \
victoriametrics/vmanomaly:v1.16.3 /config.yaml \
--licenseFile=/license
```
Please find more detailed instructions (license, etc.) [here](https://docs.victoriametrics.com/anomaly-detection/overview/#run-vmanomaly-docker-container).


<tr>
<td>
`verify_tls`
</td>
<td>
`false`
</td>
<td>
Verify TLS certificate. If `False`, it will not verify the TLS certificate.
If `True`, it will verify the certificate using the system's CA store.
If a path to a CA bundle file (like `ca.crt`), it will verify the certificate using the provided CA bundle.
</td>
</tr>
<tr>
<td>
`tls_cert_file`
</td>
<td>
`path/to/cert.crt`
</td>
<td>
Path to a file with the client certificate, e.g. `client.crt`. Available since [v1.16.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1163).
</td>
</tr>
<tr>
<td>
`tls_key_file`
</td>
<td>
`path/to/key.crt`
</td>
<td>
Path to a file with the client certificate key, e.g. `client.key`. Available since [v1.16.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1163).
</td>
</tr>
```yaml
monitoring:
  push:
    # ... other push params ...
    extra_labels:
      test: "test-1"
```
## mTLS protection
Starting from [v1.16.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1163), `vmanomaly` components such as [VmWriter](https://docs.victoriametrics.com/anomaly-detection/components/writer/#vm-writer) support [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) to ensure secure communication with [VictoriaMetrics Enterprise, configured with mTLS](https://docs.victoriametrics.com/#mtls-protection).
For detailed guidance on configuring mTLS parameters such as `verify_tls`, `tls_cert_file`, and `tls_key_file`, please refer to the [mTLS protection section](https://docs.victoriametrics.com/anomaly-detection/components/reader/#mtls-protection) in the [Reader](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vm-reader) documentation. The configuration principles apply consistently across all these `vmanomaly` components.
## Metrics generated by vmanomaly
<table class="params">


<tr>
<td>
`tenant_id`
</td>
<td>
`0:0`, `multitenant`
</td>
<td>
For VictoriaMetrics Cluster version only, tenants are identified by `accountID` or `accountID:projectID`. Starting from [v1.16.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1162), the `multitenant` [endpoint](https://docs.victoriametrics.com/cluster-victoriametrics/?highlight=reads#multitenancy-via-labels) is supported for executing queries over multiple [tenants](https://docs.victoriametrics.com/cluster-victoriametrics/#multitenancy). See the VictoriaMetrics Cluster [multitenancy docs](https://docs.victoriametrics.com/cluster-victoriametrics/#multitenancy).
</td>
</tr>
<tr>
<td>
`verify_tls`
</td>
<td>
`false`
</td>
<td>
Verify TLS certificate. If `False`, it will not verify the TLS certificate.
If `True`, it will verify the certificate using the system's CA store.
If a path to a CA bundle file (like `ca.crt`), it will verify the certificate using the provided CA bundle.
</td>
</tr>
<tr>
<td>
`tls_cert_file`
</td>
<td>
`path/to/cert.crt`
</td>
<td>
Path to a file with the client certificate, e.g. `client.crt`. Available since [v1.16.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1163).
</td>
</tr>
<tr>
<td>
`tls_key_file`
</td>
<td>
`path/to/key.crt`
</td>
<td>
Path to a file with the client certificate key, e.g. `client.key`. Available since [v1.16.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1163).
</td>
</tr>
```yaml
reader:
  # ... other reader params ...
  latency_offset: '1ms'
```
### mTLS protection
As of [v1.16.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1163), `vmanomaly` supports [mutual TLS (mTLS)](https://en.wikipedia.org/wiki/Mutual_authentication) for secure communication across its components, including [VmReader](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vm-reader), [VmWriter](https://docs.victoriametrics.com/anomaly-detection/components/writer/#vm-writer), and [Monitoring/Push](https://docs.victoriametrics.com/anomaly-detection/components/monitoring/#push-config-parameters). This allows for mutual authentication between the client and server when querying or writing data to [VictoriaMetrics Enterprise, configured for mTLS](https://docs.victoriametrics.com/#mtls-protection).
mTLS ensures that both the client and server verify each other's identity using certificates, which enhances security by preventing unauthorized access.
To configure mTLS, the following parameters can be set in the [config](#config-parameters):
- `verify_tls`: If set to a string, it functions like the `-mtlsCAFile` command-line argument of VictoriaMetrics, specifying the CA bundle to use. Set to `True` to use the system's default certificate store.
- `tls_cert_file`: Specifies the path to the client certificate, analogous to the `-tlsCertFile` argument of VictoriaMetrics.
- `tls_key_file`: Specifies the path to the client certificate key, similar to the `-tlsKeyFile` argument of VictoriaMetrics.
These options allow you to securely interact with mTLS-enabled VictoriaMetrics endpoints.
Example configuration to enable mTLS with custom certificates:
```yaml
reader:
  class: "vm"
  datasource_url: "https://your-victoriametrics-instance-with-mtls"
  # tenant_id: "0:0"  # uncomment and set for cluster version
  queries:
    vm_blocks_example:
      expr: 'avg(rate(vm_blocks[5m]))'
      step: 30s
  sampling_period: 30s
  verify_tls: "path/to/ca.crt"  # path to CA bundle for TLS verification
  tls_cert_file: "path/to/client.crt"  # path to the client certificate
  tls_key_file: "path/to/client.key"  # path to the client certificate key
  # additional reader parameters ...

# other config sections, like models, schedulers, writer, ...
```
### Healthcheck metrics
`VmReader` exposes [several healthcheck metrics](https://docs.victoriametrics.com/anomaly-detection/components/monitoring/#reader-behaviour-metrics).


<tr>
<td>
`tenant_id`
</td>
<td>
`0:0`, `multitenant` (starting from [v1.16.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1162))
</td>
<td>
For VictoriaMetrics Cluster version only, tenants are identified by `accountID` or `accountID:projectID`. Starting from [v1.16.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1162), the `multitenant` [endpoint](https://docs.victoriametrics.com/cluster-victoriametrics/?highlight=writes#multitenancy-via-labels) is supported for writing data to multiple [tenants](https://docs.victoriametrics.com/cluster-victoriametrics/#multitenancy). See the VictoriaMetrics Cluster [multitenancy docs](https://docs.victoriametrics.com/cluster-victoriametrics/#multitenancy).
</td>
</tr>
<!-- Additional rows for metric_format -->
<tr>
<td>
`verify_tls`
</td>
<td>
`false`
</td>
<td>
Verify TLS certificate. If `False`, it will not verify the TLS certificate.
If `True`, it will verify the certificate using the system's CA store.
If a path to a CA bundle file (like `ca.crt`), it will verify the certificate using the provided CA bundle.
</td>
</tr>
<tr>
<td>
`tls_cert_file`
</td>
<td>
`path/to/cert.crt`
</td>
<td>
Path to a file with the client certificate, e.g. `client.crt`. Available since [v1.16.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1163).
</td>
</tr>
<tr>
<td>
`tls_key_file`
</td>
<td>
`path/to/key.crt`
</td>
<td>
Path to a file with the client certificate key, e.g. `client.key`. Available since [v1.16.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1163).
</td>
</tr>
```yaml
writer:
  # ... other writer params ...
  password: "bar"
```
### Multitenancy support
> This feature applies to the VictoriaMetrics Cluster version only. Tenants are identified by either `accountID` or `accountID:projectID`. Starting with [v1.16.2](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1162), the `multitenant` [endpoint](https://docs.victoriametrics.com/cluster-victoriametrics/?highlight=writes#multitenancy-via-labels) is supported for writing data across multiple [tenants](https://docs.victoriametrics.com/cluster-victoriametrics/#multitenancy). For more details, refer to the VictoriaMetrics Cluster [multitenancy documentation](https://docs.victoriametrics.com/cluster-victoriametrics/#multitenancy).
Please note the different behaviors depending on the `tenant_id` value (a minimal config sketch for case 2 follows this list):
1. **When `writer.tenant_id != 'multitenant'` (e.g., `"0:0"`) and `reader.tenant_id != 'multitenant'` (can be different but valid, like `"0:1"`)**:
- The `vm_account_id` label is **not created** in the reader, **not persisted** to the writer, and is **not expected** in the output.
- **Result**: Data is written successfully with no logs or errors.
2. **When `writer.tenant_id = 'multitenant'` and `vm_account_id` is present in the label set**:
- This typically happens when `reader.tenant_id` is also set to `multitenant`, meaning the `vm_account_id` label is stored in the results returned from the queries.
- **Result**: Everything functions as expected. Data is written successfully with no logs or errors.
3. **When `writer.tenant_id = 'multitenant'` but `vm_account_id` is missing** (e.g., due to aggregation in the reader or missing `keep_metric_names` in the query):
- **Result**: The data is still written to `"0:0"`, but a warning is raised:
```
The label `vm_account_id` was not found in the label set of {query_result.key},
but tenant_id='multitenant' is set in writer. The data will be written to the default tenant 0:0.
Ensure that the query retains the necessary multi-tenant labels,
or adjust the aggregation settings to preserve `vm_account_id` key in the label set.
```
4. **When `writer.tenant_id != 'multitenant'` (e.g., `"0:0"`) and `vm_account_id` exists in the label set**:
- **Result**: Writing is allowed, but a warning is raised:
```
The label set for the metric {query_result.key} contains multi-tenancy labels,
but the write endpoint is configured for single-tenant mode (tenant_id != 'multitenant').
Either adjust the query in the reader to avoid multi-tenancy labels
or ensure that reserved key `vm_account_id` is not explicitly set for single-tenant environments.
```
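Below is a minimal config sketch matching case 2; the URLs and the query are placeholders, and the `by (vm_account_id)` aggregation is what keeps the multi-tenancy label in the results:
```yaml
reader:
  class: "vm"
  datasource_url: "http://vmselect:8481/"
  tenant_id: "multitenant"  # query results keep the vm_account_id label
  queries:
    example: 'sum(rate(vm_http_requests_total[5m])) by (vm_account_id)'

writer:
  class: "vm"
  datasource_url: "http://vminsert:8480/"
  tenant_id: "multitenant"  # each series is routed to the tenant from its labels
```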
### mTLS protection
Starting from [v1.16.3](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1163), `vmanomaly` components such as [VmWriter](https://docs.victoriametrics.com/anomaly-detection/components/writer/#vm-writer) support [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) to ensure secure communication with [VictoriaMetrics Enterprise, configured with mTLS](https://docs.victoriametrics.com/#mtls-protection).
For detailed guidance on configuring mTLS parameters such as `verify_tls`, `tls_cert_file`, and `tls_key_file`, please refer to the [mTLS protection section](https://docs.victoriametrics.com/anomaly-detection/components/reader/#mtls-protection) in the [Reader](https://docs.victoriametrics.com/anomaly-detection/components/reader/#vm-reader) documentation. The configuration principles apply consistently across all these `vmanomaly` components.
### Healthcheck metrics
`VmWriter` exposes [several healthcheck metrics](https://docs.victoriametrics.com/anomaly-detection/components/monitoring/#writer-behaviour-metrics).


```yaml
services:
  # ...
  victoriametrics:
    # ...
    restart: always
  vmanomaly:
    container_name: vmanomaly
    image: victoriametrics/vmanomaly:v1.16.3
    depends_on:
      - "victoriametrics"
    ports:
      # ...
    platform: "linux/amd64"
    command:
      - "/config.yaml"
      - "--licenseFile=/license"
  alertmanager:
    container_name: alertmanager
    image: prom/alertmanager:v0.25.0
```


See also [LTS releases](https://docs.victoriametrics.com/lts-releases/).
## tip
* FEATURE: add Darwin binaries for [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/) to the release flow. The binaries will be available in the new release.
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent/): allow using HTTP/2 client for Kubernetes service discovery if `-promscrape.kubernetes.useHTTP2Client` cmd-line flag is set. This could help to reduce the amount of opened connections to the Kubernetes API server. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5971) for the details.
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert/): `-rule` cmd-line flag now supports multi-document YAML files. This could be useful when rules are retrieved via HTTP URL where multiple rule files were merged together in one response. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6753). Thanks to @Irene-123 for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6995).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent/) and [Single-node VictoriaMetrics](https://docs.victoriametrics.com/): add support of [exponential histograms](https://opentelemetry.io/docs/specs/otel/metrics/data-model/#exponentialhistogram) ingested via [OpenTelemetry protocol for metrics](https://docs.victoriametrics.com/#sending-data-via-opentelemetry). Such histograms will be automatically converted to [VictoriaMetrics histogram format](https://valyala.medium.com/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350). See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6354).
## [v1.104.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0)
Released at 2024-10-02
**Update note 1: `*.passwordFile` and similar flags now trim trailing whitespaces at the end of the content. If the authorization check is performed with `*.passwordFile` content, make sure to update authorization settings to not include trailing whitespaces before the upgrade. For [operator](https://docs.victoriametrics.com/operator/)-managed installations, make sure to update the operator version to [v0.48.*](https://docs.victoriametrics.com/operator/changelog/#v0480---25-sep-2024). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6986) for the details. This change reverts behavior introduced in the [v1.102.0-rc2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.102.0-rc2) release**
* SECURITY: upgrade Go builder from Go1.23.0 to Go1.23.1. See the list of issues addressed in [Go1.23.1](https://github.com/golang/go/issues?q=milestone%3AGo1.23.1+label%3ACherryPickApproved).
* SECURITY: upgrade base docker image (Alpine) from 3.20.2 to 3.20.3. See [alpine 3.20.3 release notes](https://alpinelinux.org/posts/Alpine-3.17.10-3.18.9-3.19.4-3.20.3-released.html).
* FEATURE: `vmselect` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): add support for querying over multiple tenants. See [these docs](https://docs.victoriametrics.com/cluster-victoriametrics/#multitenancy-via-labels) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1434) for details.
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent/) and [Single-node VictoriaMetrics](https://docs.victoriametrics.com/): hide jobs that contain only healthy targets when `show_only_unhealthy` filter is enabled. Before, jobs without unhealthy targets were still displayed on the page. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3536).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add service discovery support for [OVH Cloud VPS](https://www.ovhcloud.com/en/vps/) and [OVH Cloud dedicated server](https://ovhcloud.com/en/bare-metal/). See [these docs](https://docs.victoriametrics.com/sd_configs/#ovhcloud_sd_configs) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6071).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert): bump default values for sending data to `remoteWrite.url`: `remoteWrite.maxQueueSize` from `100_000` to `1_000_000`, `remoteWrite.maxBatchSize` from `1_000` to `10_000`, `remoteWrite.concurrency` from `1` to `4`. The new settings should improve remote write performance of vmalert with default settings.
* FEATURE: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): introduce the `-search.maxDeleteSeries` command line flag to limit the number of series that can be deleted by a single `/api/v1/admin/tsdb/delete_series` call. Previously, one could delete any number of series and if this number was big (millions) the deletion could result in OOM. See this [issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7027) for details.
* FEATURE: [vmbackupmanager](https://docs.victoriametrics.com/vmbackupmanager/): add API for creating/scheduling backups. See [documentation](https://docs.victoriametrics.com/vmbackupmanager/#api-methods)
* FEATURE: all VictoriaMetrics [enterprise](https://docs.victoriametrics.com/enterprise/) components: add support of hot-reload for license key supplied by `-licenseFile` command-line flag.
* FEATURE: [vmgateway](https://docs.victoriametrics.com/vmgateway/): allow disabling `Bearer` prefix enforcement for authentication header. This is useful for cases when identity token is used instead of access token.
* FEATURE: [vmgateway](https://docs.victoriametrics.com/vmgateway/): support parsing `vm_access` claims in string format. This is useful for cases when identity provider does not support mapping claims to JSON format.
* FEATURE: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): add new metrics for data ingestion: `vm_rows_received_by_storage_total`, `vm_rows_ignored_total{reason="nan_value"}`, `vm_rows_ignored_total{reason="invalid_raw_metric_name"}`, `vm_rows_ignored_total{reason="hourly_limit_exceeded"}`, `vm_rows_ignored_total{reason="daily_limit_exceeded"}`. See this [PR](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6663) for details.
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): respect `Retry-After` header when making retries for pushing the data to remote destination via remote write protocol. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6097).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): change request method for `/query_range` and `/query` calls from `GET` to `POST`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6288).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): add link to vmalert when proxy is enabled. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5924).
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): keep selected columns in table view on page reloads. Before, selected columns were reset on each update. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7016).
* FEATURE: [dashboards](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/dashboards) for VM single-node, cluster, vmalert, vmagent, VictoriaLogs: add `Go scheduling latency` panel to show the 99th quantile of Go goroutines scheduling. This panel should help identifying insufficient CPU resources for the service. It is especially useful if CPU gets throttled, which now should be visible on this panel.
* FEATURE: [dashboards](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/dashboards): update [dashboard for VictoriaMetrics k8s operator](https://grafana.com/grafana/dashboards/17869-victoriametrics-operator/): replace deprecated TimeSeries panels, add latency panels for rest client and Golang scheduler, add overview panels. The dashboard is now also available for [VictoriaMetrics Grafana datasource](https://github.com/VictoriaMetrics/victoriametrics-datasource).
* FEATURE: [alerts](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts-health.yml): add alerting rule to track the Go scheduling latency for goroutines. It should notify users if VM component doesn't have enough CPU to run or gets throttled.
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent/): fix service discovery of Azure Virtual Machines when the response contains `nextLink` in `Host:Port` format. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6912).
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent/): properly consume messages [from kafka](https://docs.victoriametrics.com/vmagent/#kafka-integration). Previously vmagent could skip some messages during start-up.
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/metricsql/): consistently return the last non-`NaN` value from [`range_last`](https://docs.victoriametrics.com/metricsql/#range_last) function across all the returned data points. Previously `NaN` data points weren't replaced with the last non-`NaN` value.
* BUGFIX: all VictoriaMetrics components: increase default value of `-loggerMaxArgLen` cmd-line flag from 1000 to 5000. This should improve visibility on errors produced by very long queries.
* BUGFIX: all VictoriaMetrics components: trim trailing spaces when reading content from `*.passwordFile` and similar flags. It reverts changes introduced at [v1.102.0-rc2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.102.0-rc2) release. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6986) for details.
## [v1.103.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.103.0)
Released at 2024-08-28
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent/): reduce memory usage when scraping targets with big response body. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6759).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent/) and [Single-node VictoriaMetrics](https://docs.victoriametrics.com/): stop adding default port 80/443 for scrape URLs without a port. The value in `instance` label will still carry port for backward compatibility. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6792).
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add flags `-remoteWrite.retryMinInterval` and `-remoteWrite.retryMaxTime` for adjusting remote-write requests retry policy. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5486). Thanks to @yorik for the [pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6289).
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert/): add command-line flag `-notifier.headers` to allow configuring additional headers for all requests sent to the corresponding `-notifier.url`.
* FEATURE: [vmalert-tool](https://docs.victoriametrics.com/vmalert-tool/): add `-external.label` and `-external.url` command-line flags, in the same way as these flags are supported by vmalert. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6735).
* FEATURE: [vmbackup](https://docs.victoriametrics.com/vmbackup/), [vmrestore](https://docs.victoriametrics.com/vmrestore/), [vmbackupmanager](https://docs.victoriametrics.com/vmbackupmanager/): use exponential backoff for retries when uploading or downloading data from S3. This should reduce the number of failed uploads and downloads when S3 is temporarily unavailable. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6732).
* FEATURE: `vmselect` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): add command-line flag `-search.inmemoryBufSizeBytes` for configuring size of in-memory buffers used by vmselect during processing of vmstorage responses. A new summary metric `vm_tmp_blocks_inmemory_file_size_bytes` is exposed to show the size of the buffer during requests processing. See this [pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6851) for details. Thanks to @tydhot for implementation.
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent): fixes `proxy_url` authorization for scrape targets. Previously proxy authorization configuration was ignored for `https` targets. See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6771) issue for details.
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent/): fix service discovery of Azure Virtual Machines when the response contains `nextLink`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6784).
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert): respect HTTP headers defined in [notifier configuration file](https://docs.victoriametrics.com/vmalert/#notifier-configuration-file) for each request to notifiers. Previously, this param was ignored by mistake.
* BUGFIX: [stream aggregation](https://docs.victoriametrics.com/stream-aggregation/): correctly apply `-streamAggr.dropInputLabels` when global stream deduplication is enabled without `-streamAggr.config`. Previously, `-remoteWrite.streamAggr.dropInputLabels` was used instead.
* BUGFIX: [stream aggregation](https://docs.victoriametrics.com/stream-aggregation/): fix command-line flag `-remoteWrite.streamAggr.ignoreFirstIntervals` to accept multiple values and be applied per each corresponding `-remoteWrite.url`. Previously, this flag only could have been used globally for all URLs.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): fix metric names registering in the per-day index for new dates for existing time series when making calls to `/tags/tagSeries` and `/tags/tagMultiSeries` handlers of [Graphite API](https://docs.victoriametrics.com/#graphite-api-usage). See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6872/) for details.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): properly ignore deleted metrics when applying [retention filters](https://docs.victoriametrics.com/#retention-filters) and [downsampling](https://docs.victoriametrics.com/#downsampling). See [this](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6891) issue for the details.
## [v1.102.4](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.102.4)
Released at 2024-10-02
**v1.102.x is a line of [LTS releases](https://docs.victoriametrics.com/lts-releases/). It contains important up-to-date bugfixes for [VictoriaMetrics enterprise](https://docs.victoriametrics.com/enterprise.html).
All these fixes are also included in [the latest community release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest).
The v1.102.x line will be supported for at least 12 months since [v1.102.0](https://docs.victoriametrics.com/changelog/#v11020) release**
* BUGFIX: [stream aggregation](https://docs.victoriametrics.com/stream-aggregation/): fix `-remoteWrite.streamAggr.dropInputLabels` labels parsing. Now, this flag allows specifying a list of labels to drop (by using the '^^' separator, e.g. `dropInputLabels='replica^^az,replica'`) per each corresponding `remoteWrite.url`. Before, `-remoteWrite.streamAggr.dropInputLabels` labels were incorrectly applied to all configured `remoteWrite.url`s. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6780) for the details.
* BUGFIX: [stream aggregation](https://docs.victoriametrics.com/stream-aggregation/): fix duplicated aggregation results if there are multiple concurrent writing samples with the same label. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7118).
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/metricsql/): consistently return the first non-`NaN` value from [`range_first`](https://docs.victoriametrics.com/metricsql/#range_first) function across all the returned data points. Previously `NaN` data points weren't replaced with the first non-`NaN` value.
## [v1.102.3](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.102.3)
Released at 2024-09-23
* SECURITY: upgrade Go builder from Go1.22.5 to Go1.23.1. See the list of issues addressed in [Go1.23.1](https://github.com/golang/go/issues?q=milestone%3AGo1.23.1+label%3ACherryPickApproved).
* SECURITY: upgrade base docker image (Alpine) from 3.20.2 to 3.20.3. See [alpine 3.20.3 release notes](https://alpinelinux.org/posts/Alpine-3.17.10-3.18.9-3.19.4-3.20.3-released.html).
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent/): fix service discovery of Azure Virtual Machines when the response contains `nextLink` in `Host:Port` format. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6912).
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent/): properly consume messages [from kafka](https://docs.victoriametrics.com/vmagent/#kafka-integration). Previously vmagent could skip some messages during start-up.
* BUGFIX: [stream aggregation](https://docs.victoriametrics.com/stream-aggregation/): perform deduplication for all received data when the `-streamAggr.dedupInterval` or `-remoteWrite.streamAggr.dedupInterval` command-line flags are set. Previously, if `-remoteWrite.streamAggr.config` or `-streamAggr.config` was set, only series that matched the aggregation config were deduplicated. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6711#issuecomment-2288361213) for details.
* BUGFIX: [vmagent dashboard](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/dashboards/vmagent.json): fix legend captions for stream aggregation related panels. Before they were displaying wrong label names.
* BUGFIX: [vmgateway](https://docs.victoriametrics.com/vmgateway/): add missing `datadog`, `newrelic`, `opentelemetry` and `pushgateway` routes to the `JWT` authorization routes. Allows prefixed (`prometheus/graphite`) routes for query requests.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): properly cache empty list of matching time series for the given [labels filter](https://docs.victoriametrics.com/keyconcepts/#filtering). This type of caching was broken since [v1.97.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.0), which could result in the increased CPU usage when performing queries, which match zero time series. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7009).
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): fix start-up crash on Windows OS. See this [issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6973) for details.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): fix metric `vm_object_references{type="indexdb"}`. Previously, it was overcounted.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): properly ingest stale NaN samples. Previously, such samples could be dropped if the series didn't exist on the storage node. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5069) for details.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): properly track `vm_missing_tsids_for_metric_id_total` metric. See this [issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6931) for details.
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert): do not send notifications without labels to Alertmanager. Such notifications are rejected by Alertmanager anyway. Before, vmalert could send alert notifications even if no label-value pairs left after applying `alert_relabel_configs` from [notifier config](https://docs.victoriametrics.com/vmalert/#notifier-configuration-file).
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert/): properly update value of variable `$activeAt` in rules annotation during replay mode. Before, `$activeAt` could have provided incorrect values during replay.
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/metricsql/): properly handle `c1 AND c2` and `c1 OR c2` queries for constants `c1` and `c2`. Previously such queries could return unexpected results. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6637).
* BUGFIX: all VictoriaMetrics components: increase default value of `-loggerMaxArgLen` cmd-line flag from 1000 to 5000. This should improve visibility on errors produced by very long queries.
## [v1.102.2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.102.2)
Released at 2024-08-28
* BUGFIX: [dashboards](https://grafana.com/orgs/victoriametrics): update `Storage full ETA` panels for Single-node and Cluster dashboards to prevent them from showing negative or blank results caused by increase of deduplicated samples. Deduplicated samples were part of the expression to provide a better estimate for disk usage, but due to sporadic nature of [deduplication](https://docs.victoriametrics.com/#deduplication) in VictoriaMetrics it rather produced skewed results. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5747).
* BUGFIX: [vmalert](https://docs.victoriametrics.com/#vmalert): reduce memory usage for ENT version of vmalert for configurations with high number of groups with enabled multitenancy.
## [v1.97.9](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.9)
Released at 2024-10-02
**v1.97.x is a line of [LTS releases](https://docs.victoriametrics.com/lts-releases/). It contains important up-to-date bugfixes for [VictoriaMetrics enterprise](https://docs.victoriametrics.com/enterprise.html).
All these fixes are also included in [the latest community release](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/latest).
The v1.97.x line will be supported for at least 12 months since [v1.97.0](https://docs.victoriametrics.com/CHANGELOG.html#v1970) release**
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/metricsql/): consistently return the first non-`NaN` value from [`range_first`](https://docs.victoriametrics.com/metricsql/#range_first) function across all the returned data points. Previously `NaN` data points weren't replaced with the first non-`NaN` value.
## [v1.97.8](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.8)
Released at 2024-09-23
* SECURITY: upgrade Go builder from Go1.22.5 to Go1.23.1. See the list of issues addressed in [Go1.23.1](https://github.com/golang/go/issues?q=milestone%3AGo1.23.1+label%3ACherryPickApproved).
* SECURITY: upgrade base docker image (Alpine) from 3.20.2 to 3.20.3. See [alpine 3.20.3 release notes](https://alpinelinux.org/posts/Alpine-3.17.10-3.18.9-3.19.4-3.20.3-released.html).
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent/): fix service discovery of Azure Virtual Machines when the response contains `nextLink` in `Host:Port` format. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6912).
* BUGFIX: [vmagent](https://docs.victoriametrics.com/vmagent/): properly consume messages [from kafka](https://docs.victoriametrics.com/vmagent/#kafka-integration). Previously vmagent could skip some messages during start-up.
* BUGFIX: [vmgateway](https://docs.victoriametrics.com/vmgateway/): add missing `datadog`, `newrelic`, `opentelemetry` and `pushgateway` routes to the `JWT` authorization routes. Allows prefixed (`prometheus/graphite`) routes for query requests.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): properly cache empty list of matching time series for the given [labels filter](https://docs.victoriametrics.com/keyconcepts/#filtering). This type of caching was broken since [v1.97.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.0), which could result in the increased CPU usage when performing queries, which match zero time series. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7009).
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): fix start-up crash on Windows OS. See this [issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6973) for details.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): fix metric `vm_object_references{type="indexdb"}`. Previously, it was overcounted.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): properly ingest stale NaN samples. Previously they could be dropped if the series didn't exist at the storage node. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5069) for details.
* BUGFIX: [Single-node VictoriaMetrics](https://docs.victoriametrics.com/) and `vmstorage` in [VictoriaMetrics cluster](https://docs.victoriametrics.com/cluster-victoriametrics/): properly track `vm_missing_tsids_for_metric_id_total` metric. See this [issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6931) for details.
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert): do not send notifications without labels to Alertmanager. Such notifications are rejected by Alertmanager anyway. Before, vmalert could send alert notifications even if no label-value pairs were left after applying `alert_relabel_configs` from the [notifier config](https://docs.victoriametrics.com/vmalert/#notifier-configuration-file).
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert/): properly update the value of the `$activeAt` variable in rule annotations during replay mode. Before, `$activeAt` could have provided incorrect values during replay.
* BUGFIX: [MetricsQL](https://docs.victoriametrics.com/metricsql/): properly handle `c1 AND c2` and `c1 OR c2` queries for constants `c1` and `c2`. Previously such queries could return unexpected results. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6637).
* BUGFIX: all VictoriaMetrics components: increase default value of `-loggerMaxArgLen` cmd-line flag from 1000 to 5000. This should improve visibility on errors produced by very long queries.
## [v1.97.7](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.97.7)
Released at 2024-08-28

View file

@ -1,5 +1,13 @@
# Prerequisites to develop and test Helm Charts
---
weight: 0
title: Requirements
menu:
docs:
weight: 1
parent: helm
aliases:
- /helm/requirements/index.html
---
## Kubernetes Cluster
You will need to create a Kubernetes cluster locally using [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube), [microk8s](https://microk8s.io), [kind](https://kind.sigs.k8s.io), [k3s](https://k3s.io) or other tools.

View file

@ -2,6 +2,24 @@
- TODO
## 0.6.6
**Release date:** 2024-10-11
![AppVersion: v0.29.0](https://img.shields.io/static/v1?label=AppVersion&message=v0.29.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Human-readable error about Helm version requirement
## 0.6.5
**Release date:** 2024-10-04
![AppVersion: v0.29.0](https://img.shields.io/static/v1?label=AppVersion&message=v0.29.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- upgraded common chart dependency
## 0.6.4
**Release date:** 2024-09-23

View file

@ -1,4 +1,4 @@
![Version: 0.6.4](https://img.shields.io/badge/Version-0.6.4-informational?style=flat-square)
![Version: 0.6.6](https://img.shields.io/badge/Version-0.6.6-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-logs-single)
[![Slack](https://img.shields.io/badge/join%20slack-%23victoriametrics-brightgreen.svg)](https://slack.victoriametrics.com/)
@ -6,7 +6,7 @@ Victoria Logs Single version - high-performance, cost-effective and scalable log
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
* PV support on underlying infrastructure.
@ -117,7 +117,7 @@ helm uninstall vls -n NAMESPACE
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.

View file

@ -2,6 +2,33 @@
- TODO
## 0.14.2
**Release date:** 2024-10-11
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Human-readable error about Helm version requirement
## 0.14.1
**Release date:** 2024-10-04
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- upgraded common chart dependency
## 0.14.0
**Release date:** 2024-10-02
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- bump version of VM components to [v1.104.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0)
## 0.13.0
**Release date:** 2024-09-27

View file

@ -1,4 +1,4 @@
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.13.0](https://img.shields.io/badge/Version-0.13.0-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.14.2](https://img.shields.io/badge/Version-0.14.2-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-agent)
[![Slack](https://img.shields.io/badge/join%20slack-%23victoriametrics-brightgreen.svg)](https://slack.victoriametrics.com/)
@ -6,7 +6,7 @@ Victoria Metrics Agent - collects metrics from various sources and stores them t
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
## How to install
@ -122,7 +122,7 @@ helm uninstall vma -n NAMESPACE
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.

View file

@ -2,6 +2,33 @@
- TODO
## 0.12.2
**Release date:** 2024-10-11
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Human-readable error about Helm version requirement
## 0.12.1
**Release date:** 2024-10-04
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- upgraded common chart dependency
## 0.12.0
**Release date:** 2024-10-02
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- bump version of VM components to [v1.104.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0)
## 0.11.1
**Release date:** 2024-09-10

View file

@ -1,4 +1,4 @@
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.11.1](https://img.shields.io/badge/Version-0.11.1-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.12.2](https://img.shields.io/badge/Version-0.12.2-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-alert)
[![Slack](https://img.shields.io/badge/join%20slack-%23victoriametrics-brightgreen.svg)](https://slack.victoriametrics.com/)
@ -6,7 +6,7 @@ Victoria Metrics Alert - executes a list of given MetricsQL expressions (rules)
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
## How to install
@ -107,7 +107,7 @@ helm uninstall vma -n NAMESPACE
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.

View file

@ -2,6 +2,36 @@
- TODO
## 1.5.2
**Release date:** 2024-10-11
![AppVersion: v1.16.1](https://img.shields.io/static/v1?label=AppVersion&message=v1.16.1&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Human-readable error about Helm version requirement
## 1.5.1
**Release date:** 2024-10-04
![AppVersion: v1.16.1](https://img.shields.io/static/v1?label=AppVersion&message=v1.16.1&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- upgraded common chart dependency
## 1.5.0
**Release date:** 2024-10-03
![AppVersion: v1.16.1](https://img.shields.io/static/v1?label=AppVersion&message=v1.16.1&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Upgraded vmanomaly to [1.16.1](https://docs.victoriametrics.com/anomaly-detection/changelog/#v1161)
- Added the ability to enable persistence for models and data via `.Values.persistentVolume.dumpModels` and `.Values.persistentVolume.dumpData` variables respectively.
- Fix default `podSecurityContext` configuration to ensure fs group matches container user.
- Fix passing empty `tenant_id` in case tenant is not defined in values.
## 1.4.6
**Release date:** 2024-09-16

View file

@ -1,4 +1,4 @@
![Version: 1.4.6](https://img.shields.io/badge/Version-1.4.6-informational?style=flat-square)
![Version: 1.5.2](https://img.shields.io/badge/Version-1.5.2-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-anomaly)
[![Slack](https://img.shields.io/badge/join%20slack-%23victoriametrics-brightgreen.svg)](https://slack.victoriametrics.com/)
[![GitHub license](https://img.shields.io/github/license/VictoriaMetrics/VictoriaMetrics.svg)](https://github.com/VictoriaMetrics/helm-charts/blob/master/LICENSE)
@ -9,7 +9,7 @@ Victoria Metrics Anomaly Detection - a service that continuously scans Victoria
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
* PV support on underlying infrastructure
@ -111,7 +111,7 @@ helm uninstall vma -n NAMESPACE
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.
@ -525,7 +525,7 @@ tenant_id: ""
<td>image.tag</td>
<td>string</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">v1.15.9
<code class="language-yaml">""
</code>
</pre>
</td>
@ -631,6 +631,8 @@ name: ""
<code class="language-yaml">accessModes:
- ReadWriteOnce
annotations: {}
dumpData: true
dumpModels: true
enabled: false
existingClaim: ""
matchLabels: {}
@ -662,6 +664,28 @@ storageClassName: ""
</pre>
</td>
<td><p>Persistent volume annotations</p>
</td>
</tr>
<tr>
<td>persistentVolume.dumpData</td>
<td>bool</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">true
</code>
</pre>
</td>
<td><p>Enables dumping data fetched from VictoriaMetrics to the persistent disk. This is helpful to reduce memory usage.</p>
</td>
</tr>
<tr>
<td>persistentVolume.dumpModels</td>
<td>bool</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">true
</code>
</pre>
</td>
<td><p>Enables dumping models to the persistent disk. This is helpful to reduce memory usage.</p>
</td>
</tr>
<tr>
@ -781,6 +805,7 @@ minAvailable: 1
<td>object</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="plaintext">
<code class="language-yaml">enabled: true
fsGroup: 1000
</code>
</pre>
</td>

View file

@ -1,6 +1,34 @@
## Next release
- TODO
## 0.7.2
**Release date:** 2024-10-11
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Human-readable error about Helm version requirement
## 0.7.1
**Release date:** 2024-10-04
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- upgraded common chart dependency
## 0.7.0
**Release date:** 2024-10-02
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Added ability to override deployment namespace using `namespaceOverride` and `global.namespaceOverride` variables
- bump version of VM components to [v1.104.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0)
## 0.6.0

View file

@ -1,4 +1,4 @@
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.6.0](https://img.shields.io/badge/Version-0.6.0-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.7.2](https://img.shields.io/badge/Version-0.7.2-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-auth)
[![Slack](https://img.shields.io/badge/join%20slack-%23victoriametrics-brightgreen.svg)](https://slack.victoriametrics.com/)
@ -6,7 +6,7 @@ Victoria Metrics Auth - is a simple auth proxy and router for VictoriaMetrics.
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
## How to install
@ -100,7 +100,7 @@ helm uninstall vma -n NAMESPACE
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.

View file

@ -2,6 +2,37 @@
- TODO
## 0.14.2
**Release date:** 2024-10-11
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Human-readable error about Helm version requirement
## 0.14.1
**Release date:** 2024-10-04
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Support extra storageNodes. Fail if no storageNodes are set
- Support enabling automatic discovery of vmstorage addresses using DNS SRV records in the enterprise version
- Added HPA with scaledown disabled by default
- Allow excluding vmstorage nodes from vminsert. See [this issue](https://github.com/VictoriaMetrics/helm-charts/issues/1549)
- Upgraded common chart dependency
## 0.14.0
**Release date:** 2024-10-02
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- bump version of VM components to [v1.104.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0)
## 0.13.7
**Release date:** 2024-09-12

File diff suppressed because it is too large

View file

@ -4,6 +4,22 @@
- TODO
## 0.0.15
**Release date:** 2024-10-11
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Display compatibility error message
## 0.0.14
**Release date:** 2024-10-04
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Fixed openshift compatibility templates
## 0.0.13
**Release date:** 2024-09-16

View file

@ -1,6 +1,15 @@
## Next release
- TODO
- Human-readable error about Helm version requirement
## 0.4.0
**Release date:** 2024-10-02
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- bump version of VM components to [v1.104.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0)
## 0.3.1

View file

@ -1,4 +1,4 @@
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.3.1](https://img.shields.io/badge/Version-0.3.1-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.4.0](https://img.shields.io/badge/Version-0.4.0-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-distributed)
[![Slack](https://img.shields.io/badge/join%20slack-%23victoriametrics-brightgreen.svg)](https://slack.victoriametrics.com/)
@ -6,7 +6,7 @@ A Helm chart for Running VMCluster on Multiple Availability Zones
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
* PV support on underlying infrastructure.
@ -213,7 +213,7 @@ helm uninstall vmd -n NAMESPACE
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.

View file

@ -2,6 +2,33 @@
- TODO
## 0.5.2
**Release date:** 2024-10-11
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Human-readable error about Helm version requirement
## 0.5.1
**Release date:** 2024-10-04
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- upgraded common chart dependency
## 0.5.0
**Release date:** 2024-10-02
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- bump version of VM components to [v1.104.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0)
## 0.4.0
**Release date:** 2024-09-12

View file

@ -1,4 +1,4 @@
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.4.0](https://img.shields.io/badge/Version-0.4.0-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.5.2](https://img.shields.io/badge/Version-0.5.2-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-gateway)
[![Slack](https://img.shields.io/badge/join%20slack-%23victoriametrics-brightgreen.svg)](https://slack.victoriametrics.com/)
@ -15,7 +15,7 @@ Victoria Metrics Gateway - Auth & Rate-Limitting proxy for Victoria Metrics
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
* PV support on underlying infrastructure
## Chart Details
@ -167,7 +167,7 @@ helm uninstall vmg -n NAMESPACE
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.

View file

@ -2,6 +2,53 @@
- TODO
## 0.27.3
**Release date:** 2024-10-11
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Grafana chart: 8.4.9 -> 8.5.2
- Prometheus operator chart: 11.0 -> 15.0
- Human-readable error about Helm version requirement
- Updated rules template context structure
## 0.27.2
**Release date:** 2024-10-10
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Fixed dashboards variable queries
## 0.27.1
**Release date:** 2024-10-10
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Generate VM components tag version from chart app name by default
- Added k8s apiserver, kube-proxy, controller-manager and kubelet dashboards
- Moved `dashboards.<dashboard>` to `defaultDashboards.dashboards.<dashboard>.enabled`
- Moved `defaultDashboardsEnabled` to `defaultDashboards.enabled`
- Moved `grafanaOperatorDashboardsFormat` to `defaultDashboards.grafanaOperator` (see the values sketch below)
- Added condition for `grafana-overview`, `alertmanager-overview` and `vmbackupmanager` dashboards. See [this issue](https://github.com/VictoriaMetrics/helm-charts/issues/1564)
- Removed `experimentalDashboardsEnabled` param
- Upgraded default Alertmanager tag 0.25.0 -> 0.27.0
- Upgraded operator chart 0.35.2 -> 0.35.3
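A sketch of the values migration implied by the `Moved ...` items above; key names are taken from the parameter table changes further below, and the excerpt itself is hypothetical:

```yaml
# Before 0.27.1 (hypothetical values.yaml excerpt)
defaultDashboardsEnabled: true
dashboards:
  node-exporter-full: true
grafanaOperatorDashboardsFormat:
  enabled: false

# 0.27.1 and later: the same settings under defaultDashboards
defaultDashboards:
  enabled: true
  grafanaOperator:
    enabled: false
  dashboards:
    node-exporter-full:
      enabled: true
```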
## 0.27.0
**Release date:** 2024-10-02
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- bump version of VM components to [v1.104.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0)
## 0.26.0
**Release date:** 2024-09-29

View file

@ -1,4 +1,4 @@
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.26.0](https://img.shields.io/badge/Version-0.26.0-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.27.3](https://img.shields.io/badge/Version-0.27.3-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-k8s-stack)
Kubernetes monitoring on VictoriaMetrics stack. Includes VictoriaMetrics Operator, Grafana dashboards, ServiceScrapes and VMRules
@ -20,7 +20,7 @@ Also it installs Custom Resources like [VMSingle](https://docs.victoriametrics.c
By default, the operator [converts all existing prometheus-operator API objects](https://docs.victoriametrics.com/operator/quick-start#migration-from-prometheus-operator-objects) into corresponding VictoriaMetrics Operator objects.
To enable metrics collection for kubernetes this chart installs multiple scrape configurations for kuberenetes components like kubelet and kube-proxy, etc. Metrics collection is done by [VMAgent](https://docs.victoriametrics.com/operator/quick-start#vmagent). So if want to ship metrics to external VictoriaMetrics database you can disable VMSingle installation by setting `vmsingle.enabled` to `false` and setting `vmagent.vmagentSpec.remoteWrite.url` to your external VictoriaMetrics database.
To enable metrics collection for Kubernetes, this chart installs multiple scrape configurations for Kubernetes components like kubelet and kube-proxy, etc. Metrics collection is done by [VMAgent](https://docs.victoriametrics.com/operator/quick-start#vmagent). So if you want to ship metrics to an external VictoriaMetrics database, you can disable VMSingle installation by setting `vmsingle.enabled` to `false` and setting `vmagent.vmagentSpec.remoteWrite.url` to your external VictoriaMetrics database.
This chart also installs a bunch of dashboards and recording rules from the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) project.
@ -158,7 +158,7 @@ grafana:
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
* Add dependency chart repositories
@ -259,7 +259,7 @@ To run VictoriaMetrics stack locally it's possible to use [Minikube](https://git
Run Minikube cluster
```
minikube start --container-runtime=containerd --extra-config=scheduler.bind-address=0.0.0.0 --extra-config=controller-manager.bind-address=0.0.0.0
minikube start --container-runtime=containerd --extra-config=scheduler.bind-address=0.0.0.0 --extra-config=controller-manager.bind-address=0.0.0.0 --extra-config=etcd.listen-metrics-urls=http://0.0.0.0:2381
```
Install helm chart
@ -370,7 +370,7 @@ kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.1
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.
@ -493,7 +493,7 @@ tls: []
<code class="language-yaml">configSecret: ""
externalURL: ""
image:
tag: v0.25.0
tag: v0.27.0
port: "9093"
routePrefix: /
selectAllByDefault: true
@ -621,23 +621,26 @@ selectAllByDefault: true
</td>
</tr>
<tr>
<td>dashboards</td>
<td>defaultDashboards.dashboards</td>
<td>object</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="plaintext">
<code class="language-yaml">node-exporter-full: true
operator: false
vmalert: false
<code class="language-yaml">node-exporter-full:
enabled: true
victoriametrics-operator:
enabled: false
victoriametrics-vmalert:
enabled: false
</code>
</pre>
</td>
<td><p>Enable dashboards despite it&rsquo;s dependency is not installed</p>
<td><p>Create dashboards as ConfigMap even if the dependency they require is not installed</p>
</td>
</tr>
<tr>
<td>dashboards.node-exporter-full</td>
<td>bool</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">true
<td>defaultDashboards.dashboards.node-exporter-full</td>
<td>object</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="plaintext">
<code class="language-yaml">enabled: true
</code>
</pre>
</td>
@ -645,16 +648,47 @@ vmalert: false
</td>
</tr>
<tr>
<td>defaultDashboardsEnabled</td>
<td>defaultDashboards.enabled</td>
<td>bool</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">true
</code>
</pre>
</td>
<td><p>Create default dashboards</p>
<td><p>Enable custom dashboards installation</p>
</td>
</tr>
<tr>
<td>defaultDashboards.grafanaOperator.allowCrossNamespaceImport</td>
<td>bool</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">false
</code>
</pre>
</td>
<td></td>
</tr>
<tr>
<td>defaultDashboards.grafanaOperator.enabled</td>
<td>bool</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">false
</code>
</pre>
</td>
<td><p>Create dashboards as CRDs (requires grafana-operator to be installed)</p>
</td>
</tr>
<tr>
<td>defaultDashboards.grafanaOperator.instanceSelector.matchLabels.dashboards</td>
<td>string</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">grafana
</code>
</pre>
</td>
<td></td>
</tr>
<tr>
<td>defaultRules</td>
<td>object</td>
@ -1091,17 +1125,6 @@ vmsingle:
</pre>
</td>
<td><p>Runbook url prefix for default rules</p>
</td>
</tr>
<tr>
<td>experimentalDashboardsEnabled</td>
<td>bool</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">true
</code>
</pre>
</td>
<td><p>Create experimental dashboards</p>
</td>
</tr>
<tr>
@ -1295,21 +1318,6 @@ selector:
</pre>
</td>
<td><p><a href="https://docs.victoriametrics.com/operator/api#vmservicescrapespec" target="_blank">Scrape configuration</a> for Grafana</p>
</td>
</tr>
<tr>
<td>grafanaOperatorDashboardsFormat</td>
<td>object</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="plaintext">
<code class="language-yaml">allowCrossNamespaceImport: false
enabled: false
instanceSelector:
matchLabels:
dashboards: grafana
</code>
</pre>
</td>
<td><p>Create dashboards as CRDs (reuqires grafana-operator to be installed)</p>
</td>
</tr>
<tr>
@ -2164,8 +2172,6 @@ tls: []
extraArgs:
promscrape.dropOriginalLabels: "true"
promscrape.streamParse: "true"
image:
tag: v1.103.0
port: "8429"
scrapeInterval: 20s
selectAllByDefault: true
@ -2257,8 +2263,6 @@ tls: []
externalLabels: {}
extraArgs:
http.pathPrefix: /
image:
tag: v1.103.0
port: "8080"
selectAllByDefault: true
</code>
@ -2639,16 +2643,12 @@ port: "8427"
retentionPeriod: "1"
vminsert:
extraArgs: {}
image:
tag: v1.103.0-cluster
port: "8480"
replicaCount: 2
resources: {}
vmselect:
cacheMountPath: /select-cache
extraArgs: {}
image:
tag: v1.103.0-cluster
port: "8481"
replicaCount: 2
resources: {}
@ -2659,8 +2659,6 @@ vmselect:
requests:
storage: 2Gi
vmstorage:
image:
tag: v1.103.0-cluster
replicaCount: 2
resources: {}
storage:
@ -2813,8 +2811,6 @@ vmstorage:
<td>object</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="plaintext">
<code class="language-yaml">extraArgs: {}
image:
tag: v1.103.0
port: "8429"
replicaCount: 1
retentionPeriod: "1"

View file

@ -2,6 +2,27 @@
- TODO
## 0.35.4
**Release date:** 2024-10-11
![AppVersion: v0.48.3](https://img.shields.io/static/v1?label=AppVersion&message=v0.48.3&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Human-readable error about Helm version requirement
## 0.35.3
**Release date:** 2024-10-10
![AppVersion: v0.48.3](https://img.shields.io/static/v1?label=AppVersion&message=v0.48.3&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- upgraded common chart dependency
- made webhook pod port configurable. See [this issue](https://github.com/VictoriaMetrics/helm-charts/issues/1565)
- added configurable cleanup hook resources. See [this issue](https://github.com/VictoriaMetrics/helm-charts/issues/1571)
- added ability to configure `terminationGracePeriodSeconds` and `lifecycle`. See [this issue](https://github.com/VictoriaMetrics/helm-charts/issues/1563) for details
## 0.35.2
**Release date:** 2024-09-29

View file

@ -1,11 +1,11 @@
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.35.2](https://img.shields.io/badge/Version-0.35.2-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.35.4](https://img.shields.io/badge/Version-0.35.4-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-operator)
Victoria Metrics Operator
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
* PV support on underlying infrastructure.
## ArgoCD issues
@ -175,7 +175,7 @@ helm uninstall vmo -n NAMESPACE
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.
@ -332,6 +332,22 @@ tag: ""
</pre>
</td>
<td><p>Image configuration for CRD cleanup Job</p>
</td>
</tr>
<tr>
<td>crd.cleanup.resources</td>
<td>object</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="plaintext">
<code class="language-yaml">limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 100m
memory: 56Mi
</code>
</pre>
</td>
<td><p>Cleanup hook resources</p>
</td>
</tr>
<tr>
@ -567,6 +583,17 @@ variant: ""
</pre>
</td>
<td><p>Secret to pull images</p>
</td>
</tr>
<tr>
<td>lifecycle</td>
<td>object</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="plaintext">
<code class="language-yaml">{}
</code>
</pre>
</td>
<td><p>Operator lifecycle. See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" target="_blank">this article</a> for details.</p>
</td>
</tr>
<tr>
@ -673,7 +700,7 @@ labels: {}
<td>podSecurityContext</td>
<td>object</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="plaintext">
<code class="language-yaml">{}
<code class="language-yaml">enabled: true
</code>
</pre>
</td>
@ -790,7 +817,7 @@ view:
<td>securityContext</td>
<td>object</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="plaintext">
<code class="language-yaml">{}
<code class="language-yaml">enabled: true
</code>
</pre>
</td>
@ -979,6 +1006,17 @@ tlsConfig: {}
</pre>
</td>
<td><p>Configures monitoring with serviceScrape. VMServiceScrape must be pre-installed</p>
</td>
</tr>
<tr>
<td>terminationGracePeriodSeconds</td>
<td>int</td>
<td><pre class="helm-vars-default-value" language-yaml" lang="">
<code class="language-yaml">30
</code>
</pre>
</td>
<td><p>Graceful pod termination timeout. See <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution" target="_blank">this article</a> for details.</p>
</td>
</tr>
<tr>

View file

@ -2,6 +2,33 @@
- TODO
## 0.12.2
**Release date:** 2024-10-11
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- Human-readable error about Helm version requirement
## 0.12.1
**Release date:** 2024-10-04
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- upgraded common chart dependency
## 0.12.0
**Release date:** 2024-10-02
![AppVersion: v1.104.0](https://img.shields.io/static/v1?label=AppVersion&message=v1.104.0&color=success&logo=)
![Helm: v3](https://img.shields.io/static/v1?label=Helm&message=v3&color=informational&logo=helm)
- bump version of VM components to [v1.104.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.104.0)
## 0.11.2
**Release date:** 2024-09-12

View file

@ -1,11 +1,11 @@
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.11.2](https://img.shields.io/badge/Version-0.11.2-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.12.2](https://img.shields.io/badge/Version-0.12.2-informational?style=flat-square)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/victoriametrics)](https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-single)
Victoria Metrics Single version - high-performance, cost-effective and scalable TSDB, long-term remote storage for Prometheus
## Prerequisites
* Install the follow packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](../../REQUIREMENTS.md).
* Install the following packages: ``git``, ``kubectl``, ``helm``, ``helm-docs``. See this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
* PV support on underlying infrastructure.
## Chart Details
@ -106,7 +106,7 @@ helm uninstall vms -n NAMESPACE
## Documentation of Helm Chart
Install ``helm-docs`` following the instructions on this [tutorial](../../REQUIREMENTS.md).
Install ``helm-docs`` following the instructions on this [tutorial](https://docs.victoriametrics.com/helm/requirements/).
Generate docs with ``helm-docs`` command.

View file

@ -11,6 +11,8 @@ aliases:
- /operator/changelog/index.html
---
- [vmuser](https://docs.victoriametrics.com/operator/resources/vmuser/): fixes the protocol of the generated CRD target access URL for vminsert and vmstorage when TLS is enabled.
## [v0.48.3](https://github.com/VictoriaMetrics/operator/releases/tag/v0.48.3) - 29 Sep 2024
- [vmcluster](https://docs.victoriametrics.com/operator/resources/vmcluster): properly apply global container registry from configuration. It was ignored for `VMCluster` since `v0.48.0` release. See [this issue](https://github.com/VictoriaMetrics/operator/issues/1118) for details.

View file

@ -10,7 +10,7 @@ aliases:
- /operator/vars/index.html
---
<!-- this doc autogenerated - don't edit it manually -->
updated at Sun Sep 29 19:33:17 UTC 2024
updated at Mon Oct 7 04:29:16 UTC 2024
| variable name | variable default value | variable required | variable description |
@ -31,7 +31,7 @@ aliases:
| VM_VLOGSDEFAULT_CONFIGRELOADERCPU | - | false | ignored |
| VM_VLOGSDEFAULT_CONFIGRELOADERMEMORY | - | false | ignored |
| VM_VMALERTDEFAULT_IMAGE | victoriametrics/vmalert | false | - |
| VM_VMALERTDEFAULT_VERSION | v1.103.0 | false | - |
| VM_VMALERTDEFAULT_VERSION | v1.104.0 | false | - |
| VM_VMALERTDEFAULT_CONFIGRELOADIMAGE | jimmidyson/configmap-reload:v0.3.0 | false | - |
| VM_VMALERTDEFAULT_PORT | 8080 | false | - |
| VM_VMALERTDEFAULT_USEDEFAULTRESOURCES | true | false | - |
@ -42,7 +42,7 @@ aliases:
| VM_VMALERTDEFAULT_CONFIGRELOADERCPU | 100m | false | - |
| VM_VMALERTDEFAULT_CONFIGRELOADERMEMORY | 25Mi | false | - |
| VM_VMAGENTDEFAULT_IMAGE | victoriametrics/vmagent | false | - |
| VM_VMAGENTDEFAULT_VERSION | v1.103.0 | false | - |
| VM_VMAGENTDEFAULT_VERSION | v1.104.0 | false | - |
| VM_VMAGENTDEFAULT_CONFIGRELOADIMAGE | quay.io/prometheus-operator/prometheus-config-reloader:v0.68.0 | false | - |
| VM_VMAGENTDEFAULT_PORT | 8429 | false | - |
| VM_VMAGENTDEFAULT_USEDEFAULTRESOURCES | true | false | - |
@ -53,7 +53,7 @@ aliases:
| VM_VMAGENTDEFAULT_CONFIGRELOADERCPU | 100m | false | - |
| VM_VMAGENTDEFAULT_CONFIGRELOADERMEMORY | 25Mi | false | - |
| VM_VMSINGLEDEFAULT_IMAGE | victoriametrics/victoria-metrics | false | - |
| VM_VMSINGLEDEFAULT_VERSION | v1.103.0 | false | - |
| VM_VMSINGLEDEFAULT_VERSION | v1.104.0 | false | - |
| VM_VMSINGLEDEFAULT_CONFIGRELOADIMAGE | - | false | ignored |
| VM_VMSINGLEDEFAULT_PORT | 8429 | false | - |
| VM_VMSINGLEDEFAULT_USEDEFAULTRESOURCES | true | false | - |
@ -65,14 +65,14 @@ aliases:
| VM_VMSINGLEDEFAULT_CONFIGRELOADERMEMORY | - | false | ignored |
| VM_VMCLUSTERDEFAULT_USEDEFAULTRESOURCES | true | false | - |
| VM_VMCLUSTERDEFAULT_VMSELECTDEFAULT_IMAGE | victoriametrics/vmselect | false | - |
| VM_VMCLUSTERDEFAULT_VMSELECTDEFAULT_VERSION | v1.103.0-cluster | false | - |
| VM_VMCLUSTERDEFAULT_VMSELECTDEFAULT_VERSION | v1.104.0-cluster | false | - |
| VM_VMCLUSTERDEFAULT_VMSELECTDEFAULT_PORT | 8481 | false | - |
| VM_VMCLUSTERDEFAULT_VMSELECTDEFAULT_RESOURCE_LIMIT_MEM | 1000Mi | false | - |
| VM_VMCLUSTERDEFAULT_VMSELECTDEFAULT_RESOURCE_LIMIT_CPU | 500m | false | - |
| VM_VMCLUSTERDEFAULT_VMSELECTDEFAULT_RESOURCE_REQUEST_MEM | 500Mi | false | - |
| VM_VMCLUSTERDEFAULT_VMSELECTDEFAULT_RESOURCE_REQUEST_CPU | 100m | false | - |
| VM_VMCLUSTERDEFAULT_VMSTORAGEDEFAULT_IMAGE | victoriametrics/vmstorage | false | - |
| VM_VMCLUSTERDEFAULT_VMSTORAGEDEFAULT_VERSION | v1.103.0-cluster | false | - |
| VM_VMCLUSTERDEFAULT_VMSTORAGEDEFAULT_VERSION | v1.104.0-cluster | false | - |
| VM_VMCLUSTERDEFAULT_VMSTORAGEDEFAULT_VMINSERTPORT | 8400 | false | - |
| VM_VMCLUSTERDEFAULT_VMSTORAGEDEFAULT_VMSELECTPORT | 8401 | false | - |
| VM_VMCLUSTERDEFAULT_VMSTORAGEDEFAULT_PORT | 8482 | false | - |
@ -81,7 +81,7 @@ aliases:
| VM_VMCLUSTERDEFAULT_VMSTORAGEDEFAULT_RESOURCE_REQUEST_MEM | 500Mi | false | - |
| VM_VMCLUSTERDEFAULT_VMSTORAGEDEFAULT_RESOURCE_REQUEST_CPU | 250m | false | - |
| VM_VMCLUSTERDEFAULT_VMINSERTDEFAULT_IMAGE | victoriametrics/vminsert | false | - |
| VM_VMCLUSTERDEFAULT_VMINSERTDEFAULT_VERSION | v1.103.0-cluster | false | - |
| VM_VMCLUSTERDEFAULT_VMINSERTDEFAULT_VERSION | v1.104.0-cluster | false | - |
| VM_VMCLUSTERDEFAULT_VMINSERTDEFAULT_PORT | 8480 | false | - |
| VM_VMCLUSTERDEFAULT_VMINSERTDEFAULT_RESOURCE_LIMIT_MEM | 500Mi | false | - |
| VM_VMCLUSTERDEFAULT_VMINSERTDEFAULT_RESOURCE_LIMIT_CPU | 500m | false | - |
@ -100,7 +100,7 @@ aliases:
| VM_VMALERTMANAGER_RESOURCE_REQUEST_CPU | 30m | false | - |
| VM_DISABLESELFSERVICESCRAPECREATION | false | false | - |
| VM_VMBACKUP_IMAGE | victoriametrics/vmbackupmanager | false | - |
| VM_VMBACKUP_VERSION | v1.103.0-enterprise | false | - |
| VM_VMBACKUP_VERSION | v1.104.0-enterprise | false | - |
| VM_VMBACKUP_PORT | 8300 | false | - |
| VM_VMBACKUP_USEDEFAULTRESOURCES | true | false | - |
| VM_VMBACKUP_RESOURCE_LIMIT_MEM | 500Mi | false | - |
@ -108,7 +108,7 @@ aliases:
| VM_VMBACKUP_RESOURCE_REQUEST_MEM | 200Mi | false | - |
| VM_VMBACKUP_RESOURCE_REQUEST_CPU | 150m | false | - |
| VM_VMAUTHDEFAULT_IMAGE | victoriametrics/vmauth | false | - |
| VM_VMAUTHDEFAULT_VERSION | v1.103.0 | false | - |
| VM_VMAUTHDEFAULT_VERSION | v1.104.0 | false | - |
| VM_VMAUTHDEFAULT_CONFIGRELOADIMAGE | quay.io/prometheus-operator/prometheus-config-reloader:v0.68.0 | false | - |
| VM_VMAUTHDEFAULT_PORT | 8427 | false | - |
| VM_VMAUTHDEFAULT_USEDEFAULTRESOURCES | true | false | - |
@ -136,4 +136,4 @@ aliases:
| VM_PODWAITREADYINTERVALCHECK | 5s | false | Defines poll interval for pods ready check at statefulset rollout update |
| VM_FORCERESYNCINTERVAL | 60s | false | configures force resync interval for VMAgent, VMAlert, VMAlertmanager and VMAuth. |
| VM_ENABLESTRICTSECURITY | false | false | EnableStrictSecurity will add default `securityContext` to pods and containers created by operator Default PodSecurityContext include: 1. RunAsNonRoot: true 2. RunAsUser/RunAsGroup/FSGroup: 65534 '65534' refers to 'nobody' in all the used default images like alpine, busybox. If you're using customize image, please make sure '65534' is a valid uid in there or specify SecurityContext. 3. FSGroupChangePolicy: &onRootMismatch If KubeVersion>=1.20, use `FSGroupChangePolicy="onRootMismatch"` to skip the recursive permission change when the root of the volume already has the correct permissions 4. SeccompProfile: type: RuntimeDefault Use `RuntimeDefault` seccomp profile by default, which is defined by the container runtime, instead of using the Unconfined (seccomp disabled) mode. Default container SecurityContext include: 1. AllowPrivilegeEscalation: false 2. ReadOnlyRootFilesystem: true 3. Capabilities: drop: - all turn off `EnableStrictSecurity` by default, see https://github.com/VictoriaMetrics/operator/issues/749 for details |
[envconfig-sum]: bcf2b739473583d36da69a3c4c3835cf
[envconfig-sum]: 41649232efe6d908b9a973655bf62dd3

View file

@ -18,147 +18,7 @@ after applying all the configured [relabeling stages](https://docs.victoriametri
**By default, stream aggregation ignores timestamps associated with the input [samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples).
It expects that the ingested samples have timestamps close to the current time. See [how to ignore old samples](#ignoring-old-samples).**
## Configuration
Stream aggregation can be configured via the following command-line flags:
- `-streamAggr.config` at [single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/)
and at [vmagent](https://docs.victoriametrics.com/vmagent/).
- `-remoteWrite.streamAggr.config` at [vmagent](https://docs.victoriametrics.com/vmagent/) only. This flag can be specified individually
per each `-remoteWrite.url`, so the aggregation happens independently per each remote storage destination.
This allows writing different aggregates to different remote storage systems.
These flags must point to a file containing [stream aggregation config](#stream-aggregation-config).
The file may contain `%{ENV_VAR}` placeholders which are substituted by the corresponding `ENV_VAR` environment variable values.
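For illustration, a minimal aggregation config file might look like the following sketch; the `match`, `interval`, `without` and `outputs` fields are described in [stream aggregation config](#stream-aggregation-config), while the metric and the `instance` label here are illustrative:

```yaml
# Hypothetical -streamAggr.config contents: sum http_requests_total
# over 1-minute windows, aggregating away the instance label.
- match: 'http_requests_total'  # series selector for input samples
  interval: 1m                  # aggregation window
  without: [instance]           # drop this label before aggregating
  outputs: [total]              # emits http_requests_total:1m_without_instance_total
```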
By default, the following data is written to the storage when stream aggregation is enabled:
- the aggregated samples;
- the raw input samples, which didn't match any `match` option in the provided [config](#stream-aggregation-config).
This behaviour can be changed via the following command-line flags:
- `-streamAggr.keepInput` at [single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/)
and [vmagent](https://docs.victoriametrics.com/vmagent/). At [vmagent](https://docs.victoriametrics.com/vmagent/)
`-remoteWrite.streamAggr.keepInput` flag can be specified individually per each `-remoteWrite.url`.
If one of these flags is set, then all the input samples are written to the storage alongside the aggregated samples.
- `-streamAggr.dropInput` at [single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/)
and [vmagent](https://docs.victoriametrics.com/vmagent/). At [vmagent](https://docs.victoriametrics.com/vmagent/)
`-remoteWrite.streamAggr.dropInput` flag can be specified individually per each `-remoteWrite.url`.
If one of these flags is set, then all the input samples are dropped, while only the aggregated samples are written to the storage.
## Routing
[Single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/) supports relabeling,
deduplication and stream aggregation for all the received data, scraped or pushed.
The processed data is then stored in local storage and **can't be forwarded further**.
[vmagent](https://docs.victoriametrics.com/vmagent/) supports relabeling, deduplication and stream aggregation for all
the received data, scraped or pushed. The collected data is then forwarded to the specified `-remoteWrite.url` destinations.
The data processing order is the following:
1. all the received data is relabeled according to the specified [`-remoteWrite.relabelConfig`](https://docs.victoriametrics.com/vmagent/#relabeling) (if it is set)
1. all the received data is deduplicated according to specified [`-streamAggr.dedupInterval`](https://docs.victoriametrics.com/stream-aggregation/#deduplication)
(if it is set to duration bigger than 0)
1. all the received data is aggregated according to specified [`-streamAggr.config`](https://docs.victoriametrics.com/stream-aggregation/#configuration) (if it is set)
1. the resulting data is then replicated to each `-remoteWrite.url`
1. data sent to each `-remoteWrite.url` can be additionally relabeled according to the corresponding `-remoteWrite.urlRelabelConfig` (set individually per URL)
1. data sent to each `-remoteWrite.url` can be additionally deduplicated according to the corresponding `-remoteWrite.streamAggr.dedupInterval` (set individually per URL)
1. data sent to each `-remoteWrite.url` can be additionally aggregated according to the corresponding `-remoteWrite.streamAggr.config` (set individually per URL)
It isn't recommended to use `-streamAggr.config` and `-remoteWrite.streamAggr.config` simultaneously, unless you understand the complications.
Typical scenarios for data routing with `vmagent`:
1. **Aggregate incoming data and replicate to N destinations**. Specify [`-streamAggr.config`](https://docs.victoriametrics.com/stream-aggregation/#configuration) command-line flag
to aggregate the incoming data before replicating it to all the configured `-remoteWrite.url` destinations.
2. **Individually aggregate incoming data for each destination**. Specify [`-remoteWrite.streamAggr.config`](https://docs.victoriametrics.com/stream-aggregation/#configuration)
command-line flag for each `-remoteWrite.url` destination. [Relabeling](https://docs.victoriametrics.com/vmagent/#relabeling) via `-remoteWrite.urlRelabelConfig`
can be used for routing only the selected metrics to each `-remoteWrite.url` destination.
## Deduplication
[vmagent](https://docs.victoriametrics.com/vmagent/) supports online [de-duplication](https://docs.victoriametrics.com/#deduplication) of samples
before sending them to the configured `-remoteWrite.url`. The de-duplication can be enabled via the following options:
- By specifying the desired de-duplication interval via `-streamAggr.dedupInterval` command-line flag for all received data
or via `-remoteWrite.streamAggr.dedupInterval` command-line flag for the particular `-remoteWrite.url` destination.
For example, `./vmagent -remoteWrite.url=http://remote-storage/api/v1/write -remoteWrite.streamAggr.dedupInterval=30s` instructs `vmagent` to leave
only the last sample per each seen [time series](https://docs.victoriametrics.com/keyconcepts/#time-series) per every 30 seconds.
The de-duplication is performed after applying [relabeling](https://docs.victoriametrics.com/vmagent/#relabeling) and
before performing the aggregation.
- By specifying `dedup_interval` option individually per each [stream aggregation config](#stream-aggregation-config)
in `-remoteWrite.streamAggr.config` or `-streamAggr.config` configs.
[Single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/) supports two types of de-duplication:
- After storing the duplicate samples to local storage. See [`-dedup.minScrapeInterval`](https://docs.victoriametrics.com/#deduplication) command-line option.
- Before storing the duplicate samples to local storage. This type of de-duplication can be enabled via the following options:
- By specifying the desired de-duplication interval via `-streamAggr.dedupInterval` command-line flag.
For example, `./victoria-metrics -streamAggr.dedupInterval=30s` instructs VictoriaMetrics to leave only the last sample per each
seen [time series](https://docs.victoriametrics.com/keyconcepts/#time-series) per every 30 seconds.
The de-duplication is performed after applying `-relabelConfig` [relabeling](https://docs.victoriametrics.com/#relabeling).
- By specifying `dedup_interval` option individually per each [stream aggregation config](#stream-aggregation-config) at `-streamAggr.config`.
It is possible to drop the given labels before applying the de-duplication. See [these docs](#dropping-unneeded-labels).
The online de-duplication uses the same logic as [`-dedup.minScrapeInterval` command-line flag](https://docs.victoriametrics.com/#deduplication) at VictoriaMetrics.
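For example, a per-config de-duplication sketch using the `dedup_interval` option mentioned above (the metric name is illustrative):

```yaml
# Hypothetical config: keep only the last sample per series every 30s,
# then aggregate the de-duplicated samples over 5-minute windows.
- match: 'node_cpu_seconds_total'
  interval: 5m
  dedup_interval: 30s
  outputs: [total]
```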
## Ignoring old samples
By default, all the input samples are taken into account during stream aggregation. If samples with old timestamps
outside the current [aggregation interval](#stream-aggregation-config) must be ignored, then the following options can be used:
- To pass `-streamAggr.ignoreOldSamples` command-line flag to [single-node VictoriaMetrics](https://docs.victoriametrics.com/)
or to [vmagent](https://docs.victoriametrics.com/vmagent/). At [vmagent](https://docs.victoriametrics.com/vmagent/)
`-remoteWrite.streamAggr.ignoreOldSamples` flag can be specified individually per each `-remoteWrite.url`.
This enables ignoring old samples for all the [aggregation configs](#stream-aggregation-config).
- To set `ignore_old_samples: true` option at the particular [aggregation config](#stream-aggregation-config).
This enables ignoring old samples for that particular aggregation config.
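For example, a sketch of the per-config option (the metric name is illustrative):

```yaml
# Hypothetical config: drop input samples with timestamps outside
# the current 1-minute aggregation interval.
- match: 'http_requests_total'
  interval: 1m
  ignore_old_samples: true
  outputs: [total]
```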
## Ignore aggregation intervals on start
Streaming aggregation results may be incorrect for some time after the restart of [vmagent](https://docs.victoriametrics.com/vmagent/)
or [single-node VictoriaMetrics](https://docs.victoriametrics.com/) until all the buffered [samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
are sent from remote sources to the `vmagent` or single-node VictoriaMetrics via [supported data ingestion protocols](https://docs.victoriametrics.com/vmagent/#how-to-push-data-to-vmagent).
In this case it may be a good idea to drop the aggregated data during the first `N` [aggregation intervals](#stream-aggregation-config)
just after the restart of `vmagent` or single-node VictoriaMetrics. This can be done via the following options:
- The `-streamAggr.ignoreFirstIntervals=N` command-line flag at `vmagent` and single-node VictoriaMetrics. This flag skips the first `N`
  [aggregation intervals](#stream-aggregation-config) just after the restart across all the [configured stream aggregation configs](#configuration).
The `-remoteWrite.streamAggr.ignoreFirstIntervals` command-line flag can be specified individually per each `-remoteWrite.url` at [vmagent](https://docs.victoriametrics.com/vmagent/).
- The `ignore_first_intervals: N` option at the particular [aggregation config](#stream-aggregation-config).
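For example, a sketch of the per-config option (the metric name is illustrative):

```yaml
# Hypothetical config: drop the aggregated data for the first two
# 1-minute intervals after restart or config reload.
- match: 'http_requests_total'
  interval: 1m
  ignore_first_intervals: 2
  outputs: [total]
```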
See also:
- [Flush time alignment](#flush-time-alignment)
- [Ignoring old samples](#ignoring-old-samples)
## Flush time alignment
By default, the time for aggregated data flush is aligned by the `interval` option specified in [aggregate config](#stream-aggregation-config).
For example:
- if `interval: 1m` is set, then the aggregated data is flushed to the storage at the end of every minute
- if `interval: 1h` is set, then the aggregated data is flushed to the storage at the end of every hour
If you do not need such an alignment, then set `no_align_flush_to_interval: true` option in the [aggregate config](#stream-aggregation-config).
In this case aggregated data flushes will be aligned to the `vmagent` start time or to [config reload](#configuration-update) time.
The aggregated data on the first and the last interval is dropped during `vmagent` start, restart or [config reload](#configuration-update),
since the first and the last aggregation intervals are incomplete, so they usually contain incomplete, confusing data.
If you need to preserve the aggregated data on these intervals, then set the `flush_on_shutdown: true` option in the [aggregate config](#stream-aggregation-config).
See also:
- [Ignore aggregation intervals on start](#ignore-aggregation-intervals-on-start)
- [Ignoring old samples](#ignoring-old-samples)
# Use cases
Stream aggregation can be used in the following cases:
@ -167,7 +27,7 @@ Stream aggregation can be used in the following cases:
* [Reducing the number of stored samples](#reducing-the-number-of-stored-samples)
* [Reducing the number of stored series](#reducing-the-number-of-stored-series)
## Statsd alternative
Stream aggregation can be used as a [statsd](https://github.com/statsd/statsd) alternative in the following cases:
@ -180,7 +40,7 @@ Stream aggregation can be used as [statsd](https://github.com/statsd/statsd) alt
Currently, streaming aggregation is available only for [supported data ingestion protocols](https://docs.victoriametrics.com/#how-to-import-time-series-data)
and not available for [Statsd metrics format](https://github.com/statsd/statsd/blob/master/docs/metric_types.md).
## Recording rules alternative
Sometimes [alerting queries](https://docs.victoriametrics.com/vmalert/#alerting-rules) may require non-trivial amounts of CPU, RAM,
disk IO and network bandwidth at the metrics storage side. For example, if `http_request_duration_seconds` histogram is generated by thousands
@ -212,8 +72,7 @@ See also [aggregating by labels](#aggregating-by-labels).
It is recommended to set the `interval` field to a value at least several times higher than your metrics collection interval.
## Reducing the number of stored samples
If per-[series](https://docs.victoriametrics.com/keyconcepts/#time-series) samples are ingested at high frequency,
then this may result in high disk space usage, since too much data must be stored to disk. This also may result
@ -254,7 +113,7 @@ some_metric:5m_max
See [the list of aggregate output](#aggregation-outputs), which can be specified at `output` field.
See also [aggregating histograms](#aggregating-histograms) and [aggregating by labels](#aggregating-by-labels).
## Reducing the number of stored series
Sometimes applications may generate too many [time series](https://docs.victoriametrics.com/keyconcepts/#time-series).
For example, the `http_requests_total` metric may have a `path` or `user` label with a very large number of unique values.
@ -279,7 +138,7 @@ http_requests_total:30s_without_path_user_total
See [the list of aggregate output](#aggregation-outputs), which can be specified at `output` field.
See also [aggregating histograms](#aggregating-histograms).
## Counting input samples
If the monitored application generates event-based metrics, then it may be useful to count the number of such metrics
at the stream aggregation level.
@ -305,8 +164,7 @@ clicks:30s_count_samples count2
See [the list of aggregate output](#aggregation-outputs), which can be specified at `output` field.
See also [aggregating by labels](#aggregating-by-labels).
## Summing input metrics
If the monitored application counts some events and then sends the calculated number of events to VictoriaMetrics
at irregular intervals or at too high a frequency, then stream aggregation can be used for summing such events
@ -332,8 +190,7 @@ clicks:1m_sum_samples sum2
See [the list of aggregate output](#aggregation-outputs), which can be specified at `output` field.
See also [aggregating by labels](#aggregating-by-labels).
## Quantiles over input metrics
If the monitored application generates measurement metrics per each request, then it may be useful to calculate
a pre-defined set of [percentiles](https://en.wikipedia.org/wiki/Percentile) over these measurements.
@ -363,7 +220,7 @@ response_size_bytes:30s_quantiles{quantile="0.99"} value2
See [the list of aggregate output](#aggregation-outputs), which can be specified at `output` field.
See also [histograms over input metrics](#histograms-over-input-metrics) and [aggregating by labels](#aggregating-by-labels).
## Histograms over input metrics
If the monitored application generates measurement metrics per each request, then it may be useful to calculate
a [histogram](https://docs.victoriametrics.com/keyconcepts/#histogram) over these metrics.
@ -424,7 +281,7 @@ The resulting histogram buckets can be queried with [MetricsQL](https://docs.vic
See [the list of aggregate output](#aggregation-outputs), which can be specified at `output` field.
See also [quantiles over input metrics](#quantiles-over-input-metrics) and [aggregating by labels](#aggregating-by-labels).
## Aggregating histograms
[Histogram](https://docs.victoriametrics.com/keyconcepts/#histogram) is a set of [counter](https://docs.victoriametrics.com/keyconcepts/#counter)
metrics with different `vmrange` or `le` labels. As they're counters, the applicable aggregation output is
@ -462,81 +319,34 @@ have no such requirement.
See [the list of aggregate output](#aggregation-outputs), which can be specified at `output` field.
See also [histograms over input metrics](#histograms-over-input-metrics) and [quantiles over input metrics](#quantiles-over-input-metrics).
# Configuration
Stream aggregation can be configured via the following command-line flags:
- `-streamAggr.config` at [single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/)
  and at [vmagent](https://docs.victoriametrics.com/vmagent/).
- `-remoteWrite.streamAggr.config` at [vmagent](https://docs.victoriametrics.com/vmagent/) only. This flag can be specified individually
  per each `-remoteWrite.url`, so the aggregation happens independently per each remote storage destination.
  This allows writing different aggregates to different remote storage systems.
These flags must point to a file containing [stream aggregation config](#stream-aggregation-config).
The file may contain `%{ENV_VAR}` placeholders which are substituted by the corresponding `ENV_VAR` environment variable values.
By default, the following data is written to the storage when stream aggregation is enabled:
- the aggregated samples;
- the raw input samples, which didn't match any `match` option in the provided [config](#stream-aggregation-config).
This behaviour can be changed via the following command-line flags:
- `-streamAggr.keepInput` at [single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/)
  and [vmagent](https://docs.victoriametrics.com/vmagent/). At [vmagent](https://docs.victoriametrics.com/vmagent/)
  `-remoteWrite.streamAggr.keepInput` flag can be specified individually per each `-remoteWrite.url`.
  If one of these flags is set, then all the input samples are written to the storage alongside the aggregated samples.
- `-streamAggr.dropInput` at [single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/)
  and [vmagent](https://docs.victoriametrics.com/vmagent/). At [vmagent](https://docs.victoriametrics.com/vmagent/)
  `-remoteWrite.streamAggr.dropInput` flag can be specified individually per each `-remoteWrite.url`.
  If one of these flags is set, then all the input samples are dropped, while only the aggregated samples are written to the storage.
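For a quick orientation, here is a minimal sketch of such a config file; the file name and the metric name are hypothetical, and the available fields are described in [stream aggregation config](#stream-aggregation-config):
```yaml
# aggr.yaml - a hypothetical file referenced via -streamAggr.config=aggr.yaml
# or via -remoteWrite.streamAggr.config=aggr.yaml
- match: http_requests_total  # aggregate only the series matching this selector
  interval: 1m                # flush the aggregated samples once per minute
  outputs: [total]            # see the list of outputs at "Aggregation outputs"
```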
## Aggregation outputs
@ -623,7 +433,7 @@ See also:
### histogram_bucket
`histogram_bucket` returns [VictoriaMetrics histogram buckets](https://valyala.medium.com/improving-histogram-usability-for-prometheus-and-grafana-bc7e5df0e350)
for the input [sample values](https://docs.victoriametrics.com/keyconcepts/#raw-samples) over the given `interval`.
`histogram_bucket` makes sense only for aggregating [gauges](https://docs.victoriametrics.com/keyconcepts/#gauge).
See how to aggregate regular histograms [here](#aggregating-histograms).
@ -958,47 +768,6 @@ See also:
- [max](#max)
- [min](#min)
## Aggregating by labels
All the labels for the input metrics are preserved by default in the output metrics. For example,
the input metric `foo{app="bar",instance="host1"}` results to the output metric `foo:1m_sum_samples{app="bar",instance="host1"}`
when the following [stream aggregation config](#stream-aggregation-config) is used:
```yaml
- interval: 1m
outputs: [sum_samples]
```
The input labels can be removed via `without` list specified in the config. For example, the following config
removes the `instance` label from output metrics by summing input samples across all the instances:
```yaml
- interval: 1m
without: [instance]
outputs: [sum_samples]
```
In this case the `foo{app="bar",instance="..."}` input metrics are transformed into `foo:1m_without_instance_sum_samples{app="bar"}`
output metric according to [output metric naming](#output-metric-names).
It is possible specifying the exact list of labels in the output metrics via `by` list.
For example, the following config sums input samples by the `app` label:
```yaml
- interval: 1m
by: [app]
outputs: [sum_samples]
```
In this case the `foo{app="bar",instance="..."}` input metrics are transformed into `foo:1m_by_app_sum_samples{app="bar"}`
output metric according to [output metric naming](#output-metric-names).
The labels used in `by` and `without` lists can be modified via `input_relabel_configs` section - see [these docs](#relabeling).
See also [aggregation outputs](#aggregation-outputs).
## Stream aggregation config
Below is the format for stream aggregation config file, which may be referred via `-streamAggr.config` command-line flag at
@ -1141,15 +910,241 @@ support the following approaches for hot reloading stream aggregation configs fr
* By sending HTTP request to `/-/reload` endpoint (e.g. `http://vmagent:8429/-/reload` or `http://victoria-metrics:8428/-/reload`).
# Routing
[Single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/) supports relabeling,
deduplication and stream aggregation for all the received data, scraped or pushed.
The processed data is then stored in local storage and **can't be forwarded further**.
[vmagent](https://docs.victoriametrics.com/vmagent/) supports relabeling, deduplication and stream aggregation for all
the received data, scraped or pushed. Then, the collected data is forwarded to the specified `-remoteWrite.url` destinations.
The data processing order is the following:
1. all the received data is relabeled according to the specified [`-remoteWrite.relabelConfig`](https://docs.victoriametrics.com/vmagent/#relabeling) (if it is set)
1. all the received data is deduplicated according to specified [`-streamAggr.dedupInterval`](https://docs.victoriametrics.com/stream-aggregation/#deduplication)
(if it is set to duration bigger than 0)
1. all the received data is aggregated according to specified [`-streamAggr.config`](https://docs.victoriametrics.com/stream-aggregation/#configuration) (if it is set)
1. the resulting data is then replicated to each `-remoteWrite.url`
1. data sent to each `-remoteWrite.url` can be additionally relabeled according to the corresponding `-remoteWrite.urlRelabelConfig` (set individually per URL)
1. data sent to each `-remoteWrite.url` can be additionally deduplicated according to the corresponding `-remoteWrite.streamAggr.dedupInterval` (set individually per URL)
1. data sent to each `-remoteWrite.url` can be additionally aggregated according to the corresponding `-remoteWrite.streamAggr.config` (set individually per URL)
It isn't recommended to use `-streamAggr.config` and `-remoteWrite.streamAggr.config` simultaneously, unless you understand the complications.
Typical scenarios for data routing with `vmagent`:
1. **Aggregate incoming data and replicate to N destinations**. Specify [`-streamAggr.config`](https://docs.victoriametrics.com/stream-aggregation/#configuration) command-line flag
to aggregate the incoming data before replicating it to all the configured `-remoteWrite.url` destinations.
2. **Individually aggregate incoming data for each destination**. Specify [`-remoteWrite.streamAggr.config`](https://docs.victoriametrics.com/stream-aggregation/#configuration)
command-line flag for each `-remoteWrite.url` destination. [Relabeling](https://docs.victoriametrics.com/vmagent/#relabeling) via `-remoteWrite.urlRelabelConfig`
can be used for routing only the selected metrics to each `-remoteWrite.url` destination.
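As an illustration of the second scenario, here is a sketch of a relabeling config referenced via `-remoteWrite.urlRelabelConfig`, which keeps only the metrics intended for the given destination; the file name and the metric name pattern are hypothetical:
```yaml
# route-http-metrics.yaml - keep only http_* metrics for this -remoteWrite.url
- action: keep
  source_labels: [__name__]
  regex: "http_.*"
```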
# Deduplication
[vmagent](https://docs.victoriametrics.com/vmagent/) supports online [de-duplication](https://docs.victoriametrics.com/#deduplication) of samples
before sending them to the configured `-remoteWrite.url`. The de-duplication can be enabled via the following options:
- By specifying the desired de-duplication interval via `-streamAggr.dedupInterval` command-line flag for all received data
or via `-remoteWrite.streamAggr.dedupInterval` command-line flag for the particular `-remoteWrite.url` destination.
For example, `./vmagent -remoteWrite.url=http://remote-storage/api/v1/write -remoteWrite.streamAggr.dedupInterval=30s` instructs `vmagent` to leave
only the last sample per each seen [time series](https://docs.victoriametrics.com/keyconcepts/#time-series) per every 30 seconds.
The de-duplication is performed after applying [relabeling](https://docs.victoriametrics.com/vmagent/#relabeling) and
before performing the aggregation.
- By specifying `dedup_interval` option individually per each [stream aggregation config](#stream-aggregation-config)
in `-remoteWrite.streamAggr.config` or `-streamAggr.config` configs.
[Single-node VictoriaMetrics](https://docs.victoriametrics.com/single-server-victoriametrics/) supports two types of de-duplication:
- After storing the duplicate samples to local storage. See [`-dedup.minScrapeInterval`](https://docs.victoriametrics.com/#deduplication) command-line option.
- Before storing the duplicate samples to local storage. This type of de-duplication can be enabled via the following options:
- By specifying the desired de-duplication interval via `-streamAggr.dedupInterval` command-line flag.
For example, `./victoria-metrics -streamAggr.dedupInterval=30s` instructs VictoriaMetrics to leave only the last sample per each
seen [time series](https://docs.victoriametrics.com/keyconcepts/#time-series) per every 30 seconds.
The de-duplication is performed after applying `-relabelConfig` [relabeling](https://docs.victoriametrics.com/#relabeling).
- By specifying `dedup_interval` option individually per each [stream aggregation config](#stream-aggregation-config) at `-streamAggr.config`.
It is possible to drop the given labels before applying the de-duplication. See [these docs](#dropping-unneeded-labels).
The online de-duplication uses the same logic as [`-dedup.minScrapeInterval` command-line flag](https://docs.victoriametrics.com/#deduplication) at VictoriaMetrics.
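For example, here is a sketch of the per-config `dedup_interval` option; the metric name is hypothetical:
```yaml
- match: node_cpu_seconds_total  # hypothetical input metric
  interval: 1m
  dedup_interval: 30s  # keep only the last sample per series every 30 seconds before aggregation
  outputs: [avg]
```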
# Relabeling
It is possible to apply [arbitrary relabeling](https://docs.victoriametrics.com/vmagent/#relabeling) to input and output metrics
during stream aggregation via `input_relabel_configs` and `output_relabel_configs` options in [stream aggregation config](#stream-aggregation-config).
Relabeling rules inside `input_relabel_configs` are applied to samples matching the `match` filters before optional [deduplication](#deduplication).
Relabeling rules inside `output_relabel_configs` are applied to aggregated samples before sending them to the remote storage.
For example, the following config removes the `:1m_sum_samples` suffix added [to the output metric name](#output-metric-names):
```yaml
- interval: 1m
outputs: [sum_samples]
output_relabel_configs:
- source_labels: [__name__]
target_label: __name__
regex: "(.+):.+"
```
Another option to remove the suffix, which is added by stream aggregation, is to add `keep_metric_names: true` to the config:
```yaml
- interval: 1m
outputs: [sum_samples]
keep_metric_names: true
```
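Rules inside `input_relabel_configs` use the same format. For example, the following sketch drops samples with a hypothetical `env` label matching `qa` or `staging` before they reach de-duplication and aggregation:
```yaml
- interval: 1m
  outputs: [sum_samples]
  input_relabel_configs:
  - source_labels: [env]  # hypothetical input label
    regex: "qa|staging"
    action: drop  # drop the matching samples before de-duplication and aggregation
```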
See also [dropping unneeded labels](#dropping-unneeded-labels).
# Advanced usage
## Ignoring old samples
By default, all the input samples are taken into account during stream aggregation. If samples with old timestamps
outside the current [aggregation interval](#stream-aggregation-config) must be ignored, then the following options can be used:
- To pass `-streamAggr.ignoreOldSamples` command-line flag to [single-node VictoriaMetrics](https://docs.victoriametrics.com/)
or to [vmagent](https://docs.victoriametrics.com/vmagent/). At [vmagent](https://docs.victoriametrics.com/vmagent/)
`-remoteWrite.streamAggr.ignoreOldSamples` flag can be specified individually per each `-remoteWrite.url`.
This enables ignoring old samples for all the [aggregation configs](#stream-aggregation-config).
- To set `ignore_old_samples: true` option at the particular [aggregation config](#stream-aggregation-config).
This enables ignoring old samples for that particular aggregation config.
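For example, here is a sketch of the per-config option; the metric name is hypothetical:
```yaml
- match: request_duration_seconds  # hypothetical input metric
  interval: 1m
  ignore_old_samples: true  # skip samples with timestamps outside the current aggregation interval
  outputs: [quantiles(0.95)]
```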
## Ignore aggregation intervals on start
Streaming aggregation results may be incorrect for some time after the restart of [vmagent](https://docs.victoriametrics.com/vmagent/)
or [single-node VictoriaMetrics](https://docs.victoriametrics.com/) until all the buffered [samples](https://docs.victoriametrics.com/keyconcepts/#raw-samples)
are sent from remote sources to the `vmagent` or single-node VictoriaMetrics via [supported data ingestion protocols](https://docs.victoriametrics.com/vmagent/#how-to-push-data-to-vmagent).
In this case it may be a good idea to drop the aggregated data during the first `N` [aggregation intervals](#stream-aggregation-config)
just after the restart of `vmagent` or single-node VictoriaMetrics. This can be done via the following options:
- The `-streamAggr.ignoreFirstIntervals=N` command-line flag at `vmagent` and single-node VictoriaMetrics. This flag skips the first `N`
  [aggregation intervals](#stream-aggregation-config) just after the restart across all the [configured stream aggregation configs](#configuration).
The `-remoteWrite.streamAggr.ignoreFirstIntervals` command-line flag can be specified individually per each `-remoteWrite.url` at [vmagent](https://docs.victoriametrics.com/vmagent/).
- The `ignore_first_intervals: N` option at the particular [aggregation config](#stream-aggregation-config).
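For example, the following sketch drops the aggregated data for the first two aggregation intervals after restart:
```yaml
- interval: 1m
  ignore_first_intervals: 2  # drop the aggregated data for the first 2 intervals after restart
  outputs: [total]
```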
See also:
- [Flush time alignment](#flush-time-alignment)
- [Ignoring old samples](#ignoring-old-samples)
## Flush time alignment
By default, the time for aggregated data flush is aligned by the `interval` option specified in [aggregate config](#stream-aggregation-config).
For example:
- if `interval: 1m` is set, then the aggregated data is flushed to the storage at the end of every minute
- if `interval: 1h` is set, then the aggregated data is flushed to the storage at the end of every hour
If you do not need such an alignment, then set the `no_align_flush_to_interval: true` option in the [aggregate config](#stream-aggregation-config).
In this case, aggregated data flushes are aligned to the `vmagent` start time or to the [config reload](#configuration-update) time.
The aggregated data on the first and the last interval is dropped during `vmagent` start, restart or [config reload](#configuration-update),
since the first and the last aggregation intervals are incomplete and usually contain partial, confusing data.
If you need to preserve the aggregated data on these intervals, then set the `flush_on_shutdown: true` option in the [aggregate config](#stream-aggregation-config).
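For example, the following sketch disables the flush alignment and preserves the incomplete last interval on shutdown:
```yaml
- interval: 1h
  no_align_flush_to_interval: true  # align flushes to start/reload time instead of wall-clock hours
  flush_on_shutdown: true           # flush the incomplete last interval on shutdown instead of dropping it
  outputs: [max]
```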
See also:
- [Ignore aggregation intervals on start](#ignore-aggregation-intervals-on-start)
- [Ignoring old samples](#ignoring-old-samples)
## Output metric names
Output metric names for stream aggregation are constructed according to the following pattern:
```text
<metric_name>:<interval>[_by_<by_labels>][_without_<without_labels>]_<output>
```
- `<metric_name>` is the original metric name.
- `<interval>` is the interval specified in the [stream aggregation config](#stream-aggregation-config).
- `<by_labels>` is a `_`-delimited sorted list of `by` labels specified in the [stream aggregation config](#stream-aggregation-config).
If the `by` list is missing in the config, then the `_by_<by_labels>` part isn't included in the output metric name.
- `<without_labels>` is an optional `_`-delimited sorted list of `without` labels specified in the [stream aggregation config](#stream-aggregation-config).
If the `without` list is missing in the config, then the `_without_<without_labels>` part isn't included in the output metric name.
- `<output>` is the aggregate used for constructing the output metric. The aggregate name is taken from the `outputs` list
at the corresponding [stream aggregation config](#stream-aggregation-config).
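For example, the following sketch illustrates the pattern for a hypothetical input metric `process_cpu_seconds_total`:
```yaml
# Input series:  process_cpu_seconds_total{instance="host1",job="node"}
# Output series: process_cpu_seconds_total:5m_without_instance_total{job="node"}
- interval: 5m
  without: [instance]
  outputs: [total]
```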
Both input and output metric names can be modified if needed via relabeling according to [these docs](#relabeling).
It is possible to leave the original metric name after the aggregation by specifying `keep_metric_names: true` option at [stream aggregation config](#stream-aggregation-config).
The `keep_metric_names` option can be used if only a single output is set in [`outputs` list](#aggregation-outputs).
## Aggregating by labels
All the labels for the input metrics are preserved by default in the output metrics. For example,
the input metric `foo{app="bar",instance="host1"}` results in the output metric `foo:1m_sum_samples{app="bar",instance="host1"}`
when the following [stream aggregation config](#stream-aggregation-config) is used:
```yaml
- interval: 1m
outputs: [sum_samples]
```
The input labels can be removed via `without` list specified in the config. For example, the following config
removes the `instance` label from output metrics by summing input samples across all the instances:
```yaml
- interval: 1m
without: [instance]
outputs: [sum_samples]
```
In this case the `foo{app="bar",instance="..."}` input metrics are transformed into `foo:1m_without_instance_sum_samples{app="bar"}`
output metric according to [output metric naming](#output-metric-names).
It is possible to specify the exact list of labels in the output metrics via the `by` list.
For example, the following config sums input samples by the `app` label:
```yaml
- interval: 1m
by: [app]
outputs: [sum_samples]
```
In this case the `foo{app="bar",instance="..."}` input metrics are transformed into `foo:1m_by_app_sum_samples{app="bar"}`
output metric according to [output metric naming](#output-metric-names).
The labels used in `by` and `without` lists can be modified via the `input_relabel_configs` section; see [these docs](#relabeling).
See also [aggregation outputs](#aggregation-outputs).
## Dropping unneeded labels
If you need to drop some labels from input samples before [input relabeling](#relabeling), [de-duplication](#deduplication)
and [stream aggregation](#aggregation-outputs), then the following options exist:
- To specify a comma-separated list of label names to drop via the `-streamAggr.dropInputLabels` command-line flag
or via `-remoteWrite.streamAggr.dropInputLabels` individually per each `-remoteWrite.url`.
For example, `-streamAggr.dropInputLabels=replica,az` instructs to drop `replica` and `az` labels from input samples
before applying de-duplication and stream aggregation.
- To specify `drop_input_labels` list with the labels to drop in [stream aggregation config](#stream-aggregation-config).
For example, the following config drops `replica` label from input samples with the name `process_resident_memory_bytes`
before calculating the average over one minute:
```yaml
- match: process_resident_memory_bytes
interval: 1m
drop_input_labels: [replica]
outputs: [avg]
keep_metric_names: true
```
A typical use case is dropping the `replica` label from samples received from high-availability replicas.
# Troubleshooting
- [Unexpected spikes for `total` or `increase` outputs](#staleness).
- [Lower than expected values for `total_prometheus` and `increase_prometheus` outputs](#staleness).
- [High memory usage and CPU usage](#high-resource-usage).
- [Unexpected results in vmagent cluster mode](#cluster-mode).
## Staleness
The following outputs track the last seen per-series values in order to properly calculate output values:
@ -1177,7 +1172,7 @@ These issues can be fixed in the following ways:
- By specifying the `staleness_interval` option at [stream aggregation config](#stream-aggregation-config), so it covers the expected
delays in data ingestion pipelines. By default, the `staleness_interval` equals `2 x interval`.
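For example, the following sketch extends the staleness window, so it tolerates up to 10 minutes of ingestion delay:
```yaml
- interval: 1m
  staleness_interval: 10m  # wait up to 10 minutes before considering a series stale
  outputs: [increase]
```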
## High resource usage
The following solutions can help reduce memory and CPU usage during streaming aggregation:
@ -1187,7 +1182,7 @@ The following solutions can help reducing memory usage and CPU usage during stre
- To generate a lower number of output time series by using a less specific [`by` list](#aggregating-by-labels) or a more specific [`without` list](#aggregating-by-labels).
- To drop unneeded long labels in input samples via [input_relabel_configs](#relabeling).
## Cluster mode
If you use [vmagent in cluster mode](https://docs.victoriametrics.com/vmagent/#scraping-big-number-of-targets) for streaming aggregation
then be careful when using [`by` or `without` options](#aggregating-by-labels) or when modifying sample labels
@ -1195,10 +1190,75 @@ via [relabeling](#relabeling), since incorrect usage may result in duplicates an
For example, if more than one `vmagent` instance calculates [increase](#increase) for `http_requests_total` metric
with `by: [path]` option, then all the `vmagent` instances will aggregate samples to the same set of time series with different `path` labels.
The proper fix would be [adding a unique label](https://docs.victoriametrics.com/vmagent/#adding-labels-to-metrics) for all the output samples
produced by each `vmagent`, so they are aggregated into distinct sets of [time series](https://docs.victoriametrics.com/keyconcepts/#time-series).
These time series then can be aggregated later as needed during querying.
If `vmagent` instances run in Docker or Kubernetes, then you can refer `POD_NAME` or `HOSTNAME` environment variables
as a unique label value per each `vmagent` via `-remoteWrite.label=vmagent=%{HOSTNAME}` command-line flag.
See [these docs](https://docs.victoriametrics.com/#environment-variables) on how to refer environment variables in VictoriaMetrics components.
## Common mistakes
### Put aggregator behind load balancer
When configuring an aggregation rule, make sure that `vmagent` receives all the data required to satisfy the `match` filters.
If traffic to the vmagent goes through a load balancer, the vmagent may receive only a fraction of the data and produce incomplete aggregations.
If you need to split the load across multiple vmagents, shard the traffic among them by metric names or labels.
For example, see how vmagent can consistently [shard data across remote write destinations](https://docs.victoriametrics.com/vmagent/#sharding-among-remote-storages)
via `-remoteWrite.shardByURL.labels` or `-remoteWrite.shardByURL.ignoreLabels` command-line flags.
### Create aggregator per each recording rule
Stream aggregation can be used as an alternative to [recording rules](#recording-rules-alternative).
But creating an aggregation rule per each recording rule can lead to elevated resource usage on the vmagent,
because the ingestion stream must be matched against every configured aggregation rule.
To optimize this, we recommend merging aggregations that differ only in their `match` expressions.
For example, consider the following list of recording rules:
```yaml
- expr: sum(rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal"}[3m])) BY (instance)
record: instance:node_cpu:rate:sum
- expr: sum(rate(node_network_receive_bytes_total[3m])) BY (instance)
record: instance:node_network_receive_bytes:rate:sum
- expr: sum(rate(node_network_transmit_bytes_total[3m])) BY (instance)
record: instance:node_network_transmit_bytes:rate:sum
```
These rules can be effectively converted into a single aggregation rule:
```yaml
- match:
- node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal"}
- node_network_receive_bytes_total
- node_network_transmit_bytes_total
interval: 3m
outputs: [rate_sum]
by:
- instance
output_relabel_configs:
- source_labels: [__name__]
target_label: __name__
regex: "(.+):.+"
replacement: "instance:$1:rate:sum"
```
**Note**: having a separate aggregator for a certain `match` expression is justified only when a single aggregator cannot keep up
with all the data pushed to it within the aggregation interval.
### Use identical `-remoteWrite.streamAggr.config` for all remote writes
Each specified `-remoteWrite.streamAggr.config` aggregation config is processed independently on its own copy of the data stream.
So if you want to aggregate the incoming data and replicate it across multiple destinations, it is more efficient
to use a global `-streamAggr.config` instead. This way, vmagent performs the aggregation only once and then replicates the result
across all the `-remoteWrite.url` destinations.
### Use aggregated metrics like original ones
Stream aggregation allows keeping the original metric names after aggregation via the `keep_metric_names` setting.
But the "meaning" of aggregated metrics usually differs from the original ones after the aggregation.
If you use the `keep_metric_names` setting, make sure to update the queries in your alerting rules and dashboards accordingly.


View file

@ -165,7 +165,7 @@ You can use the following API endpoints for the automation with rules:
* POST: `/api/v1/deployments/{deploymentId}/rule-sets/files/{fileName}` - create/update rules file
* DELETE `/api/v1/deployments/{deploymentId}/rule-sets/files/{fileName}` - delete rules file
For more details, please check [OpenAPI Reference](https://console.victoriametrics.cloud/api-docs).
### Example of alerting rules

View file

@ -0,0 +1,50 @@
---
weight: 8
title: VictoriaMetrics Cloud API Documentation
menu:
docs:
parent: "cloud"
weight: 8
name: API
---
VictoriaMetrics Cloud provides programmatic access for managing cloud resources and is useful for automation with tools like Terraform or OpenTofu, Infrastructure as Code workflows, GitOps frameworks, etc.
## Key Concepts
* **API Keys**: Used to manage VictoriaMetrics Cloud resources via API.
**Note: Access Tokens** are used for reading and writing data to deployments. They are separate from API Keys and should not be confused. API Keys are specifically for managing resources via the API, while Access Tokens handle data access for deployments.
## API Swagger/OpenAPI Reference: [https://console.victoriametrics.cloud/api-docs](https://console.victoriametrics.cloud/api-docs)
## API Key Properties:
* **Name**: Human-readable, for team context.
* **Lifetime**: Key expiration date (no expiration is an option).
* **Permissions**: Read-only or Read/Write access.
* **Deployment Access**: Limit access to single, multiple, or all deployments. ***Note**: selecting all deployments in the list and the “All” option are not the same thing: the “All” option will also apply to future deployments that are created.*
* **Key** or **Key Value**: Programmatically generated identifier. It is sensitive data used for authentication. Any operation with API keys (including viewing/revealing the Key Value) will be recorded in the [Audit Log](https://docs.victoriametrics.com/victoriametrics-cloud/audit-logs/).
![Create API Key](api_keys.webp)
## Authentication:
* **API Key Creation**: Required for using the VictoriaMetrics Cloud API. You need to issue the key manually [here](https://console.victoriametrics.cloud/api_keys).
* **HTTP Header**:
* **Header Name**: `X-VM-Cloud-Access`
* **Header Value**: `<Key-Value>`
## Alerting & Recording Rules API:
* **List Files**: [API reference](https://console.victoriametrics.cloud/api-docs)
* **View File**: [API reference](https://console.victoriametrics.cloud/api-docs)
* **Upload File**: [API reference](https://console.victoriametrics.cloud/api-docs)
* **Delete File**: [API reference](https://console.victoriametrics.cloud/api-docs)
For detailed setup instructions, check the [VictoriaMetrics Cloud - AlertManager Setup Guide](https://docs.victoriametrics.com/victoriametrics-cloud/alertmanager-setup-for-deployment/).
## Future API Features:
* **Deployments**: Create, Delete, Update, List, Get.
* **Access Token**: Create, Delete, List, Get/Reveal.
* **AlertManager**: Get Config, Upsert Config.

Binary file not shown.

After

Width:  |  Height:  |  Size: 171 KiB

View file

@ -0,0 +1,36 @@
---
weight: 9
title: VictoriaMetrics Cloud Audit Logs
menu:
docs:
parent: "cloud"
weight: 8
name: Audit Logs
---
An [**audit log**](https://console.victoriametrics.cloud/audit) is a record of user and system activities within an organization. It captures details of who performed an action, what was done, and when it occurred. Audit logs are essential for security, compliance, and troubleshooting processes.
## Cloud Audit Log Scopes
VictoriaMetrics Cloud provides two scopes for audit logs:
1. **Organization-Level Audit Logs**
These logs record all activities at the organization level, such as user logins, token reveals, updates to payment information, and deployments being created or destroyed.
2. **Deployment-Level Audit Logs**
These logs record activities related to a specific deployment only, such as changes to deployment parameters, creating or deleting access tokens, and modifying alerting or recording rules.
## Example Log Entry
* **Time**: 2024-10-05 15:40 UTC
* **Email**: cloud-admin@victoriametrics.com
* **Action**: cluster updated: production-platform, changed properties: vmstorage settings changed: disk size changed from 50.0TB to 80.0TB
## Filtering
The audit log page offers filtering options, allowing you to filter logs by time range or actor, or to perform a full-text search by action.
## Export to CSV
The Export to CSV button on the audit log page allows you to export the entire audit log as a CSV file.
Filtering does not affect the export; you will always receive the entire audit log in the exported file.


Some files were not shown because too many files have changed in this diff