vmselect/promql: enable search.maxPointsSubqueryPerTimeseries for sub-queries (#2963)

* vmselect/promql: enable search.maxPointsPerTimeSeriesSubquery for sub-queries

* vmselect/promql: cleanup

* vmselect/promql: rename config flag

* vmselect/promql: add tests

* vmselect/promql: use test object instead of log

* vmselect/promql: fix possible panic if subquery has more points; add description

* vmselect/promql: update tests descriptions

* vmselect/promql: update doInternal validation

* vmselect/promql: fix linter

* vmselect/promql: fix linter

* vmselect/promql: update documentation and release notes

* wip

- Properly apply -search.maxPointsSubqueryPerTimeseries limit to subqueries.
  Previously the -search.maxPointsPerTimeseries limit was unexpectedly applied to subqueries
  if it was smaller than the -search.maxPointsSubqueryPerTimeseries limit.
- Clarify docs for the -search.maxPointsSubqueryPerTimeseries command-line flag.
- Document the -search.maxPointsPerTimeseries and -search.maxPointsSubqueryPerTimeseries flags at https://docs.victoriametrics.com/#resource-usage-limits.
- Update docs/CHANGELOG.md.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2922

Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Dmytro Kozlov 2022-08-24 15:25:18 +03:00 committed by GitHub
parent 3d12ee47f9
commit 463ea6897b
12 changed files with 417 additions and 307 deletions
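
For orientation before the diffs: both limits guard the same point-count arithmetic. Below is a minimal, self-contained sketch (not VictoriaMetrics code) that mirrors the `points := (end-start)/step + 1` formula from `ValidateMaxPointsPerSeries` in the diff; the sample window and step are illustrative assumptions.

package main

import "fmt"

// pointsPerSeries mirrors the arithmetic used by ValidateMaxPointsPerSeries:
// the number of points generated on [start..end] with the given step
// (all values in milliseconds, as in the code below).
func pointsPerSeries(start, end, step int64) int64 {
	return (end-start)/step + 1
}

func main() {
	// A 1h subquery window evaluated at a 10s step generates 361 points,
	// well under the -search.maxPointsSubqueryPerTimeseries default of 100000.
	fmt.Println(pointsPerSeries(0, 3600*1000, 10*1000)) // 361
}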


@ -1213,6 +1213,8 @@ By default VictoriaMetrics is tuned for an optimal resource usage under typical
- `-search.maxConcurrentRequests` limits the number of concurrent requests VictoriaMetrics can process. Bigger number of concurrent requests usually means bigger memory usage. For example, if a single query needs 100 MiB of additional memory during its execution, then 100 concurrent queries may need `100 * 100 MiB = 10 GiB` of additional memory. So it is better to limit the number of concurrent queries, while suspending additional incoming queries if the concurrency limit is reached. VictoriaMetrics provides `-search.maxQueueDuration` command-line flag for limiting the max wait time for suspended queries.
- `-search.maxSamplesPerSeries` limits the number of raw samples the query can process per each time series. VictoriaMetrics sequentially processes raw samples per each found time series during the query. It unpacks raw samples on the selected time range per each time series into memory and then applies the given [rollup function](https://docs.victoriametrics.com/MetricsQL.html#rollup-functions). The `-search.maxSamplesPerSeries` command-line flag allows limiting memory usage in the case when the query is executed on a time range, which contains hundreds of millions of raw samples per each located time series.
- `-search.maxSamplesPerQuery` limits the number of raw samples a single query can process. This allows limiting CPU usage for heavy queries.
- `-search.maxPointsPerTimeseries` limits the number of calculated points, which can be returned per each matching time series from [range query](https://docs.victoriametrics.com/keyConcepts.html#range-query).
- `-search.maxPointsSubqueryPerTimeseries` limits the number of calculated points, which can be generated per each matching time series during [subquery](https://docs.victoriametrics.com/MetricsQL.html#subqueries) evaluation. A short sketch contrasting the two point limits follows this list.
- `-search.maxSeries` limits the number of time series, which may be returned from [/api/v1/series](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers). This endpoint is used mostly by Grafana for auto-completion of metric names, label names and label values. Queries to this endpoint may take big amounts of CPU time and memory when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxSeries` to quite low value in order to limit CPU and memory usage.
- `-search.maxTagKeys` limits the number of items, which may be returned from [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names). This endpoint is used mostly by Grafana for auto-completion of label names. Queries to this endpoint may take big amounts of CPU time and memory when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxTagKeys` to quite low value in order to limit CPU and memory usage.
- `-search.maxTagValues` limits the number of items, which may be returned from [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values). This endpoint is used mostly by Grafana for auto-completion of label values. Queries to this endpoint may take big amounts of CPU time and memory when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxTagValues` to quite low value in order to limit CPU and memory usage.
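
To make the difference between `-search.maxPointsPerTimeseries` and `-search.maxPointsSubqueryPerTimeseries` concrete, here is a minimal sketch; the constants are just the documented defaults, and this is illustrative code rather than VictoriaMetrics internals.

package main

import "fmt"

const (
	maxPointsPerTimeseries         = 30000  // documented default for range queries
	maxPointsSubqueryPerTimeseries = 100000 // documented default for subqueries
)

func points(start, end, step int64) int64 { return (end-start)/step + 1 }

func main() {
	// A 24h range at a 1s step generates 86401 points per series:
	// rejected for a range query, but within the subquery limit.
	p := points(0, 24*3600*1000, 1000)
	fmt.Println(p, p > maxPointsPerTimeseries, p > maxPointsSubqueryPerTimeseries) // 86401 true false
}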
@ -2173,7 +2175,9 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.maxLookback duration
Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxPointsPerTimeseries int
The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
-search.maxPointsSubqueryPerTimeseries int
The maximum number of points per series, which can be generated by subquery. See https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3 (default 100000)
-search.maxQueryDuration duration
The maximum duration for query execution (default 30s)
-search.maxQueryLen size


@ -44,11 +44,14 @@ var (
maxStepForPointsAdjustment = flag.Duration("search.maxStepForPointsAdjustment", time.Minute, "The maximum step when /api/v1/query_range handler adjusts "+
"points with timestamps closer than -search.latencyOffset to the current time. The adjustment is needed because such points may contain incomplete data")
maxUniqueTimeseries = flag.Int("search.maxUniqueTimeseries", 300e3, "The maximum number of unique time series, which can be selected during /api/v1/query and /api/v1/query_range queries. This option allows limiting memory usage")
maxFederateSeries = flag.Int("search.maxFederateSeries", 1e6, "The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage")
maxExportSeries = flag.Int("search.maxExportSeries", 10e6, "The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage")
maxTSDBStatusSeries = flag.Int("search.maxTSDBStatusSeries", 10e6, "The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage")
maxSeriesLimit = flag.Int("search.maxSeries", 30e3, "The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage")
maxUniqueTimeseries = flag.Int("search.maxUniqueTimeseries", 300e3, "The maximum number of unique time series, which can be selected during /api/v1/query and /api/v1/query_range queries. This option allows limiting memory usage")
maxFederateSeries = flag.Int("search.maxFederateSeries", 1e6, "The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage")
maxExportSeries = flag.Int("search.maxExportSeries", 10e6, "The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage")
maxTSDBStatusSeries = flag.Int("search.maxTSDBStatusSeries", 10e6, "The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage")
maxSeriesLimit = flag.Int("search.maxSeries", 30e3, "The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage")
maxPointsPerTimeseries = flag.Int("search.maxPointsPerTimeseries", 30e3, "The maximum points per a single timeseries returned from /api/v1/query_range. "+
"This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points "+
"returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph")
)
// Default step used if not set.
@ -733,6 +736,7 @@ func QueryHandler(qt *querytracer.Tracer, startTime time.Time, w http.ResponseWr
Start: start,
End: start,
Step: step,
MaxPointsPerSeries: *maxPointsPerTimeseries,
MaxSeries: *maxUniqueTimeseries,
QuotedRemoteAddr: httpserver.GetQuotedRemoteAddr(r),
Deadline: deadline,
@ -818,7 +822,7 @@ func queryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.Respo
if start > end {
end = start + defaultStep
}
if err := promql.ValidateMaxPointsPerTimeseries(start, end, step); err != nil {
if err := promql.ValidateMaxPointsPerSeries(start, end, step, *maxPointsPerTimeseries); err != nil {
return err
}
if mayCache {
@ -829,6 +833,7 @@ func queryRangeHandler(qt *querytracer.Tracer, startTime time.Time, w http.Respo
Start: start,
End: end,
Step: step,
MaxPointsPerSeries: *maxPointsPerTimeseries,
MaxSeries: *maxUniqueTimeseries,
QuotedRemoteAddr: httpserver.GetQuotedRemoteAddr(r),
Deadline: deadline,


@ -24,10 +24,9 @@ import (
)
var (
disableCache = flag.Bool("search.disableCache", false, "Whether to disable response caching. This may be useful during data backfilling")
maxPointsPerTimeseries = flag.Int("search.maxPointsPerTimeseries", 30e3, "The maximum points per a single timeseries returned from /api/v1/query_range. "+
"This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points "+
"returned to graphing UI such as Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph")
disableCache = flag.Bool("search.disableCache", false, "Whether to disable response caching. This may be useful during data backfilling")
maxPointsSubqueryPerTimeseries = flag.Int("search.maxPointsSubqueryPerTimeseries", 100e3, "The maximum number of points per series, which can be generated by subquery. "+
"See https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3")
noStaleMarkers = flag.Bool("search.noStaleMarkers", false, "Set this flag to true if the database doesn't contain Prometheus stale markers, so there is no need in spending additional CPU time on its handling. Staleness markers may exist only in data obtained from Prometheus scrape targets")
)
@ -36,15 +35,15 @@ var (
// big time ranges.
const minTimeseriesPointsForTimeRounding = 50
// ValidateMaxPointsPerTimeseries checks the maximum number of points that
// may be returned per each time series.
//
// The number mustn't exceed -search.maxPointsPerTimeseries.
func ValidateMaxPointsPerTimeseries(start, end, step int64) error {
// ValidateMaxPointsPerSeries validates that the number of points for the given start, end and step does not exceed maxPoints.
func ValidateMaxPointsPerSeries(start, end, step int64, maxPoints int) error {
if step == 0 {
return fmt.Errorf("step can't be equal to zero")
}
points := (end-start)/step + 1
if uint64(points) > uint64(*maxPointsPerTimeseries) {
return fmt.Errorf(`too many points for the given step=%d, start=%d and end=%d: %d; cannot exceed -search.maxPointsPerTimeseries=%d`,
step, start, end, uint64(points), *maxPointsPerTimeseries)
if points > int64(maxPoints) {
return fmt.Errorf("too many points for the given start=%d, end=%d and step=%d: %d; the maximum number of points is %d (see -search.maxPoints* command-line flags)",
start, end, step, points, maxPoints)
}
return nil
}
@ -99,6 +98,9 @@ type EvalConfig struct {
// Zero means 'no limit'
MaxSeries int
// MaxPointsPerSeries is the limit on the number of points, which can be generated per each returned time series.
MaxPointsPerSeries int
// QuotedRemoteAddr contains quoted remote address.
QuotedRemoteAddr string
@ -127,6 +129,7 @@ func copyEvalConfig(src *EvalConfig) *EvalConfig {
ec.End = src.End
ec.Step = src.Step
ec.MaxSeries = src.MaxSeries
ec.MaxPointsPerSeries = src.MaxPointsPerSeries
ec.Deadline = src.Deadline
ec.MayCache = src.MayCache
ec.LookbackDelta = src.LookbackDelta
@ -174,10 +177,10 @@ func (ec *EvalConfig) getSharedTimestamps() []int64 {
}
func (ec *EvalConfig) timestampsInit() {
ec.timestamps = getTimestamps(ec.Start, ec.End, ec.Step)
ec.timestamps = getTimestamps(ec.Start, ec.End, ec.Step, ec.MaxPointsPerSeries)
}
func getTimestamps(start, end, step int64) []int64 {
func getTimestamps(start, end, step int64, maxPointsPerSeries int) []int64 {
// Sanity checks.
if step <= 0 {
logger.Panicf("BUG: Step must be bigger than 0; got %d", step)
@ -185,7 +188,7 @@ func getTimestamps(start, end, step int64) []int64 {
if start > end {
logger.Panicf("BUG: Start cannot exceed End; got %d vs %d", start, end)
}
if err := ValidateMaxPointsPerTimeseries(start, end, step); err != nil {
if err := ValidateMaxPointsPerSeries(start, end, step, maxPointsPerSeries); err != nil {
logger.Panicf("BUG: %s; this must be validated before the call to getTimestamps", err)
}
@ -806,7 +809,8 @@ func evalRollupFuncWithSubquery(qt *querytracer.Tracer, ec *EvalConfig, funcName
ecSQ.Start -= window + maxSilenceInterval + step
ecSQ.End += step
ecSQ.Step = step
if err := ValidateMaxPointsPerTimeseries(ecSQ.Start, ecSQ.End, ecSQ.Step); err != nil {
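// Use the dedicated -search.maxPointsSubqueryPerTimeseries limit here instead of
// -search.maxPointsPerTimeseries, so subqueries are no longer rejected by the smaller range-query limit.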
ecSQ.MaxPointsPerSeries = *maxPointsSubqueryPerTimeseries
if err := ValidateMaxPointsPerSeries(ecSQ.Start, ecSQ.End, ecSQ.Step, ecSQ.MaxPointsPerSeries); err != nil {
return nil, err
}
// unconditionally align start and end args to step for subquery as Prometheus does.
@ -818,8 +822,8 @@ func evalRollupFuncWithSubquery(qt *querytracer.Tracer, ec *EvalConfig, funcName
if len(tssSQ) == 0 {
return nil, nil
}
sharedTimestamps := getTimestamps(ec.Start, ec.End, ec.Step)
preFunc, rcs, err := getRollupConfigs(funcName, rf, expr, ec.Start, ec.End, ec.Step, window, ec.LookbackDelta, sharedTimestamps)
sharedTimestamps := getTimestamps(ec.Start, ec.End, ec.Step, ec.MaxPointsPerSeries)
preFunc, rcs, err := getRollupConfigs(funcName, rf, expr, ec.Start, ec.End, ec.Step, ec.MaxPointsPerSeries, window, ec.LookbackDelta, sharedTimestamps)
if err != nil {
return nil, err
}
@ -956,8 +960,8 @@ func evalRollupFuncWithMetricExpr(qt *querytracer.Tracer, ec *EvalConfig, funcNa
// Obtain rollup configs before fetching data from db,
// so type errors can be caught earlier.
sharedTimestamps := getTimestamps(start, ec.End, ec.Step)
preFunc, rcs, err := getRollupConfigs(funcName, rf, expr, start, ec.End, ec.Step, window, ec.LookbackDelta, sharedTimestamps)
sharedTimestamps := getTimestamps(start, ec.End, ec.Step, ec.MaxPointsPerSeries)
preFunc, rcs, err := getRollupConfigs(funcName, rf, expr, start, ec.End, ec.Step, ec.MaxPointsPerSeries, window, ec.LookbackDelta, sharedTimestamps)
if err != nil {
return nil, err
}


@ -48,3 +48,31 @@ m2{b="bar"} 1`, `{}`)
f(`m1{a="foo",b="bar"} 1
m2{b="bar",c="x"} 1`, `{b="bar"}`)
}
func TestValidateMaxPointsPerSeriesFailure(t *testing.T) {
f := func(start, end, step int64, maxPoints int) {
t.Helper()
if err := ValidateMaxPointsPerSeries(start, end, step, maxPoints); err == nil {
t.Fatalf("expecint non-nil error for ValidateMaxPointsPerSeries(start=%d, end=%d, step=%d, maxPoints=%d)", start, end, step, maxPoints)
}
}
// zero step
f(0, 0, 0, 0)
f(0, 0, 0, 1)
// the maxPoints is smaller than the generated points
f(0, 1, 1, 0)
f(0, 1, 1, 1)
f(1659962171908, 1659966077742, 5000, 700)
}
func TestValidateMaxPointsPerSeriesSuccess(t *testing.T) {
f := func(start, end, step int64, maxPoints int) {
t.Helper()
if err := ValidateMaxPointsPerSeries(start, end, step, maxPoints); err != nil {
t.Fatalf("unexpected error in ValidateMaxPointsPerSeries(start=%d, end=%d, step=%d, maxPoints=%d): %s", start, end, step, maxPoints, err)
}
}
f(1, 1, 1, 2)
f(1659962171908, 1659966077742, 5000, 800)
f(1659962150000, 1659966070000, 10000, 393)
}


@ -58,12 +58,13 @@ func TestExecSuccess(t *testing.T) {
f := func(q string, resultExpected []netstorage.Result) {
t.Helper()
ec := &EvalConfig{
Start: start,
End: end,
Step: step,
MaxSeries: 1000,
Deadline: searchutils.NewDeadline(time.Now(), time.Minute, ""),
RoundDigits: 100,
Start: start,
End: end,
Step: step,
MaxPointsPerSeries: 1e4,
MaxSeries: 1000,
Deadline: searchutils.NewDeadline(time.Now(), time.Minute, ""),
RoundDigits: 100,
}
for i := 0; i < 5; i++ {
result, err := Exec(nil, ec, q, false)
@ -7743,12 +7744,13 @@ func TestExecError(t *testing.T) {
f := func(q string) {
t.Helper()
ec := &EvalConfig{
Start: 1000,
End: 2000,
Step: 100,
MaxSeries: 1000,
Deadline: searchutils.NewDeadline(time.Now(), time.Minute, ""),
RoundDigits: 100,
Start: 1000,
End: 2000,
Step: 100,
MaxPointsPerSeries: 1e4,
MaxSeries: 1000,
Deadline: searchutils.NewDeadline(time.Now(), time.Minute, ""),
RoundDigits: 100,
}
for i := 0; i < 4; i++ {
rv, err := Exec(nil, ec, q, false)


@ -248,7 +248,7 @@ func getRollupAggrFuncNames(expr metricsql.Expr) ([]string, error) {
return aggrFuncNames, nil
}
func getRollupConfigs(name string, rf rollupFunc, expr metricsql.Expr, start, end, step, window int64, lookbackDelta int64, sharedTimestamps []int64) (
func getRollupConfigs(name string, rf rollupFunc, expr metricsql.Expr, start, end, step int64, maxPointsPerSeries int, window, lookbackDelta int64, sharedTimestamps []int64) (
func(values []float64, timestamps []int64), []*rollupConfig, error) {
preFunc := func(values []float64, timestamps []int64) {}
if rollupFuncsRemoveCounterResets[name] {
@ -258,12 +258,15 @@ func getRollupConfigs(name string, rf rollupFunc, expr metricsql.Expr, start, en
}
newRollupConfig := func(rf rollupFunc, tagValue string) *rollupConfig {
return &rollupConfig{
TagValue: tagValue,
Func: rf,
Start: start,
End: end,
Step: step,
Window: window,
TagValue: tagValue,
Func: rf,
Start: start,
End: end,
Step: step,
Window: window,
MaxPointsPerSeries: maxPointsPerSeries,
MayAdjustWindow: rollupFuncsCanAdjustWindow[name],
LookbackDelta: lookbackDelta,
Timestamps: sharedTimestamps,
@ -400,6 +403,9 @@ type rollupConfig struct {
Step int64
Window int64
// The maximum number of points, which can be generated per each series.
MaxPointsPerSeries int
// Whether window may be adjusted to 2 x interval between data points.
// This is needed for functions which have dt in the denominator
// such as rate, deriv, etc.
@ -416,6 +422,10 @@ type rollupConfig struct {
isDefaultRollup bool
}
func (rc *rollupConfig) getTimestamps() []int64 {
return getTimestamps(rc.Start, rc.End, rc.Step, rc.MaxPointsPerSeries)
}
func (rc *rollupConfig) String() string {
start := storage.TimestampToHumanReadableFormat(rc.Start)
end := storage.TimestampToHumanReadableFormat(rc.End)
@ -513,7 +523,7 @@ func (rc *rollupConfig) doInternal(dstValues []float64, tsm *timeseriesMap, valu
if rc.Window < 0 {
logger.Panicf("BUG: Window must be non-negative; got %d", rc.Window)
}
if err := ValidateMaxPointsPerTimeseries(rc.Start, rc.End, rc.Step); err != nil {
if err := ValidateMaxPointsPerSeries(rc.Start, rc.End, rc.Step, rc.MaxPointsPerSeries); err != nil {
logger.Panicf("BUG: %s; this must be validated before the call to rollupConfig.Do", err)
}


@ -33,9 +33,10 @@ func TestRollupResultCache(t *testing.T) {
ResetRollupResultCache()
window := int64(456)
ec := &EvalConfig{
Start: 1000,
End: 2000,
Step: 200,
Start: 1000,
End: 2000,
Step: 200,
MaxPointsPerSeries: 1e4,
MayCache: true,
}
@ -291,9 +292,10 @@ func TestRollupResultCache(t *testing.T) {
func TestMergeTimeseries(t *testing.T) {
ec := &EvalConfig{
Start: 1000,
End: 2000,
Step: 200,
Start: 1000,
End: 2000,
Step: 200,
MaxPointsPerSeries: 1e4,
}
bStart := int64(1400)


@ -578,13 +578,14 @@ func TestRollupNewRollupFuncError(t *testing.T) {
func TestRollupNoWindowNoPoints(t *testing.T) {
t.Run("beforeStart", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: 0,
End: 4,
Step: 1,
Window: 0,
Func: rollupFirst,
Start: 0,
End: 4,
Step: 1,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned != 0 {
t.Fatalf("expecting zero samplesScanned from rollupConfig.Do; got %d", samplesScanned)
@ -595,13 +596,14 @@ func TestRollupNoWindowNoPoints(t *testing.T) {
})
t.Run("afterEnd", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDelta,
Start: 120,
End: 148,
Step: 4,
Window: 0,
Func: rollupDelta,
Start: 120,
End: 148,
Step: 4,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -615,13 +617,14 @@ func TestRollupNoWindowNoPoints(t *testing.T) {
func TestRollupWindowNoPoints(t *testing.T) {
t.Run("beforeStart", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: 0,
End: 4,
Step: 1,
Window: 3,
Func: rollupFirst,
Start: 0,
End: 4,
Step: 1,
Window: 3,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned != 0 {
t.Fatalf("expecting zero samplesScanned from rollupConfig.Do; got %d", samplesScanned)
@ -632,13 +635,14 @@ func TestRollupWindowNoPoints(t *testing.T) {
})
t.Run("afterEnd", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: 161,
End: 191,
Step: 10,
Window: 3,
Func: rollupFirst,
Start: 161,
End: 191,
Step: 10,
Window: 3,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned != 0 {
t.Fatalf("expecting zero samplesScanned from rollupConfig.Do; got %d", samplesScanned)
@ -652,13 +656,14 @@ func TestRollupWindowNoPoints(t *testing.T) {
func TestRollupNoWindowPartialPoints(t *testing.T) {
t.Run("beforeStart", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: 0,
End: 25,
Step: 5,
Window: 0,
Func: rollupFirst,
Start: 0,
End: 25,
Step: 5,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -669,13 +674,14 @@ func TestRollupNoWindowPartialPoints(t *testing.T) {
})
t.Run("afterEnd", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: 100,
End: 160,
Step: 20,
Window: 0,
Func: rollupFirst,
Start: 100,
End: 160,
Step: 20,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -686,13 +692,14 @@ func TestRollupNoWindowPartialPoints(t *testing.T) {
})
t.Run("middle", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: -50,
End: 150,
Step: 50,
Window: 0,
Func: rollupFirst,
Start: -50,
End: 150,
Step: 50,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -706,13 +713,14 @@ func TestRollupNoWindowPartialPoints(t *testing.T) {
func TestRollupWindowPartialPoints(t *testing.T) {
t.Run("beforeStart", func(t *testing.T) {
rc := rollupConfig{
Func: rollupLast,
Start: 0,
End: 20,
Step: 5,
Window: 8,
Func: rollupLast,
Start: 0,
End: 20,
Step: 5,
Window: 8,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -723,13 +731,14 @@ func TestRollupWindowPartialPoints(t *testing.T) {
})
t.Run("afterEnd", func(t *testing.T) {
rc := rollupConfig{
Func: rollupLast,
Start: 100,
End: 160,
Step: 20,
Window: 18,
Func: rollupLast,
Start: 100,
End: 160,
Step: 20,
Window: 18,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -740,13 +749,14 @@ func TestRollupWindowPartialPoints(t *testing.T) {
})
t.Run("middle", func(t *testing.T) {
rc := rollupConfig{
Func: rollupLast,
Start: 0,
End: 150,
Step: 50,
Window: 19,
Func: rollupLast,
Start: 0,
End: 150,
Step: 50,
Window: 19,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -760,13 +770,14 @@ func TestRollupWindowPartialPoints(t *testing.T) {
func TestRollupFuncsLookbackDelta(t *testing.T) {
t.Run("1", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: 80,
End: 140,
Step: 10,
LookbackDelta: 1,
Func: rollupFirst,
Start: 80,
End: 140,
Step: 10,
LookbackDelta: 1,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -777,13 +788,14 @@ func TestRollupFuncsLookbackDelta(t *testing.T) {
})
t.Run("7", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: 80,
End: 140,
Step: 10,
LookbackDelta: 7,
Func: rollupFirst,
Start: 80,
End: 140,
Step: 10,
LookbackDelta: 7,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -794,13 +806,14 @@ func TestRollupFuncsLookbackDelta(t *testing.T) {
})
t.Run("0", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: 80,
End: 140,
Step: 10,
LookbackDelta: 0,
Func: rollupFirst,
Start: 80,
End: 140,
Step: 10,
LookbackDelta: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -814,13 +827,14 @@ func TestRollupFuncsLookbackDelta(t *testing.T) {
func TestRollupFuncsNoWindow(t *testing.T) {
t.Run("first", func(t *testing.T) {
rc := rollupConfig{
Func: rollupFirst,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupFirst,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -831,13 +845,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("count", func(t *testing.T) {
rc := rollupConfig{
Func: rollupCount,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupCount,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -848,13 +863,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("min", func(t *testing.T) {
rc := rollupConfig{
Func: rollupMin,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupMin,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -865,13 +881,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("max", func(t *testing.T) {
rc := rollupConfig{
Func: rollupMax,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupMax,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -882,13 +899,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("sum", func(t *testing.T) {
rc := rollupConfig{
Func: rollupSum,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupSum,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -899,13 +917,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("delta", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDelta,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupDelta,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -916,13 +935,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("delta_prometheus", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDeltaPrometheus,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupDeltaPrometheus,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -933,13 +953,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("idelta", func(t *testing.T) {
rc := rollupConfig{
Func: rollupIdelta,
Start: 10,
End: 130,
Step: 40,
Window: 0,
Func: rollupIdelta,
Start: 10,
End: 130,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -950,13 +971,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("lag", func(t *testing.T) {
rc := rollupConfig{
Func: rollupLag,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupLag,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -967,13 +989,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("lifetime_1", func(t *testing.T) {
rc := rollupConfig{
Func: rollupLifetime,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupLifetime,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -984,13 +1007,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("lifetime_2", func(t *testing.T) {
rc := rollupConfig{
Func: rollupLifetime,
Start: 0,
End: 160,
Step: 40,
Window: 200,
Func: rollupLifetime,
Start: 0,
End: 160,
Step: 40,
Window: 200,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1001,13 +1025,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("scrape_interval_1", func(t *testing.T) {
rc := rollupConfig{
Func: rollupScrapeInterval,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupScrapeInterval,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1018,13 +1043,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("scrape_interval_2", func(t *testing.T) {
rc := rollupConfig{
Func: rollupScrapeInterval,
Start: 0,
End: 160,
Step: 40,
Window: 80,
Func: rollupScrapeInterval,
Start: 0,
End: 160,
Step: 40,
Window: 80,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1035,13 +1061,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("changes", func(t *testing.T) {
rc := rollupConfig{
Func: rollupChanges,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupChanges,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1052,13 +1079,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("changes_prometheus", func(t *testing.T) {
rc := rollupConfig{
Func: rollupChangesPrometheus,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupChangesPrometheus,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1069,13 +1097,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("changes_small_window", func(t *testing.T) {
rc := rollupConfig{
Func: rollupChanges,
Start: 0,
End: 45,
Step: 9,
Window: 9,
Func: rollupChanges,
Start: 0,
End: 45,
Step: 9,
Window: 9,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1086,13 +1115,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("resets", func(t *testing.T) {
rc := rollupConfig{
Func: rollupResets,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupResets,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1103,13 +1133,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("avg", func(t *testing.T) {
rc := rollupConfig{
Func: rollupAvg,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupAvg,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1120,13 +1151,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("deriv", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDerivSlow,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupDerivSlow,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1137,13 +1169,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("deriv_fast", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDerivFast,
Start: 0,
End: 20,
Step: 4,
Window: 0,
Func: rollupDerivFast,
Start: 0,
End: 20,
Step: 4,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1154,13 +1187,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("ideriv", func(t *testing.T) {
rc := rollupConfig{
Func: rollupIderiv,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupIderiv,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1171,13 +1205,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("stddev", func(t *testing.T) {
rc := rollupConfig{
Func: rollupStddev,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupStddev,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1188,13 +1223,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("integrate", func(t *testing.T) {
rc := rollupConfig{
Func: rollupIntegrate,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupIntegrate,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1205,13 +1241,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("distinct_over_time_1", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDistinct,
Start: 0,
End: 160,
Step: 40,
Window: 0,
Func: rollupDistinct,
Start: 0,
End: 160,
Step: 40,
Window: 0,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1222,13 +1259,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("distinct_over_time_2", func(t *testing.T) {
rc := rollupConfig{
Func: rollupDistinct,
Start: 0,
End: 160,
Step: 40,
Window: 80,
Func: rollupDistinct,
Start: 0,
End: 160,
Step: 40,
Window: 80,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1239,13 +1277,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("mode_over_time", func(t *testing.T) {
rc := rollupConfig{
Func: rollupModeOverTime,
Start: 0,
End: 160,
Step: 40,
Window: 80,
Func: rollupModeOverTime,
Start: 0,
End: 160,
Step: 40,
Window: 80,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1256,13 +1295,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("rate_over_sum", func(t *testing.T) {
rc := rollupConfig{
Func: rollupRateOverSum,
Start: 0,
End: 160,
Step: 40,
Window: 80,
Func: rollupRateOverSum,
Start: 0,
End: 160,
Step: 40,
Window: 80,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1273,13 +1313,14 @@ func TestRollupFuncsNoWindow(t *testing.T) {
})
t.Run("zscore_over_time", func(t *testing.T) {
rc := rollupConfig{
Func: rollupZScoreOverTime,
Start: 0,
End: 160,
Step: 40,
Window: 80,
Func: rollupZScoreOverTime,
Start: 0,
End: 160,
Step: 40,
Window: 80,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
values, samplesScanned := rc.Do(nil, testValues, testTimestamps)
if samplesScanned == 0 {
t.Fatalf("expecting non-zero samplesScanned from rollupConfig.Do")
@ -1293,12 +1334,13 @@ func TestRollupFuncsNoWindow(t *testing.T) {
func TestRollupBigNumberOfValues(t *testing.T) {
const srcValuesCount = 1e4
rc := rollupConfig{
Func: rollupDefault,
End: srcValuesCount,
Step: srcValuesCount / 5,
Window: srcValuesCount / 4,
Func: rollupDefault,
End: srcValuesCount,
Step: srcValuesCount / 5,
Window: srcValuesCount / 4,
MaxPointsPerSeries: 1e4,
}
rc.Timestamps = getTimestamps(rc.Start, rc.End, rc.Step)
rc.Timestamps = rc.getTimestamps()
srcValues := make([]float64, srcValuesCount)
srcTimestamps := make([]int64, srcValuesCount)
for i := 0; i < srcValuesCount; i++ {


@ -22,6 +22,7 @@ The following tip changes can be tested by building VictoriaMetrics components f
* SECURITY: [vmalert](https://docs.victoriametrics.com/vmalert.html): do not expose `-remoteWrite.url`, `-remoteRead.url` and `-datasource.url` command-line flag values in logs and at `http://vmalert:8880/flags` page by default, since they may contain sensitive data such as auth keys. This aligns `vmalert` behaviour with [vmagent](https://docs.victoriametrics.com/vmagent.html), which doesn't expose `-remoteWrite.url` command-line flag value in logs and at `http://vmagent:8429/flags` page by default. Specify `-remoteWrite.showURL`, `-remoteRead.showURL` and `-datasource.showURL` command-line flags for showing values for the corresponding `-*.url` flags in logs. Thanks to @mble for [the pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2965).
* FEATURE: return shorter error messages to Grafana and to other clients requesting [/api/v1/query](https://docs.victoriametrics.com/keyConcepts.html#instant-query) and [/api/v1/query_range](https://docs.victoriametrics.com/keyConcepts.html#range-query) endpoints. This should simplify reading these errors by humans. The long error message with full context is still written to logs.
* FEATURE: add the ability to fine-tune the number of points, which can be generated per each matching time series during [subquery](https://docs.victoriametrics.com/MetricsQL.html#subqueries) evaluation. This can be done with the `-search.maxPointsSubqueryPerTimeseries` command-line flag. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2922).
* FEATURE: [monitoring](https://docs.victoriametrics.com/#monitoring): expose `vm_hourly_series_limit_max_series`, `vm_hourly_series_limit_current_series`, `vm_daily_series_limit_max_series` and `vm_daily_series_limit_current_series` metrics when `-search.maxHourlySeries` or `-search.maxDailySeries` limits are set. This allows alerting when the number of unique series reaches the configured limits. See [these docs](https://docs.victoriametrics.com/#cardinality-limiter) for details.
* FEATURE: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): reduce the amounts of logging at `vmstorage` when `vmselect` connects/disconnects to `vmstorage`.
* FEATURE: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): improve performance for heavy queries on systems with many CPU cores.


@ -442,6 +442,8 @@ By default cluster components of VictoriaMetrics are tuned for an optimal resour
- `-search.maxConcurrentRequests` at `vmselect` limits the number of concurrent requests a single `vmselect` node can process. Bigger number of concurrent requests usually means bigger memory usage at both `vmselect` and `vmstorage`. For example, if a single query needs 100 MiB of additional memory during its execution, then 100 concurrent queries may need `100 * 100 MiB = 10 GiB` of additional memory. So it is better to limit the number of concurrent queries, while suspending additional incoming queries if the concurrency limit is reached. `vmselect` provides `-search.maxQueueDuration` command-line flag for limiting the max wait time for suspended queries.
- `-search.maxSamplesPerSeries` at `vmselect` limits the number of raw samples the query can process per each time series. `vmselect` sequentially processes raw samples per each found time series during the query. It unpacks raw samples on the selected time range per each time series into memory and then applies the given [rollup function](https://docs.victoriametrics.com/MetricsQL.html#rollup-functions). The `-search.maxSamplesPerSeries` command-line flag allows limiting memory usage at `vmselect` in the case when the query is executed on a time range, which contains hundreds of millions of raw samples per each located time series.
- `-search.maxSamplesPerQuery` at `vmselect` limits the number of raw samples a single query can process. This allows limiting CPU usage at `vmselect` for heavy queries.
- `-search.maxPointsPerTimeseries` limits the number of calculated points, which can be returned per each matching time series from [range query](https://docs.victoriametrics.com/keyConcepts.html#range-query).
- `-search.maxPointsSubqueryPerTimeseries` limits the number of calculated points, which can be generated per each matching time series during [subquery](https://docs.victoriametrics.com/MetricsQL.html#subqueries) evaluation.
- `-search.maxSeries` at `vmselect` limits the number of time series, which may be returned from [/api/v1/series](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers). This endpoint is used mostly by Grafana for auto-completion of metric names, label names and label values. Queries to this endpoint may take big amounts of CPU time and memory at `vmstorage` and `vmselect` when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxSeries` to quite low value in order to limit CPU and memory usage.
- `-search.maxTagKeys` at `vmstorage` limits the number of items, which may be returned from [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names). This endpoint is used mostly by Grafana for auto-completion of label names. Queries to this endpoint may take big amounts of CPU time and memory at `vmstorage` and `vmselect` when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxTagKeys` to quite low value in order to limit CPU and memory usage.
- `-search.maxTagValues` at `vmstorage` limits the number of items, which may be returned from [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values). This endpoint is used mostly by Grafana for auto-completion of label values. Queries to this endpoint may take big amounts of CPU time and memory at `vmstorage` and `vmselect` when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxTagValues` to quite low value in order to limit CPU and memory usage.
@ -916,7 +918,9 @@ Below is the output for `/path/to/vmselect -help`:
-search.maxLookback duration
Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxPointsPerTimeseries int
The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
-search.maxPointsSubqueryPerTimeseries int
The maximum number of points per series, which can be generated by subquery. See https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3 (default 100000)
-search.maxQueryDuration duration
The maximum duration for query execution (default 30s)
-search.maxQueryLen size


@ -1213,6 +1213,8 @@ By default VictoriaMetrics is tuned for an optimal resource usage under typical
- `-search.maxConcurrentRequests` limits the number of concurrent requests VictoriaMetrics can process. Bigger number of concurrent requests usually means bigger memory usage. For example, if a single query needs 100 MiB of additional memory during its execution, then 100 concurrent queries may need `100 * 100 MiB = 10 GiB` of additional memory. So it is better to limit the number of concurrent queries, while suspending additional incoming queries if the concurrency limit is reached. VictoriaMetrics provides `-search.maxQueueDuration` command-line flag for limiting the max wait time for suspended queries.
- `-search.maxSamplesPerSeries` limits the number of raw samples the query can process per each time series. VictoriaMetrics sequentially processes raw samples per each found time series during the query. It unpacks raw samples on the selected time range per each time series into memory and then applies the given [rollup function](https://docs.victoriametrics.com/MetricsQL.html#rollup-functions). The `-search.maxSamplesPerSeries` command-line flag allows limiting memory usage in the case when the query is executed on a time range, which contains hundreds of millions of raw samples per each located time series.
- `-search.maxSamplesPerQuery` limits the number of raw samples a single query can process. This allows limiting CPU usage for heavy queries.
- `-search.maxPointsPerTimeseries` limits the number of calculated points, which can be returned per each matching time series from [range query](https://docs.victoriametrics.com/keyConcepts.html#range-query).
- `-search.maxPointsSubqueryPerTimeseries` limits the number of calculated points, which can be generated per each matching time series during [subquery](https://docs.victoriametrics.com/MetricsQL.html#subqueries) evaluation.
- `-search.maxSeries` limits the number of time series, which may be returned from [/api/v1/series](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers). This endpoint is used mostly by Grafana for auto-completion of metric names, label names and label values. Queries to this endpoint may take big amounts of CPU time and memory when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxSeries` to quite low value in order to limit CPU and memory usage.
- `-search.maxTagKeys` limits the number of items, which may be returned from [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names). This endpoint is used mostly by Grafana for auto-completion of label names. Queries to this endpoint may take big amounts of CPU time and memory when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxTagKeys` to quite low value in order to limit CPU and memory usage.
- `-search.maxTagValues` limits the number of items, which may be returned from [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values). This endpoint is used mostly by Grafana for auto-completion of label values. Queries to this endpoint may take big amounts of CPU time and memory when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxTagValues` to quite low value in order to limit CPU and memory usage.
@ -2173,7 +2175,9 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.maxLookback duration
Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxPointsPerTimeseries int
The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
-search.maxPointsSubqueryPerTimeseries int
The maximum number of points per series, which can be generated by subquery. See https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3 (default 100000)
-search.maxQueryDuration duration
The maximum duration for query execution (default 30s)
-search.maxQueryLen size


@ -1217,6 +1217,8 @@ By default VictoriaMetrics is tuned for an optimal resource usage under typical
- `-search.maxConcurrentRequests` limits the number of concurrent requests VictoriaMetrics can process. Bigger number of concurrent requests usually means bigger memory usage. For example, if a single query needs 100 MiB of additional memory during its execution, then 100 concurrent queries may need `100 * 100 MiB = 10 GiB` of additional memory. So it is better to limit the number of concurrent queries, while suspending additional incoming queries if the concurrency limit is reached. VictoriaMetrics provides `-search.maxQueueDuration` command-line flag for limiting the max wait time for suspended queries.
- `-search.maxSamplesPerSeries` limits the number of raw samples the query can process per each time series. VictoriaMetrics sequentially processes raw samples per each found time series during the query. It unpacks raw samples on the selected time range per each time series into memory and then applies the given [rollup function](https://docs.victoriametrics.com/MetricsQL.html#rollup-functions). The `-search.maxSamplesPerSeries` command-line flag allows limiting memory usage in the case when the query is executed on a time range, which contains hundreds of millions of raw samples per each located time series.
- `-search.maxSamplesPerQuery` limits the number of raw samples a single query can process. This allows limiting CPU usage for heavy queries.
- `-search.maxPointsPerTimeseries` limits the number of calculated points, which can be returned per each matching time series from [range query](https://docs.victoriametrics.com/keyConcepts.html#range-query).
- `-search.maxPointsSubqueryPerTimeseries` limits the number of calculated points, which can be generated per each matching time series during [subquery](https://docs.victoriametrics.com/MetricsQL.html#subqueries) evaluation.
- `-search.maxSeries` limits the number of time series, which may be returned from [/api/v1/series](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers). This endpoint is used mostly by Grafana for auto-completion of metric names, label names and label values. Queries to this endpoint may take big amounts of CPU time and memory when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxSeries` to quite low value in order to limit CPU and memory usage.
- `-search.maxTagKeys` limits the number of items, which may be returned from [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names). This endpoint is used mostly by Grafana for auto-completion of label names. Queries to this endpoint may take big amounts of CPU time and memory when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxTagKeys` to quite low value in order to limit CPU and memory usage.
- `-search.maxTagValues` limits the number of items, which may be returned from [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values). This endpoint is used mostly by Grafana for auto-completion of label values. Queries to this endpoint may take big amounts of CPU time and memory when the database contains big number of unique time series because of [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate). In this case it might be useful to set the `-search.maxTagValues` to quite low value in order to limit CPU and memory usage.
@ -2177,7 +2179,9 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.maxLookback duration
Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxPointsPerTimeseries int
The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as VMUI or Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
-search.maxPointsSubqueryPerTimeseries int
The maximum number of points per series, which can be generated by subquery. See https://valyala.medium.com/prometheus-subqueries-in-victoriametrics-9b1492b720b3 (default 100000)
-search.maxQueryDuration duration
The maximum duration for query execution (default 30s)
-search.maxQueryLen size