Commit graph

967 commits

Author SHA1 Message Date
Artem Fetishev
f5e4cd1fbb v1.113.0
-----BEGIN PGP SIGNATURE-----
 
 iQGzBAABCgAdFiEEkQ/CQiZgCF7/P941YXGv2IAsemkFAmfK/c0ACgkQYXGv2IAs
 emn4gAv+PmE3oE5IHj/mEiezpn+QWfD9OJrhv9amS7MdM9b551sHRxgPL6ZwD1rE
 guUr90WTz0+OY8AbES3TOcNHxwjnkEHO9AYO2ZsLKzlPiYUfcc2r0DOl8KJoNNjr
 BCAAWVnCQnRId5MRIg8HqxeMpp4KhUoODV9wiw8+KCVnryAr/Tb95dtiVcHLoKHn
 tNsjTbtWrf3MKS9esslVfJVR7oNB85hlCtdxsT6KSM7tFOukMaArpwZiBxrF3LVV
 16mf9OqpA9ZHgjQKhhvJ9XGux5N7RzVEWUe1vJzv7oIuypBQkV+XxAgNMrfPUrLB
 GFuQUsPATHfYMJFU+ZUrzBXyHT+46Gldgn/MAUkR28oc2QK3Qhvd1+ofTjnMgtHs
 utq55yTRhf5pOsJwNeScn3ecNGdJBz9xWycyY7386zwOVSO8J3VY3TS6tWOEeJ0g
 wfh3GP1N577hKqBnSHCkjxPmBgWBHV8oZbRMGU3lKoD9Q5nroiyutnwd8uKrAoMU
 aG9TwqCl
 =3gHM
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iQGzBAABCgAdFiEEkQ/CQiZgCF7/P941YXGv2IAsemkFAmfUK5sACgkQYXGv2IAs
 emnTGAv/eU/K2PpeQ5N+NgIU2rOcG+bjWPsojxy4FR8evJwetJefza2cHIgQwJTV
 /F3RtjGQWiTVDsTJFCP6Flc1jjq63tL2NUhVH5bsLpIL1xGNfxdaAIEQuOG/474I
 9YFyGs20uCTg7ji0lOSgdK5NcsRqAgCmrRJkIO91SfeZ04VEkjtC1dyfHzLD4T9t
 TLFH6jZHoWRynY4I2foY9g6iWaijTmqmviU9WSkvj4lu7H3VnuT6YQQj4Tm6eHzP
 5IrtpK+v7uuhOZeXmrAEuygD1rRFqgIiKZXR7be7Zafv+bzYyjf852DnWZGLKKU1
 AG+kWk0QXTebF91yUMsu8Sc1Fr2YyGveHQGIS40wWDofZKyChdUUXmPN3QzIH96Q
 bR9naAfE+jB0EJj9C6jAeyvT5hOEVernZ1sDsIlYGUhfz32YyHMYdKpDTu9i5JPo
 rC0TsK84tOLiHT1XeyAQVoo+w+/L4FyzCO7IzGoN+E6gqKMNSMAAx7yeLvaC+opE
 fbS5lfOO
 =AEm7
 -----END PGP SIGNATURE-----

Merge tag 'v1.113.0' into pmm-6401-read-prometheus-data-files

v1.113.0

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2025-03-14 14:13:56 +01:00
f41gh7
ec68ea2222
lib/metricnamestats: follow-up after b85b28d30a
* properly save state for cross-device mount points
* properly check empty state for tracker

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-03-06 23:18:49 +01:00
Nikolay
b85b28d30a
lib/storage: add tracker for time series metric names statistics
This feature allows tracking query requests by metric names. The tracker
state is stored in-memory, capped at 1/100 of the memory allocated to the
storage. If the cap is exceeded, the tracker rejects adding new items and
instead only registers query requests for already observed metric names.

This feature is disabled by default; the new flag
`-storage.trackMetricNamesStats` enables it.

New APIs added to the select component:

* /api/v1/status/metric_names_stats - returns a JSON object
  with usage statistics.
* /admin/api/v1/status/metric_names_stats/reset - resets the internal
  state of the tracker and the tsid cache.

New metrics were added for this feature:

* vm_cache_size_bytes{type="storage/metricNamesUsageTracker"}
* vm_cache_size{type="storage/metricNamesUsageTracker"}
* vm_cache_size_max_bytes{type="storage/metricNamesUsageTracker"}

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4458
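
For illustration, a minimal sketch of querying the new stats endpoint once the tracker is enabled. Only the endpoint path comes from the change itself; the host/port and the plain dump of the response body are assumptions for this sketch.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Assumes a local vmsingle/vmselect started with -storage.trackMetricNamesStats
	// and listening on the default port (an assumption for this sketch).
	resp, err := http.Get("http://localhost:8428/api/v1/status/metric_names_stats")
	if err != nil {
		log.Fatalf("cannot query metric names stats: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("cannot read response: %v", err)
	}
	// Dump the JSON object with usage statistics as-is.
	fmt.Println(string(body))
}
```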

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-03-06 22:06:50 +01:00
Roman Khavronenko
63f6ac3ff8
lib/promutils: move time-related funcs from promutils to timeutil (#8403)
Since the funcs `ParseDuration` and `ParseTimeMsec` are used in vlogs,
vmalert, victoriametrics and other components, importing promutils only
for this reason makes them export the irrelevant
`vm_rows_invalid_total{type="prometheus"}` metric.

This change removes the `vm_rows_invalid_total{type="prometheus"}` metric
from the /metrics page for these components.

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2025-03-03 10:25:42 +01:00
Zhu Jiekun
3d3480140c
lib/storage: properly cache extDB metricsID on search error
Previously, if an indexDB search failed for some reason during a search at the previous indexDB (aka extDB), VictoriaMetrics stored an empty search result in the cache. This could cause incorrect search results for subsequent requests.

This commit checks for the search error and stores request results only on success.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/8345
2025-02-26 15:48:25 +01:00
f41gh7
73b0273967
Merge tag 'v1.112.0' into pmm-6401-read-prometheus-data-files-cpc 2025-02-24 16:01:49 +01:00
Aliaksandr Valialkin
bc69d5f1a4
lib/mergeset: explicitly pass the interval for flushing in-memory data to disk at MustOpenTable()
This allows using different intervals for flushing in-memory data among different mergeset.Table instances.

The initial user of this feature is lib/logstorage.Storage, which explicitly passes Storage.flushInterval
to every created mergeset.Table instance. Previously mergeset.Table instances were using a 5-second
flush interval, which didn't depend on the Storage.flushInterval.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4775
2025-02-24 15:23:50 +01:00
f41gh7
855dfb324d
app/vmselect: add query resource limits priority
This commit adds support for overriding vmstorage `maxUniqueTimeseries` with specific
resource limits:
1. `-search.maxLabelsAPISeries` for
[/api/v1/labels](https://docs.victoriametrics.com/url-examples/#apiv1labels),
[/api/v1/label/.../values](https://docs.victoriametrics.com/url-examples/#apiv1labelvalues)
2. `-search.maxSeries` for
[/api/v1/series](https://docs.victoriametrics.com/url-examples/#apiv1series)
3. `-search.maxTSDBStatusSeries` for
[/api/v1/status/tsdb](https://docs.victoriametrics.com/#tsdb-stats)
4. `-search.maxDeleteSeries` for
[/api/v1/admin/tsdb/delete_series](https://docs.victoriametrics.com/url-examples/#apiv1admintsdbdelete_series)

Currently, this limit priority logic cannot be applied to flags
`-search.maxFederateSeries` and `-search.maxExportSeries`, because they
share the same RPC `search_v7` with the /api/v1/query and
/api/v1/query_range APIs, preventing vmstorage from identifying the
actual API of the request. To address that, we need to add additional
information to the protocol between vmstorage and vmselect, which should
be introduced in the future when possible.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7857
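
A hypothetical sketch of the priority rule described above; the function and variable names are illustrative and do not match the actual vmselect/vmstorage code:

```go
package main

import "fmt"

// effectiveSeriesLimit returns the per-API limit when it is set, otherwise it
// falls back to the generic -search.maxUniqueTimeseries value.
func effectiveSeriesLimit(maxUniqueTimeseries, apiSpecificLimit int) int {
	if apiSpecificLimit > 0 {
		return apiSpecificLimit // e.g. -search.maxDeleteSeries for delete_series
	}
	return maxUniqueTimeseries
}

func main() {
	fmt.Println(effectiveSeriesLimit(300000, 30000)) // 30000: the API-specific limit wins
	fmt.Println(effectiveSeriesLimit(300000, 0))     // 300000: fallback to the generic limit
}
```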
2025-02-19 18:18:32 +01:00
Artem Fetishev
ba0d7dc2fc
Allow disabling per-day index (#6976)
### Describe Your Changes

Allow disabling the per-day index using the `-disablePerDayIndex` flag.
This should significantly improve the ingestion rate and decrease the
disk space usage for the use cases that assume small or no churn rate.
See the docs added to `docs/README.md` for details.

Both improvements are due to no data written to the per-day index.
Benchmark results:

```shell
rm -Rf ./lib/storage/Benchmark*; go test ./lib/storage -run=NONE -bench=BenchmarkStorageInsertWithAndWithoutPerDayIndex --loggerLevel=ERROR
goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/storage
cpu: 13th Gen Intel(R) Core(TM) i7-1355U
BenchmarkStorageInsertWithAndWithoutPerDayIndex/HighChurnRate/perDayIndexes-12                 1        3850268120 ns/op                39.56 data-MiB          28.20 indexdb-MiB           259722 rows/s
BenchmarkStorageInsertWithAndWithoutPerDayIndex/HighChurnRate/noPerDayIndexes-12               1        2916865725 ns/op                39.57 data-MiB          25.73 indexdb-MiB           342834 rows/s
BenchmarkStorageInsertWithAndWithoutPerDayIndex/NoChurnRate/perDayIndexes-12                   1        2218073474 ns/op                 9.772 data-MiB         13.73 indexdb-MiB           450842 rows/s
BenchmarkStorageInsertWithAndWithoutPerDayIndex/NoChurnRate/noPerDayIndexes-12                 1        1295140898 ns/op                 9.771 data-MiB          0.3566 indexdb-MiB         772119 rows/s
PASS
ok      github.com/VictoriaMetrics/VictoriaMetrics/lib/storage  11.421s
```

Signed-off-by: Artem Fetishev <wwctrsrx@gmail.com>
Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <hagen1778@gmail.com>
2025-02-14 12:35:51 +01:00
Nikolay
b9a7bda0a1
lib/storage: refactoring introduce OpenOptions
The MustOpenStorage function may accept a variable number of optional
arguments. This commit combines the optional arguments into a dedicated OpenOptions
struct, which reduces the complexity of adding new optional arguments.

Related PR:
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8118
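
A minimal sketch of the options-struct pattern this refactoring applies; the field names below are hypothetical and are not taken from lib/storage.OpenOptions:

```go
package main

import (
	"fmt"
	"time"
)

// OpenOptions groups optional storage parameters into a single struct, so
// adding a new option means adding a field rather than changing the
// MustOpenStorage signature.
type OpenOptions struct {
	Retention             time.Duration
	MaxHourlySeries       int
	MaxDailySeries        int
	TrackMetricNamesStats bool
}

// mustOpenStorage is a stand-in for storage.MustOpenStorage.
func mustOpenStorage(path string, opts OpenOptions) {
	fmt.Printf("opening %q with %+v\n", path, opts)
}

func main() {
	mustOpenStorage("/var/lib/victoria-metrics-data", OpenOptions{
		Retention:       31 * 24 * time.Hour,
		MaxHourlySeries: 1_000_000,
	})
}
```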
2025-02-13 11:10:44 +01:00
Zakhar Bessarab
0df20d4a4f
Merge tag 'v1.111.0' into pmm-6401-read-prometheus-data-files 2025-02-10 21:19:12 +04:00
Artem Fetishev
631b736bc2
lib/storage: fix cardinality limiting for cases when insertion takes fast path (#8218)
### Describe Your Changes

The cardinality limiter in this case does not receive the actual
metricID but some other value found in r.TSID.MetricID, which is not
initialized. Depending on the system and/or Go runtime implementation,
this value can be 0 or some garbage value (which shouldn't have too wide
a range). Thus, there is basically no limit on inserted metricIDs.

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2025-02-05 15:22:34 +01:00
Aliaksandr Valialkin
dc9dba71b2
lib/storage: open per-month partitions in parallel
This should reduce the time needed for opening the storage with retentions exceeding a few months.

While at it, limit the concurrency of opening partitions in parallel to the number of available CPU cores,
since higher concurrency may increase RAM usage and CPU usage without performance improvements
if opening a single partition is a CPU-bound task.

This is a follow-up for 17988942ab
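
A simplified sketch of the approach: open per-month partitions concurrently while capping the concurrency at the number of available CPU cores. The partition names and the open function are placeholders, not the actual lib/storage code:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func openPartition(name string) {
	fmt.Println("opening partition", name)
}

func main() {
	partitions := []string{"2024_11", "2024_12", "2025_01"}

	concurrency := runtime.GOMAXPROCS(0) // limit to the available CPU cores
	sem := make(chan struct{}, concurrency)

	var wg sync.WaitGroup
	for _, name := range partitions {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(name string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			openPartition(name)
		}(name)
	}
	wg.Wait()
}
```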
2025-01-27 16:07:14 +01:00
Zakhar Bessarab
c32fa83d38
Merge tag 'v1.109.1' into oss/pmm-6401-read-prometheus-data-files 2025-01-17 18:58:38 +04:00
Zakhar Bessarab
1c599d9661
Merge tag 'v1.109.0' into oss/pmm-6401-read-prometheus-data-files 2025-01-14 18:51:09 +04:00
Nikolay
277fdd1070
lib/storage: reduce test suite batch size (#8022)
Commit eef6943084 added new test
functions, which check various cases of metricName registration during
data ingestion.
The initial dataset size had 4 batches with 100 rows each. It works fine on
machines with 5GB+ memory.
But the i386 architecture supports only 4GB of memory per process.

Due to the given limitations, the batch size should be reduced to 3 batches of
30 rows. This keeps the same
test functionality, but reduces overall memory usage to ~3GB.

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-01-14 11:27:50 +01:00
Nikolay
e9f86af7f5
lib/storage: add a hint for merge about type of parts in merge (#7998)
The hint allows choosing the type of cache to be used for index search:
- in-memory parts store recently ingested samples and should use the
main cache. This improves ingestion speed and the cache hit ratio for
queries accessing recently ingested samples.
- merges of file parts are performed in the background; using a separate
cache allows avoiding pollution of the main cache with irrelevant
entries.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7182

---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2025-01-10 16:01:39 +04:00
Nikolay
9ada784983
lib/storage: make finalDedup schedule interval configurable
This commit makes the interval for checking whether the final dedup
process for historical data should be started configurable. It allows spreading
resource utilisation of multiple vmstorage/vmsingle instances over time,
since final dedup may add additional pressure on disk and backup systems
and make the cluster less stable. Storage unconditionally adds 25% jitter to
the provided value; this should simplify configuration management in the
Kubernetes ecosystem, because Kubernetes application pods must share the
same configuration.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7880
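
A sketch of the jitter behaviour described above: take the configured check interval and unconditionally add up to 25% of random jitter, so identically configured pods don't start final dedup in lockstep. This illustrates the idea only and is not the exact lib/storage code:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// addJitter returns d increased by a random amount of up to 25% of d.
func addJitter(d time.Duration) time.Duration {
	jitter := time.Duration(rand.Int63n(int64(d / 4)))
	return d + jitter
}

func main() {
	base := time.Hour // the configured check interval
	fmt.Println("next final dedup check in", addJitter(base))
}
```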


---------

Signed-off-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2025-01-10 10:46:46 +01:00
Roman Khavronenko
c464d4484f
lib/storage: update dedup tests
* update misleading comments about preferring NaNs on intervals. NaNs
are only preferred on timestamp conflicts
* add conflicting timestamps to the benchmark test. Previously, the
benchmark wasn't checking the timestamp conflict code branch. The
updated results after
c0fcfd6b97
are the following:
```
benchstat old.txt new.txt

goos: darwin
goarch: arm64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/storage
cpu: Apple M4 Pro
                                                       │   old.txt    │               new.txt                │
                                                       │    sec/op    │    sec/op     vs base                │
DeduplicateSamples/minScrapeInterval=3s-14               889.7n ± ∞ ¹   904.3n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=4s-14               735.9n ± ∞ ¹   748.7n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=10s-14              637.7n ± ∞ ¹   659.3n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=3s-14    838.8n ± ∞ ¹   810.4n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=4s-14    765.2n ± ∞ ¹   735.1n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=10s-14   673.1n ± ∞ ¹   622.4n ± ∞ ¹       ~ (p=1.000 n=1) ²
geomean                                                  751.7n         741.0n        -1.42%
```

---
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-12-16 12:50:41 +01:00
f41gh7
ec08a408d2
Merge tag 'v1.108.0' into pmm-6401-read-prometheus-data-files-cpc 2024-12-16 12:24:24 +01:00
Andrei Baidarov
0dc576d3da
lib/storage: prefer stale markers over other values on dedup interval
Previously, during de-duplication staleness markers could be removed due to incorrect logic in the
values equality check.
During the evaluation of a read query, vmselect deduplicates samples using the dedupInterval option. It picks the highest value across all points with the same timestamp next to the border of the dedupInterval. The issue is that any comparison with NaN via <, > returns false. This means that the position of NaN in srcValues could affect the result.

This commit adds an additional step that explicitly checks for the staleness marker in the following cases:
1. Deduplication on vmselect
2. Deduplication in vmstorage during merges
3. Deduplication in stream aggregation

The check is performed only for stale markers, because other NaNs are rejected on ingestion
by vmstorage or by stream aggregation.
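
A self-contained sketch of the kind of explicit check this requires: ordinary float comparisons cannot "prefer" a staleness marker because any comparison with NaN is false, so the marker has to be detected by its bit pattern. The helper below uses the Prometheus staleness-marker NaN bits and is an illustration, not the actual dedup code:

```go
package main

import (
	"fmt"
	"math"
)

// staleNaNBits is the bit pattern of the Prometheus staleness marker.
const staleNaNBits uint64 = 0x7ff0000000000002

func isStaleNaN(v float64) bool {
	return math.Float64bits(v) == staleNaNBits
}

// pickDedupValue chooses between two samples sharing the same timestamp:
// a staleness marker wins; otherwise the bigger value wins.
func pickDedupValue(a, b float64) float64 {
	if isStaleNaN(a) {
		return a
	}
	if isStaleNaN(b) {
		return b
	}
	if a > b {
		return a
	}
	return b
}

func main() {
	staleNaN := math.Float64frombits(staleNaNBits)
	fmt.Println(isStaleNaN(staleNaN), isStaleNaN(math.NaN())) // true false
	fmt.Println(pickDedupValue(42, staleNaN))                 // prints NaN: the marker wins
}
```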

Checking for stale markers in general slows down dedup speed by 3%:
```
 benchstat old.txt new.txt

goos: darwin
goarch: arm64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/storage
cpu: Apple M4 Pro
                                                       │   old.txt    │               new.txt                │
                                                       │    sec/op    │    sec/op     vs base                │
DeduplicateSamples/minScrapeInterval=1s-14               462.8n ± ∞ ¹   425.2n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=2s-14               905.6n ± ∞ ¹   903.3n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=5s-14               710.0n ± ∞ ¹   698.9n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamples/minScrapeInterval=10s-14              632.7n ± ∞ ¹   638.5n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=1s-14    439.7n ± ∞ ¹   409.9n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=2s-14    908.9n ± ∞ ¹   882.2n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=5s-14    721.2n ± ∞ ¹   684.7n ± ∞ ¹       ~ (p=1.000 n=1) ²
DeduplicateSamplesDuringMerge/minScrapeInterval=10s-14   659.1n ± ∞ ¹   630.6n ± ∞ ¹       ~ (p=1.000 n=1) ²
geomean                                                  659.5n         636.0n        -3.56%
```

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7674
---------
Co-authored-by: hagen1778 <roman@victoriametrics.com>
2024-12-12 12:34:17 +01:00
Andrii Chubatiuk
564e6ea024
app/{vminsert,vmagent}: drop time series on exceeding labels limits.
Previously, time series with labels exceeding the configured limits were truncated and written to storage, potentially causing data inconsistency. This could lead to collisions between time series and make it difficult to identify the source due to truncated labels.

This commit changes the behavior:
* Such time series are now rejected outright.
* Rejected time series are logged to stdout, and corresponding counters are incremented.
* Removes the `vm_too_long_label_values_total`, `vm_too_long_label_names_total`, `vm_metrics_with_dropped_labels_total` metrics.
* Adds the new values `too_many_labels`, `too_long_label_name`, `too_long_label_value` to the `reason` label of the `vm_rows_ignored_total` metric.

Related issues:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6928
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7661
2024-12-10 21:19:16 +01:00
f41gh7
d6cb7d09e5
Merge tag 'v1.107.0' into pmm-6401-read-prometheus-data-files-cpc 2024-12-02 11:34:23 +01:00
Andrii Chubatiuk
9cfdbc582f
refactoring: changed prompb to prompbmarshal everywhere where internal series transformations are happening (#7409)
### Describe Your Changes

Doing similar changes for both vmagent and vminsert (like the one in
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/7399) ends up
with almost the same implementation in each of the packages instead of having
this shared code in one place. One of the reasons is the same Timeseries
and Labels structures coming from the different prompb and prompbmarshal packages.
My proposal is to use structures from the prompb package only to
marshal/unmarshal sent/received data, but for internal transformations
to use only structures from the prompbmarshal package.

Another example where it can already help to simplify code is the streaming
aggregation pipeline for vmsingle (now it first marshals
prompb.Timeseries to storage.MetricRow and then, if streaming aggregation
or deduplication is enabled, it unmarshals all the series back, but to
prompbmarshal.Timeseries).

2024-11-26 12:45:17 +01:00
Artem Fetishev
3383589fd1
lib/storage: confirm that changing retention period can cause previous indexDB deletion (#7569)
### Describe Your Changes

Add test cases proving that it is possible to lose indexDB after
changing the retention period. See #7609

---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
2024-11-21 10:44:21 +01:00
f41gh7
54df0fa870
Merge tag 'v1.106.1' into pmm-6401-read-prometheus-data-files-cpc 2024-11-18 15:09:17 +01:00
Nikolay
1985110de2
lib/storage: properly check for minMissingTimestamps
After the changes in commit 787b9cd, the minimal timestamp check for extDB was performed without the context of the index search prefix.
It worked fine for the single-node version, but the cluster version used a different prefix for
metricID search requests. This may lead to incomplete results if the minimal missing timestamp was cached
for a tenant with different ingestion patterns.

The minimal reproducible case is:
- metrics were ingested for tenants 0 and 1
- at some point in time metrics ingestion for tenant 1 stopped
- index records have the following timestamps layout:
  tenant 0: 1,2,3,4,5,6
  tenant 1: 1,2,3,4
- after indexDB rotation, containsTimeRange lookups may produce
  incorrect results:
  a time range request for tenant 1 - 5:6 caches 5 as the min timestamp;
  a request for the same or a smaller time range for tenant 0 now returns
  empty results.

Second case:
- requests for a tenant without metrics always update the atomic value with an incorrect minimal time range for other tenants.

This commit replaces the single atomic with a map keyed by search prefix. It should have a slight performance overhead,
but works consistently for the cluster version. minMissingTimestamp is cached by the prefix search key, which includes the tenantID.

Since it is only populated at runtime, it doesn't hold unused tenants for queries.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7417
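
An illustrative sketch of the caching change: instead of a single atomic minimal timestamp shared by all tenants, the cached value is keyed by the index search prefix (which includes the tenantID), so one tenant's cached minimum cannot hide another tenant's results. The names and the key layout are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"sync"
)

// missingTimestampCache remembers, per index search prefix, the minimal
// timestamp for which a previous search found no data.
type missingTimestampCache struct {
	mu sync.Mutex
	m  map[string]int64
}

func newMissingTimestampCache() *missingTimestampCache {
	return &missingTimestampCache{m: make(map[string]int64)}
}

// containsTimeRange reports whether the read-only indexDB may contain data for
// the given prefix in a time range starting at minTimestamp.
func (c *missingTimestampCache) containsTimeRange(prefix string, minTimestamp int64) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	cachedMin, ok := c.m[prefix]
	if ok && minTimestamp >= cachedMin {
		// Nothing exists starting at cachedMin for this prefix, so a range
		// starting at or after it cannot match either.
		return false
	}
	return true
}

// storeMissing records that no data exists for the prefix starting at minTimestamp.
func (c *missingTimestampCache) storeMissing(prefix string, minTimestamp int64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if cur, ok := c.m[prefix]; !ok || minTimestamp < cur {
		c.m[prefix] = minTimestamp
	}
}

func main() {
	c := newMissingTimestampCache()
	c.storeMissing("tenant=1", 5)
	fmt.Println(c.containsTimeRange("tenant=1", 5)) // false: cached as missing for tenant 1
	fmt.Println(c.containsTimeRange("tenant=0", 5)) // true: tenant 0 is no longer affected
}
```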
2024-11-15 16:25:13 +01:00
Zakhar Bessarab
cd513b9758
Merge tag 'v1.106.0' into pmm-6401-read-prometheus-data-files 2024-11-04 13:56:36 -03:00
Zakhar Bessarab
9f9cc24e4c
Revert "lib/mergeset: add sparse indexdb cache (#7269)"
This reverts commit 837d0d136d.
2024-11-04 10:29:14 -03:00
Zakhar Bessarab
4e50d6eed3
lib/storage/partition: prevent panic in case resulting in-memory part is empty after merge (#7329)
It is possible for an in-memory part to be empty if the ingested samples are
removed by retention filters. In this case, the data will not be discarded
due to retention before creating the in-memory part. After the in-memory parts
merge, the samples will be removed, resulting in a completely empty
part at the destination.

This commit checks the resulting part and skips it if it's empty.
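
A minimal sketch of the safeguard: if the merge result contains no rows, skip creating the destination part. The types below are stand-ins, not the actual lib/storage/partition structures:

```go
package main

import "fmt"

type inmemoryPart struct {
	rowsCount uint64
}

// mergeInmemoryParts merges parts and returns nil when the result is empty,
// e.g. when retention filters dropped every sample during the merge.
func mergeInmemoryParts(parts []inmemoryPart) *inmemoryPart {
	var dst inmemoryPart
	for _, p := range parts {
		dst.rowsCount += p.rowsCount
	}
	if dst.rowsCount == 0 {
		return nil // skip the empty destination part instead of creating it
	}
	return &dst
}

func main() {
	if p := mergeInmemoryParts([]inmemoryPart{{rowsCount: 0}, {rowsCount: 0}}); p == nil {
		fmt.Println("empty merge result skipped")
	}
}
```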

---------
Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2024-10-27 20:40:13 +01:00
Zakhar Bessarab
837d0d136d
lib/mergeset: add sparse indexdb cache (#7269)
Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7182

- add a separate index cache for searches which might read through large
amounts of random entries. The primary use-case for this is retention and
downsampling filters: when applying filters, background merge needs to
fetch a large amount of random entries, which pollutes the index cache.
Using different caches allows reducing the effect on memory usage and cache
efficiency of the main cache while still having a high cache hit rate. The
separate cache size is 5% of the allowed memory.

- reduce the size of the indexdb/dataBlocks cache in order to free memory for
the new sparse cache. Reduced its size by 5% and moved this to a separate cache.

- add a separate metricName search which does not cache metric names -
this is needed in order to allow disabling metric name caching when
applying downsampling/retention filters. Applying filters during
background merge accesses random entries; this fills up the cache and does
not provide an actual improvement due to the random access nature.


Merge performance and memory usage stats before and after the change:

- before

![image](https://github.com/user-attachments/assets/485fffbb-c225-47ae-b5c5-bc8a7c57b36e)


- after

![image](https://github.com/user-attachments/assets/f4ba3440-7c1c-4ec1-bc54-4d2ab431eef5)

---------

Signed-off-by: Zakhar Bessarab <z.bessarab@victoriametrics.com>
2024-10-24 15:21:17 +02:00
Artem Fetishev
6b9f57e5f7
lib/storage: Fix flaky test: TestStorageRotateIndexDB (#7267)
This commit fixes the TestStorageRotateIndexDB flaky test reported at:
#6977. Sample test failure: https://pastebin.com/bTSs8HP1

The test fails because one goroutine adds items to the indexDB table
while another goroutine is closing that table. This may happen if
indexDB rotation happens twice during one Storage.add() operation:
- Storage.add() takes the current indexDB and adds index records to it
- First index db rotation makes the current index DB a previous one
(still ok at this point)
- Second index db rotation removes the indexDB that was current two
rotations earlier. It does this by setting the mustDrop flag to true and
decrementing the ref counter. The ref counter reaches zero, which causes
the underlying indexdb table to release its resources gracefully.
Graceful release assumes that the table is not written to anymore. But
Storage.add() still adds items to it.

The solution is to increment the indexDB ref counters while it is used
inside add().
The unit test has been changed a little so that the test fails reliably.
The idea is to make the add() function invocation last much longer,
therefore the test inserts not just one record at a time but thousands
of them.

To see the test fail, just replace the idbsLocked() func with:

```go
func (s *Storage) idbsLocked2() (*indexDB, *indexDB, func()) {
	return s.idbCurr.Load(), s.idbNext.Load(), func() {}
}
```


---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
2024-10-23 11:48:21 +02:00
f41gh7
cf7eb6bc7c v1.105.0
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEcUy2K0fVfVQqlyTqRVgxHPd17HIFAmcWVZgACgkQRVgxHPd1
 7HJ7yg/+LToSxly5iKgyZlyBToTjWIs+NhPyrDJaDXHzXkxMdcc/p43WFazjKD0A
 Sp47oKqSDUU9Bde32mk97jKq6INHQGY3SWKg6EY8pKtTtiJFol9O1Tn7wOFVr9hK
 bcfs8Q+Ibbue/YaDAKM7oaZdSfSGPA8O6vqJPtAaMgRDb7J1mBTA5a2Cs3utE30C
 FRz0wkqwf/zEyle8Tg7e2GXmn3RleiWpinhPyQg4oVoxvid4DCNSAMzmc/gZogN3
 twcf/ynH7RfysoP4iQc6Bsc417lkJvA6TcKLjm+VP6yzXcSXyqwoQSbT7zNdOgwz
 9d7M6LpJZ75voVO18f77pZj/BEYjjAlFrxGxAtT/WAEml/fDYT6NHpLpSmwWXweX
 uJjI5SLr92/0rNWnMicSC/pzd4MQOxjSfF9ij7GqPXxeFt8hGrE5fqbYHz/DXQvM
 kEMtsjDVn40FsXwz0Jxti/zPBI0J/AJlkFJF9xp0jLXYbgDb+3KaJJ3MfmHciw3V
 NrSus28nlfyMba5ktES0ZszWeJk0MTLKmiw9Q6otLDo1gtHW66ijQeIHkEIJ5KhR
 EEAYTEZyXujX96cAfrINHzFJNLFA6Zbx6oKnAZx/mMHbfrt15vd9mVb5NMWcKl65
 DDPF7dFZDB/t3HYPPmwxzdN+LptZAmtQMHh91kJYTuhKhs71lH8=
 =spFl
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEcUy2K0fVfVQqlyTqRVgxHPd17HIFAmcWiOIACgkQRVgxHPd1
 7HL4Ow//fv7iw0673L/64oGsH1XgHJKfualZj98ql3bWu4iD/LZ4XT/zqUqQD4cA
 80gtMkudTu+qAqDCmj/tJhqaDO5bIChzyLE7Esk5r+7sM1vP7Na1lxZS96r2F2Gg
 1E4gWKbp1e+Ms+ud7d8+B5cbFndZRw3rBxpsONyqQaE/GzpZv1mUomeDUEkSy0Oi
 qUo4V65Ei1ZXDN8saBb0zKDOhTPcICfhmMMyRMcF7wkAR4JRt84nmZrHZ6AORW8K
 xEuz1bXihT0HnLaxQsuPG9WCL0xOqTOnzL2Amtw5sPi6dLOcd6Lp8Z79B/uP3+Iy
 NMOLaaMJldM3pc+ZNDxYKAx4cSzOmECAs9ldiGa2QoxzEAQ4qrNfG/mdrauExVW3
 vVJ0uK5S+GL+rEKGQcD1d7fkTDizPXjuWCgCwmM0j84jriF/0slNcFe+5bPBrw3z
 vvUTyuYsv16abHraEbUq5G5ekRKhAd6QkAzyzTKcrNiKqYL5zlyJAjkfinMdhf9e
 hBvtyZqvkhxgRRL1WlAFsQY+QwkWHLbrTQU8AKB2G4qLQfiqn9Lald6LD/A/HG4E
 uXba+ndGuM0anB43L+W9UjQmF31urxLBuLag59J4JhEmKMYGrxeK4xnhpH8k1hA/
 7T4eREKl03R3IaTg9taOJI6vvnuWxGJ0Q5B/AZ3z32fLXdIMaGc=
 =glUW
 -----END PGP SIGNATURE-----

Merge tag 'v1.105.0' into pmm-6401-read-prometheus-data-files-cpc

v1.105.0
2024-10-21 19:01:12 +02:00
Zhu Jiekun
8c50c38a80
vmstorage: auto calculate maxUniqueTimeseries based on resources (#6961)
### Describe Your Changes

Add support for
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6930

Calculate `-search.maxUniqueTimeseries` from
`-search.maxConcurrentRequests` and the remaining memory if it is **not set**
or **less than or equal to 0**.

The remaining memory is affected by `-memory.allowedPercent`,
`-memory.allowedBytes` and the cgroup memory limit.
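
A purely illustrative sketch of the auto-calculation idea: when the flag is unset (or <= 0), derive the limit from the remaining memory and `-search.maxConcurrentRequests`. The per-series memory cost below is an assumed placeholder, not the constant used by vmstorage:

```go
package main

import "fmt"

func autoMaxUniqueTimeseries(maxUniqueTimeseries, maxConcurrentRequests int, remainingMemoryBytes int64) int {
	if maxUniqueTimeseries > 0 {
		return maxUniqueTimeseries // an explicitly configured value wins
	}
	const assumedBytesPerSeries = 1000 // placeholder assumption, not the real constant
	return int(remainingMemoryBytes / int64(maxConcurrentRequests) / assumedBytesPerSeries)
}

func main() {
	// The remaining memory depends on -memory.allowedPercent, -memory.allowedBytes
	// and the cgroup memory limit; here it is just a hard-coded example value.
	fmt.Println(autoMaxUniqueTimeseries(0, 16, 4<<30))
}
```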

---------

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>

(cherry picked from commit 85f60237e2)
Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-10-18 14:00:14 +02:00
hagen1778
2404b4bc00 v1.104.0
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEkhL6N9vmSTjg0VSVO/dfN0HKlkAFAmb9KqIACgkQO/dfN0HK
 lkAXcBAAluUXBa4oYV4g/xvsd30oXtC79DoFY527K1cTgesDohf0FdLGU++7Aphm
 efOR8BaytBPGHGn9PmuZIbebiFv6TVBih7b8gl+frm/yGLh/1WyAYp2sClB1KcJa
 r7rHBMF7sikDkLPFlJv9qYhERj05aUTc/uwWn7KzUMPbmUZcXOJhxttm1Hf7Rc6P
 zcO1cymSEouzSOw0qoHFHRZYgkt9j1GW36vUgEX6+b3VJvOAhoaolw6OX65wt8Cm
 +YdXW51gEalZRIRNtgY3lDJnCAHn72RsRbLpylyGW1TcuBnwfSIWlPpLU04IGVlx
 06Vl47o/6vEBoVKk+2Y6La4iwD8+x/Td1RlrELOo4Qzrv1ppqOCveUa0wh6JQfjB
 aQawE7Yzh35qKvRVZtgY8NaUzkTL2QISlnpkokHfZZLIn6WAhok4c+vxnCl5CaBE
 3yRenqZ/OdMs+Wa8WMb6thcxA0eQ40t3B35iYyvMJdhSKDtdNT2F5kFh7ve6Woiu
 2TmN+GWPM0zBMVEVGy1i1L+42dlG6ANY3p5a8vz0qfqBBJF+V+P/BetfejTPjJ7r
 PN6HpdcfN+a+FGsUWckhFSU7z0LFJIytQyxb6vGn5N1UW0pupQMs5E/jFcFJl2/Q
 yO8WZmGm4QfhupcfgAkTIBgsUliqmIBXsNk6sjhzwbxYBtXSqwQ=
 =F+22
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEkhL6N9vmSTjg0VSVO/dfN0HKlkAFAmb9XnUACgkQO/dfN0HK
 lkAYJw/8D+fAkqo48iNynRZf8N7kR22XBVc+zvJIFIL8wyScMec3o6+rRQ5WmmEk
 MxT73EWW2Nv81L4JC585u3zutM7Ow7nG1paxrF2hWLNAniKJd+Z+okRWThf89c8Q
 IC2egeVtgQ9ADNBTNGF72FsBBj+P6rv3Xe/M0XSLCS4mLY1eVnhdx7yuQsSNkzpr
 hxndq5odwEprFNXe9WEgH04ekS3u0ZMzWidhSHJpZVXDt6iFTxfoD+NkYpPRIZuc
 KwE0Zm1eTn98MJNZvoVyJ2hbD3f513I5yvdaNMFZ0I08Dh281uugYZu8r7mwqS49
 0uCC9PoEuErYbCGCGjmXOGVnyB6vvRjIfIOif/M1KqpH5g7xTKWc9S23P2ib3HgI
 brFl5EDl1Qa+qnkwWC98G58b85hjTJjLYhbst+O/MW+j6W2zihrt0N9UsKKTPgzj
 xvLhYz97wF0GCOfD5sZyyMdTCI6QWqtbE79ysHw+WCSrbZIKh6MFp6eO6qQF3JWT
 9IPT6O9G57Q9iwtS+MSVgriobE7qV/fHB/ICiciTGtsYfsovwxnq8BJuBiehwqau
 deqf4gbsZQiME1i+o9nnOcekDXkziKnkJIv8E5NBq77NQEzliSwfHwoaTtEusj7n
 4XbgRX37B8XtANVg1twWZb8gYFtxqYoojymAKx/Ag2e4I3qnzbM=
 =e9ts
 -----END PGP SIGNATURE-----

Merge tag 'v1.104.0' into pmm-6401-read-prometheus-data-files

v1.104.0

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-10-02 16:53:41 +02:00
Roman Khavronenko
0d4f4b8f7d
(app|lib)/vmstorage: do not increment vm_rows_ignored_total on NaNs (#7166)
The `vm_rows_ignored_total` metric is a metric for users to signal
ingestion issues, such as a bad timestamp or a parsing error.
In commit
a5424e95b3
this metric started to increment each time vmstorage got a NaN. But NaN
is a valid value in the Prometheus data model and in the Prometheus metrics
exposition format. Exporters from the Prometheus ecosystem may expose NaNs
as metric values, and these values will be delivered to vmstorage
and increment the metric.
Since there is nothing the user can do about this, in contrast to parsing
errors or bad timestamps, there is not much sense in incrementing this
metric. So this commit rolls back the `reason="nan_value"` increments.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
2024-10-02 12:37:27 +02:00
Artem Fetishev
ed5da38ede
Introduce a flag for limiting the number of time series to delete (#7091)
### Describe Your Changes

Introduce the `-search.maxDeleteSeries` flag that limits the number of
time series that can be deleted with a single
`/api/v1/admin/tsdb/delete_series` call.

Currently, any number can be deleted and if the number is big (millions)
then the operation may result in unaccounted CPU and memory usage spikes
which in some cases may result in OOM kill (see #7027). The flag limits
the number to 30k by default and the users may override it if needed at
the vmstorage start time.


---------

Signed-off-by: Artem Fetishev <rtm@victoriametrics.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
2024-09-30 10:02:21 +02:00
Artem Fetishev
55febc0920
lib/storage: restore ability to put empty metric ID list into tagFiltersToMetricIDsCache (#7064)
### Describe Your Changes

Currently, if the metricID list is empty, it won't be marshalled and as a
result won't be put into the tagFiltersToMetricIDsCache, which causes
cache misses for the corresponding tagFilters. In some setups this
causes severe search speed degradation (see #7009).

The empty metricID case was covered before but then was accidentally
removed in 6c21439.

This PR restores the coverage of this case.

A new unit test can be used as a proof that empty metricID lists are not
added to the cache (just remove the fix in index_db.go and run the test
to see the result)

Also a benchmark has been added to see the implications of the
compression.

```
user@laptop:~/p/github.com/rtm0/VictoriaMetrics/01/src$ go test ./lib/storage/ -run=NONE -bench BenchmarkMarshalUnmarshalMetricIDs --loggerLevel=ERROR
goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/storage
cpu: 13th Gen Intel(R) Core(TM) i7-1355U
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-0-12             3237240               363.5 ns/op               0 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-1-12             2831049               451.8 ns/op               0.4706 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-10-12            1152764              1009 ns/op                 1.667 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-100-12            297055              3998 ns/op                 5.755 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-1000-12            31172             34566 ns/op                 8.484 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-10000-12            4900            289659 ns/op                 9.416 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-100000-12            447           2341173 ns/op                 9.456 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-1000000-12            42          24926928 ns/op                 9.468 compression-rate
BenchmarkMarshalUnmarshalMetricIDs/numMetricIDs-10000000-12            5         204098872 ns/op                 9.467 compression-rate
PASS
ok      github.com/VictoriaMetrics/VictoriaMetrics/lib/storage  15.018s
```

---------

Signed-off-by: Artem Fetishev <wwctrsrx@gmail.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
2024-09-20 17:21:53 +02:00
Aliaksandr Valialkin
787b9cd9a0
lib/storage: improve performance for indexSearch.containsTimeRange()
The indexSearch.containsTimeRange() function is called for the current indexDB and the previous indexDB
every time when searching for metricIDs by label filters. This function consumes a lot of additional CPU time
for cases when queries with lightweight label filters are sent to VictoriaMetrics at high rate (e.g. thousands of RPS),
like in the issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/7009 .

Optimize indexSearch.containsTimeRange() function in the following ways:

- Unconditionally return true if this function is called for the current indexDB, since there are very high
  chances that the current indexDB contains the data with timestamps in the requested time range.

- Cache the minimum timestamp, which is missing in the indexed data for the previous indexDB.
  This is safe to do, since the previous indexDB is readonly.
  This optimization eliminates potentially slow lookup in the previous indexDB for typical
  use cases when the requested time range is close to the current time.
2024-09-20 13:07:20 +02:00
Aliaksandr Valialkin
6f61e9d49d
lib/storage: simplify indexDB.doExtDB() usage by removing the returned value
Previously indexDB.doExtDB() returned a boolean value indicating whether the f callback was called.
There is no need to return this boolean value, since the f callback can determine on its own whether it was called.

This simplifies the code a bit.

While at it, document indexDB.doExtDB().
2024-09-20 11:59:57 +02:00
Roman Khavronenko
218c533874
lib/storage: follow-up after d8f8822fa5 (#7036)
Make function name and comments more clear.

d8f8822fa5

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
2024-09-20 11:50:47 +02:00
Nikolay
d8f8822fa5
lib/storage: consistently check for missing metricID index records (#6967)
* Previously, only missing metricID->metricName index records were
tracked with a deadline. But missing metricID->TSID index records were
also possible. The indexDB metrics fix exposed a misleading
metric for such missing records.

* This commit adds a check for missing metricID->TSID index records and
deletes the missing metricID entry if it hits the 60-second deadline.

Related issue
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6931

Signed-off-by: f41gh7 <nik@victoriametrics.com>
2024-09-16 10:05:08 +02:00
Artem Fetishev
a5424e95b3
lib/storage: adds metrics that count records that failed to insert
### Describe Your Changes

Add storage metrics that count records that failed to insert:

- `RowsReceivedTotal`: the number of records that have been received by
the storage from the clients
- `RowsAddedTotal`: the number of records that have actually been
persisted. This value must be equal to `RowsReceivedTotal` if all the
records have been valid ones, but it will be smaller otherwise. The
values of the metrics below should provide insight into why some
records haven't been added
- `NaNValueRows`: the number of records whose value was `NaN`
- `StaleNaNValueRows`: the number of records whose value was `Stale NaN`
- `InvalidRawMetricNames`: the number of records whose raw metric name
failed to unmarshal.

The following metrics existed before this PR and are listed here for
completeness:

- `TooSmallTimestampRows`: the number of records whose timestamp is
negative or is older than retention period
- `TooBigTimestampRows`: the number of records whose timestamp is too
far in the future.
- `HourlySeriesLimitRowsDropped`: the number of records that have not
been added because the hourly series limit has been exceeded.
- `DailySeriesLimitRowsDropped`: the number of records that have not
been added because the daily series limit has been exceeded.

---
Signed-off-by: Artem Fetishev <wwctrsrx@gmail.com>
2024-09-06 17:57:21 +02:00
Artem Fetishev
39294b4919
lib/storage: do not drop stale NaN samples (#6936)
This patch reverts 1fd3385

After discussing it we've come to the conclusion that this is a valid
behavior which can be avoided by deleting the time series only once the
corresponding stale NaNs have been received.

On the other hand, the fix leads to lost stale NaNs in some rare but
valid use cases. For example:

- In a cluster configuration the samples for a given time series are
normally sent to the same vmstorage replica. However, vminsert may
reroute the samples to another replica because the original one is down
or is overloaded. In this case the stale NaN may end up on a replica
that has no data for that time series, but we still want to record that
sample.

Thus, reverting that fix.

---

related issue https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5069

Signed-off-by: Artem Fetishev <wwctrsrx@gmail.com>
2024-09-05 16:45:09 +02:00
Hui Wang
b48f5f3e59
lib/storage: fix metric vm_object_references{type="indexdb"} (#6937)
follow up
4ecc370acb

2024-09-05 16:42:49 +02:00
rtm0
4df243d530
lib/storage: improve the message of the tooManyTimeseries error (#6893)
### Describe Your Changes

This is a follow-up for #6836. Per @valyala's
[comment](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/6836#discussion_r1730291704),
the error message does not reflect which flag needs to be adjusted.

---------

Signed-off-by: Artem Fetishev <wwctrsrx@gmail.com>
2024-09-03 10:28:03 +02:00
rtm0
2c856c6951
tests: check Metrics.RowsAddedTotal in unit tests (#6895)
### Describe Your Changes

This is a follow-up PR: Unit tests introduced in #6872 can now use
RowsAddedTotal counter whose scope was fixed in #6841.

Signed-off-by: Artem Fetishev <wwctrsrx@gmail.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
2024-08-30 14:31:15 +02:00
f41gh7
e3e06b1f47
Merge remote-tracking branch 'origin/master' into pmm-6401-read-prometheus-data-files-cpc 2024-08-29 15:47:43 +02:00
Nikolay
4ecc370acb
lib/storage: properly add previous indexDB metrics (#6890)
Previously, some extIndexDB metrics were not registered. This resulted
in missing metrics if a metric value was added to the extIndexDB. It's
a usual case for search requests that hit both indexes.

The current commit updates all metrics from extIndexDB in line with the
current indexDB. It must fix such cases.

Related issue:
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6868

2024-08-28 11:14:28 +02:00
rtm0
9fcfba3927
lib/storage: properly handle maxMetrics limit at metricID search
`TL;DR` This PR improves the metric IDs search in IndexDB:

- Avoid searching for metric IDs twice when the `maxMetrics` limit is
exceeded
- Use the correct error type for indicating that the `maxMetrics` limit is
exceeded
- Simplify the logic of deciding between per-day and global index search

A unit test has been added to ensure that this refactoring does not
break anything.

---

Function calls before the fix:

```
idb.searchMetricIDs
    |__ is.searchMetricIDs
        |__ is.searchMetricIDsInternal
            |__ is.updateMetricIDsForTagFilters
                |__ is.tryUpdatingMetricIDsForDateRange
                |                       |
                |__ is.getMetricIDsForDateAndFilters
```

- `searchMetricIDsInternal` searches metric IDs for each filter set. It
maintains a metric ID set variable which is updated every time the
`updateMetricIDsForTagFilters` function is called. After each successful
call, the function checks the length of the updated metric ID set and if
it is greater than `maxMetrics`, the function returns `too many
timeseries` error.
- `updateMetricIDsForTagFilters` uses either the per-day or the global index to
search metric IDs for the given filter set. The decision of which index
to use is made within the `tryUpdatingMetricIDsForDateRange`
function, and if it returns the `fallback to global search` error then the
function uses the global index by calling `getMetricIDsForDateAndFilters`
with zero date.
- `tryUpdatingMetricIDsForDateRange` first checks if the given time
range is larger than 40 days and if so returns `fallback to global
search` error. Otherwise it proceeds to searching for metric IDs within
that time range by calling `getMetricIDsForDateAndFilters` for each
date.
- `getMetricIDsForDateAndFilters` searches for metric IDs for the given
date and returns `fallback to global search` error if the number of
found metric IDs is greater than `maxMetrics`.

Problems with this solution:

1. The `fallback to global search` error returned by
`getMetricIDsForDateAndFilters` in the case when maxMetrics is exceeded is
misleading.
2. If `tryUpdatingMetricIDsForDateRange` proceeds to date range search
and returns `fallback to global search` error (because
`getMetricIDsForDateAndFilters` returns it) then this will trigger
global search in `updateMetricIDsForTagFilters`. However the global
search uses the same maxMetrics value which means this search is
destined to fail too. I.e. the same search is performed twice and fails
twice.
3. The `too many timeseries` error is already handled in
`searchMetricIDsInternal` and therefore handling this error in
`updateMetricIDsForTagFilters` is redundant
4. `updateMetricIDsForTagFilters` is a better place to decide whether to
use the per-day or the global index.

Solution:

1. Use a dedicated error for the `too many timeseries` case
2. Handle the `too many timeseries` error in `searchMetricIDsInternal` only
3. Move the per-day or global search decision from
`tryUpdatingMetricIDsForDateRange` to `updateMetricIDsForTagFilters` and
remove `fallback to global search` error.
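
A simplified sketch of the restructured decision: the caller chooses between per-day and global index search based on the time range, and exceeding the limit is reported with a dedicated error instead of the misleading `fallback to global search` one. All names are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errTooManyTimeseries = errors.New("the number of matching timeseries exceeds the configured limit")

const maxDaysForPerDaySearch = 40

func searchMetricIDs(minTime, maxTime time.Time, maxMetrics int) ([]uint64, error) {
	// The per-day vs global decision now lives in the caller.
	if maxTime.Sub(minTime) > maxDaysForPerDaySearch*24*time.Hour {
		return searchGlobalIndex(maxMetrics)
	}
	return searchPerDayIndex(minTime, maxTime, maxMetrics)
}

func searchPerDayIndex(minTime, maxTime time.Time, maxMetrics int) ([]uint64, error) {
	// ... per-day search; once maxMetrics is exceeded it returns the dedicated
	// error instead of silently triggering a second, equally doomed global search.
	return nil, errTooManyTimeseries
}

func searchGlobalIndex(maxMetrics int) ([]uint64, error) {
	// ... global (zero-date) search.
	return nil, nil
}

func main() {
	_, err := searchMetricIDs(time.Now().Add(-time.Hour), time.Now(), 30000)
	fmt.Println(errors.Is(err, errTooManyTimeseries)) // true
}
```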


---------

Signed-off-by: Artem Fetishev <wwctrsrx@gmail.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
2024-08-27 21:39:03 +02:00