Commit fa566c68a6
It has turned out that the registration of new time series slows down linearly with the number of indexdb parts, since VictoriaMetrics needs to check every indexdb part when it searches for the TSID of a newly ingested metric name. The number of in-memory parts grows when new time series are registered at a high rate. It grows faster on systems with a large number of CPU cores, because the mergeset maintains per-CPU buffers of newly added indexdb entries, and every such buffer is eventually turned into a separate in-memory part.

The solution suggested by @misutoth in https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5212 is to limit the number of in-memory parts with a buffered channel. This commit implements that solution. Additionally, this commit merges the per-CPU parts into a single part before adding it to the list of in-memory parts, which reduces CPU load when searching for the TSID of a newly ingested metric name.

https://github.com/VictoriaMetrics/VictoriaMetrics/pull/5212 recommends limiting the number of in-memory parts to 100, but internal testing shows that a much lower limit of 15 works with the same efficiency on a system with 16 CPU cores while reducing memory usage of the `indexdb/dataBlocks` cache by up to 50%.

Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/5190
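The limiting idea described above can be illustrated with a buffered channel used as a counting semaphore. The sketch below is not the actual VictoriaMetrics implementation; the identifiers (`partsLimiter`, `mergePerCPUParts`, `inmemoryPart`, etc.) are made up for illustration, and only the limit of 15 slots and the "merge per-CPU buffers into one part" step come from the commit description.

```go
package main

import (
	"fmt"
	"sync"
)

// inmemoryPart stands in for an in-memory indexdb part.
type inmemoryPart struct {
	items []string
}

// partsLimiter bounds how many in-memory parts may exist at once.
// Acquiring a slot blocks once the limit is reached, so writers are
// throttled instead of letting the part count grow without bound.
type partsLimiter struct {
	ch chan struct{}
}

func newPartsLimiter(maxParts int) *partsLimiter {
	return &partsLimiter{ch: make(chan struct{}, maxParts)}
}

func (l *partsLimiter) acquire() { l.ch <- struct{}{} }
func (l *partsLimiter) release() { <-l.ch }

// mergePerCPUParts merges the per-CPU buffers into a single part before it is
// registered, so lookups scan one part instead of one part per CPU core.
func mergePerCPUParts(perCPU [][]string) *inmemoryPart {
	merged := &inmemoryPart{}
	for _, items := range perCPU {
		merged.items = append(merged.items, items...)
	}
	return merged
}

func main() {
	limiter := newPartsLimiter(15) // the limit suggested by the testing above

	var mu sync.Mutex
	var parts []*inmemoryPart

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Pretend these are the per-CPU buffers with newly added entries.
			perCPU := [][]string{
				{fmt.Sprintf("item-%d-a", i)},
				{fmt.Sprintf("item-%d-b", i)},
			}

			limiter.acquire() // blocks when 15 parts are already in flight
			p := mergePerCPUParts(perCPU)
			mu.Lock()
			parts = append(parts, p)
			mu.Unlock()

			// In the real system the slot would be released once the part has
			// been merged into a bigger part; released immediately here to
			// keep the sketch short.
			limiter.release()
		}(i)
	}
	wg.Wait()
	fmt.Printf("created %d parts\n", len(parts))
}
```

The buffered channel acts as a back-pressure mechanism: its capacity is the hard cap on concurrent in-memory parts, and blocking on `acquire` naturally slows down ingestion instead of letting the per-part search cost grow linearly.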
- block_header.go
- block_stream_reader.go
- block_stream_reader_test.go
- block_stream_writer.go
- encoding.go
- encoding_test.go
- encoding_timing_test.go
- filenames.go
- inmemory_part.go
- merge.go
- merge_test.go
- metaindex_row.go
- part.go
- part_header.go
- part_search.go
- part_search_test.go
- table.go
- table_search.go
- table_search_test.go
- table_search_timing_test.go
- table_test.go