This reverts cd4f641d32, since it turned out that disabling compression
for vmstorage->vmselect data increases network bandwidth usage by more than 10x on typical production workloads,
while decreasing CPU usage at vmstorage by up to 10% and improving query latency by up to 10%.
A 10x increase in network usage is too high a price for 10% improvements in query latency and vmstorage CPU usage.
It may result in network bandwidth bottlenecks, which can reduce the overall performance and stability
of a VictoriaMetrics cluster. That's why vmstorage->vmselect data compression is enabled by default again.
The vmstorage->vmselect compression can be disabled by passing the -rpc.disableCompression command-line flag to vmstorage.
The vmselect->vmselect compression in a multi-level cluster setup can be disabled by passing the -clusternative.disableCompression command-line flag.
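For illustration, here is a minimal Go sketch, assuming a plain net.Conn between the nodes, of how stream compression can be layered onto the connection and switched off by a boolean with -rpc.disableCompression-like semantics. wrapWriter and the use of compress/flate are invented for this sketch; this is not the actual VictoriaMetrics RPC layer:

```go
package main

import (
	"compress/flate"
	"io"
	"net"
)

// wrapWriter returns the writer to use for sending data to the peer:
// the raw connection when compression is disabled, or a flate-compressed
// stream otherwise. flate.BestSpeed keeps CPU overhead low, mirroring
// the trade-off described above.
func wrapWriter(conn net.Conn, disableCompression bool) (io.WriteCloser, error) {
	if disableCompression {
		return conn, nil
	}
	return flate.NewWriter(conn, flate.BestSpeed)
}

func main() {
	client, server := net.Pipe()
	defer client.Close()
	defer server.Close()
	go io.Copy(io.Discard, server) // drain the peer side of the pipe

	w, err := wrapWriter(client, false)
	if err != nil {
		panic(err)
	}
	w.Write([]byte("time series data"))
	w.Close() // flushes the compressed stream; the conn itself stays open
}
```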
This should prevent out-of-memory errors when a big number of vmselect
nodes send many concurrent requests to vmstorage.
The limit can be controlled at vmstorage via the following command-line flags:
- search.maxConcurrentRequests
- search.maxQueueDuration
See https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#resource-usage-limits
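For illustration, here is a minimal Go sketch of how such a limit can work, assuming a channel-based counting semaphore. The flag names above are the real vmstorage flags, while runLimited, maxConcurrent and maxQueueDuration below are invented for this sketch and are not VictoriaMetrics' actual implementation:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var (
	maxConcurrent    = 2                      // cf. -search.maxConcurrentRequests
	maxQueueDuration = 100 * time.Millisecond // cf. -search.maxQueueDuration

	// Buffered channel used as a counting semaphore.
	sem = make(chan struct{}, maxConcurrent)
)

var errQueueTimeout = errors.New("request spent too long in the wait queue")

// runLimited executes f if a slot becomes free within maxQueueDuration;
// otherwise it rejects the request instead of letting queued requests
// pile up in memory.
func runLimited(f func()) error {
	t := time.NewTimer(maxQueueDuration)
	defer t.Stop()
	select {
	case sem <- struct{}{}:
		defer func() { <-sem }()
		f()
		return nil
	case <-t.C:
		return errQueueTimeout
	}
}

func main() {
	// Five concurrent requests against two slots: two run, the rest
	// are rejected after waiting maxQueueDuration.
	for i := 0; i < 5; i++ {
		go func(i int) {
			err := runLimited(func() { time.Sleep(200 * time.Millisecond) })
			fmt.Printf("request %d: %v\n", i, err)
		}(i)
	}
	time.Sleep(500 * time.Millisecond)
}
```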
* {app/vmstorage,app/vmselect}: add API to get list of existing tenants (see the sketch after this list)
* app/vmselect: fix error message
* {app/vmstorage,app/vmselect}: fix error messages
* app/vmselect: change log level for error handling
* wip
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
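For illustration, a minimal Go sketch of calling the new tenants API on vmselect. The /admin/tenants path follows vmselect's tenants API, while the response shape and the vmselect address below are assumptions made for this sketch:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tenantsResponse is an assumed response shape for the tenants API.
type tenantsResponse struct {
	Status string   `json:"status"`
	Data   []string `json:"data"` // tenants in "accountID:projectID" form
}

func main() {
	// The vmselect address is a placeholder for this sketch.
	resp, err := http.Get("http://vmselect:8481/admin/tenants")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		panic(fmt.Sprintf("unexpected status code: %d", resp.StatusCode))
	}

	var tr tenantsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil {
		panic(err)
	}
	fmt.Println("tenants:", tr.Data)
}
```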
This should improve vmselect performance scalability on systems with many CPU cores.
The following tasks were done:
- Use separate temporary files for storing the data read from each vmstorage node (see the first sketch below).
This may result in the following potential issues:
- Up to N times higher memory usage when performing a query, where N is the number
of vmstorage nodes known to vmselect.
This shouldn't increase the chance of out-of-memory errors in most cases,
since the per-query memory overhead is quite low compared to the overall vmselect memory usage.
- Up to N times more open temporary files, where N is the number
of vmstorage nodes known to vmselect.
This can be addressed by increasing the limit on the number of open files.
- Use separate per-vmstorage-node counters for stats calculation
when reading the data from vmstorage nodes (see the second sketch below).
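For illustration, two minimal Go sketches of the ideas above; all names are invented and this is not the actual vmselect code. First, giving each vmstorage node its own temporary file, so concurrent per-node readers never serialize on a single shared file:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	storageNodes := []string{"vmstorage-1", "vmstorage-2", "vmstorage-3"}

	tmpFiles := make([]*os.File, len(storageNodes))
	for i := range storageNodes {
		// One temporary file per node removes the locking that a single
		// shared file would need when nodes are read concurrently.
		f, err := os.CreateTemp("", fmt.Sprintf("vmselect-node%d-*.tmp", i))
		if err != nil {
			panic(err)
		}
		tmpFiles[i] = f
	}
	defer func() {
		for _, f := range tmpFiles {
			f.Close()
			os.Remove(f.Name())
		}
	}()

	// Each goroutine reading from node i would write into tmpFiles[i]
	// without coordinating with the other readers.
	for i, f := range tmpFiles {
		fmt.Fprintf(f, "data from %s\n", storageNodes[i])
	}
}
```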
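Second, keeping separate stats counters per vmstorage node, assuming one reader goroutine per node, so counter updates don't contend on shared state; nodeStats and its field names are invented for this sketch:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// nodeStats holds counters updated only while reading from one
// vmstorage node, so concurrent readers never contend on shared state.
type nodeStats struct {
	rowsRead  atomic.Uint64
	bytesRead atomic.Uint64
}

func main() {
	const nodes = 4 // number of vmstorage nodes known to vmselect
	stats := make([]nodeStats, nodes)

	var wg sync.WaitGroup
	for i := 0; i < nodes; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Simulate a goroutine reading query results from node i.
			for j := 0; j < 1000; j++ {
				stats[i].rowsRead.Add(1)
				stats[i].bytesRead.Add(128)
			}
		}(i)
	}
	wg.Wait()

	// Aggregate the per-node counters once all reads have finished.
	var rows, bytes uint64
	for i := range stats {
		rows += stats[i].rowsRead.Load()
		bytes += stats[i].bytesRead.Load()
	}
	fmt.Printf("rows=%d, bytes=%d\n", rows, bytes)
}
```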