* vmselect: limit the `end` query arg to at most 2 days in the future
The change is applied only to service handlers like `/labels` or `/series`
and limits the `end` param to a max value of now() + 2 days. The same limit
is applied to ingested data, so there is no reason to allow requesting data
further in the future than that.
The change is also needed for corner cases like https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2669
where a too high `end` value triggers an inefficient global index search.
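A minimal Go sketch of the clamping idea described above (illustrative only; the function name, constant, and millisecond timestamps are assumptions, not the actual vmselect code):

```go
package main

import (
	"fmt"
	"time"
)

// maxEndOffset mirrors the limit applied on ingestion: data more than
// two days in the future is not accepted anyway.
const maxEndOffset = 2 * 24 * time.Hour

// clampEnd limits the `end` timestamp (in milliseconds) to now() + 2 days.
func clampEnd(end int64) int64 {
	maxEnd := time.Now().Add(maxEndOffset).UnixMilli()
	if end > maxEnd {
		return maxEnd
	}
	return end
}

func main() {
	farFuture := time.Now().Add(30 * 24 * time.Hour).UnixMilli()
	fmt.Println(clampEnd(farFuture)) // clamped to now() + 2 days
}
```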
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* docs/CHANGELOG.md: document the bugfix
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Suggest `-search.maxStalenessInterval=10s` instead of `-search.maxStalenessInterval=1ms`,
since `1ms` would result in empty graphs in most cases: the interval between data points
on the graph is usually much higher than 1ms. For example, if the graph shows a time range of one hour
and contains 1000 points, then the interval between points on the graph equals
3600s/1000 = 3.6 seconds.
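The same arithmetic as a tiny illustrative Go snippet (not VictoriaMetrics code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Interval between points for a 1h graph drawn with 1000 points.
	timeRange := time.Hour
	points := 1000
	step := timeRange / time.Duration(points)
	fmt.Println(step) // 3.6s, i.e. far larger than a 1ms staleness interval
}
```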
* feat: make the datepicker default to the last 30 minutes
* fix: correct the spinner shown while loading data
* feat: change legend style
* app/vmselect: `make vmui-update`
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
vmalert: support `limit` param in groups definition
The `limit` param limits the number of time series samples produced by a single rule
during execution.
On reaching the limit, the rule returns an error.
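A hypothetical Go sketch of such a limit check (names like `applyLimit` are illustrative, not vmalert's actual API):

```go
package main

import "fmt"

type sample struct {
	value float64
}

// applyLimit returns an error when a rule execution produced more samples
// than the group's configured `limit`. A limit <= 0 means "no limit".
func applyLimit(samples []sample, limit int) ([]sample, error) {
	if limit > 0 && len(samples) > limit {
		return nil, fmt.Errorf("rule produced %d samples, which exceeds the limit of %d", len(samples), limit)
	}
	return samples, nil
}

func main() {
	s := make([]sample, 5)
	if _, err := applyLimit(s, 3); err != nil {
		fmt.Println(err)
	}
}
```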
Signed-off-by: lihaowei <haoweili35@gmail.com>
- Remove unused js bloatware from the /targets page. This reduces the binary size by more than 100KB
- Add a /service-discovery page for API compatibility with Prometheus
- Properly load bootstrap.min.css from /prometheus/targets
- Serve static content for the /targets page from app/vminsert instead of app/vmselect, because the /targets page is served from there
The default size of the `indexdb/tagFilters` cache can now be overridden via
the `storage.cacheSizeIndexDBTagFilters` flag.
Please be careful when changing the default size, since it may
lead to inefficient vmstorage operation or out-of-memory exceptions.
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2663
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
* promrelabel: add support for `lowercase` and `uppercase` relabeling actions
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2664
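A simplified Go sketch of what these actions do: the joined source label values are case-converted and written into the target label. This is illustrative, not the promrelabel implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// applyCaseAction joins the source label values with the separator and
// converts the result according to the relabeling action.
func applyCaseAction(action string, sourceValues []string, separator string) string {
	v := strings.Join(sourceValues, separator)
	switch action {
	case "lowercase":
		return strings.ToLower(v)
	case "uppercase":
		return strings.ToUpper(v)
	}
	return v
}

func main() {
	fmt.Println(applyCaseAction("lowercase", []string{"FooBar"}, ";")) // foobar
	fmt.Println(applyCaseAction("uppercase", []string{"FooBar"}, ";")) // FOOBAR
}
```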
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* lib/storage: make golangci-lint happy
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Nikolay <nik@victoriametrics.com>
- data modification
- more applications for gauges
- recommendations for instrumenting an app with metrics
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* lib/promscrape/discovery/kubernetes: properly update discovered scrape works
Previously, added or updated scrape works could override previously
discovered ones.
This happened because swosByKey may contain only a small subset of Kubernetes
objects with their labels.
It happens for the objectsUpdated and objectsAdded maps, which include only the changed elements.
* Properly calculate vm_promscrape_discovery_kubernetes_scrape_works
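A sketch of the merge idea (types and names here are hypothetical): changed entries are applied on top of the previously discovered scrape works instead of replacing the whole map:

```go
package main

import "fmt"

type scrapeWork struct {
	job string
}

// mergeScrapeWorks applies only the changed keys on top of the previously
// discovered ones, so untouched Kubernetes objects keep their scrape works.
func mergeScrapeWorks(existing, updated map[string][]scrapeWork) map[string][]scrapeWork {
	for k, sws := range updated {
		existing[k] = sws
	}
	return existing
}

func main() {
	existing := map[string][]scrapeWork{"pod/a": {{job: "a"}}}
	updated := map[string][]scrapeWork{"pod/b": {{job: "b"}}}
	fmt.Println(len(mergeScrapeWorks(existing, updated))) // 2: pod/a is preserved
}
```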
Co-authored-by: f41gh7 <nik@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
docs: update docs for beginners
The QuickStart page was updated with more relevant information.
A Key Concepts page was added to cover the basics of VictoriaMetrics.
Unexpectedly, Grafana makes an extra request to the `/rules`
handler in addition to `/api/v1/rules` calls in the alerts UI.
This happens only for Grafana versions older than 8.5.*.
Apparently, this is related to the support of other monitoring
systems.
Prometheus responds to such requests with `text/html` content
for the `/rules` UI page. Actually, returning just a blank page with
SC=200 works as well.
Returning the actual response of `/api/v1/rules`
results in an error in Grafana, since it expects YAML (?) in the response.
So we add a placeholder to `vmalert`.
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2583
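A minimal sketch of such a placeholder handler, returning a blank `text/html` page with SC=200; the route wiring and port below are illustrative, not vmalert's actual setup:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/rules", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		w.WriteHeader(http.StatusOK) // a blank page with SC=200 is enough for Grafana
	})
	log.Fatal(http.ListenAndServe(":8880", nil))
}
```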
Signed-off-by: hagen1778 <roman@victoriametrics.com>
For the Liquid text processor, double braces `{{` `}}`
are special chars used for templating.
Since we use them in some of our docs for a different purpose,
we must escape them to avoid syntax errors from Liquid.
For escaping curly braces we use the built-in `raw` tag, which
encloses sections of text via `{% raw %}` and `{% endraw %}`.
This approach prevents Liquid syntax errors and makes the rendering correct.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* deployment/docker: pass `-buildvcs=false` to `go build` for production builds
This should resolve the `error obtaining VCS status: exit status 128` error
when the environment contains an incorrect version of git or has incorrect access rights
to the directory with the VictoriaMetrics source code.
See the following links for additional info:
- https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2508#issuecomment-1117126702
- https://github.com/google/ko/issues/672
- https://github.com/golang/go/issues/49004
* lib/netutil: limit the number of concurrently established connections when calling ConnPool.Get()
This should reduce potential spikes in the number of established connections in the following cases:
- when the connection establishing procedure becomes temporarily slow
- after a temporary spike in the rate of ConnPool.Get() calls
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2552
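A sketch of one way to bound concurrent connection establishment, using a buffered channel as a semaphore (the ConnPool fields, constructor, and address below are hypothetical, not the actual lib/netutil code):

```go
package main

import "net"

// ConnPool here is a hypothetical stand-in for lib/netutil's pool.
type ConnPool struct {
	addr        string
	concurrency chan struct{} // semaphore limiting concurrent dials
}

func NewConnPool(addr string, maxConcurrentDials int) *ConnPool {
	return &ConnPool{
		addr:        addr,
		concurrency: make(chan struct{}, maxConcurrentDials),
	}
}

// Get establishes a connection while allowing at most maxConcurrentDials
// dials to be in flight, so a temporarily slow dial procedure or a spike
// in Get() calls cannot create an unbounded number of half-established
// connections.
func (cp *ConnPool) Get() (net.Conn, error) {
	cp.concurrency <- struct{}{}        // acquire
	defer func() { <-cp.concurrency }() // release
	return net.Dial("tcp", cp.addr)
}

func main() {
	cp := NewConnPool("vmstorage:8401", 2)
	if c, err := cp.Get(); err == nil {
		c.Close()
	}
}
```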
* docs/CHANGELOG.md: document c8af625bcc
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1322#issuecomment-1120276146
* docs/Cluster-VictoriaMetrics.md: typo fix: `by by` -> `by`
* docs: add `resource usage limits` docs, which describe fine-grained tuning for various resource usage limits
* docs/Cluster-VictoriaMetrics.md: the `/api/v1/label/.../values` query can consume CPU and RAM on both vmstorage and vmselect
* Update root Readme and root vmagent readme
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>