Make it possible to migrate time series while restoring the
original time series names that were previously written from Prometheus
to InfluxDB v1 via remote_write.
Fixes: https://github.com/VictoriaMetrics/vmctl/issues/8
Do not assume the db label is the last one, and make sure
we are not skipping it and everything after it.
Breaking out of the loop would leave the following labels empty.
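A minimal sketch of the intended loop behavior (the `Label` type and helper below are hypothetical placeholders, not vmctl's actual code):

```go
package main

import "fmt"

// Label is a hypothetical placeholder type for this illustration.
type Label struct {
	Name, Value string
}

// appendLabels copies every label: the db label may appear at any position,
// so the loop must not break early - breaking would leave all following
// labels empty.
func appendLabels(dst, src []Label) []Label {
	for _, l := range src {
		dst = append(dst, l)
	}
	return dst
}

func main() {
	src := []Label{{"db", "telegraf"}, {"host", "node-1"}}
	fmt.Println(appendLabels(nil, src))
}
```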
* {lib/promscrape,app/vmagent}: adds sigv4 support for vmagent remoteWrite
moves AWS-related code from lib/promscrape into a separate lib
it allows vmagent to write data to AWS Managed Prometheus (Cortex)
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1287
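For reference, a hedged sketch of SigV4-signing a remote-write request with aws-sdk-go v1 (the service name, endpoint and credential source below are illustrative assumptions, not necessarily what vmagent does):

```go
package main

import (
	"bytes"
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials"
	v4 "github.com/aws/aws-sdk-go/aws/signer/v4"
)

// signRemoteWriteRequest signs an HTTP request with AWS SigV4 so it can be
// accepted by AWS Managed Prometheus. Names and values are illustrative.
func signRemoteWriteRequest(req *http.Request, body []byte, region string) error {
	creds := credentials.NewEnvCredentials()
	signer := v4.NewSigner(creds)
	// "aps" is the service name used by Amazon Managed Service for Prometheus.
	_, err := signer.Sign(req, bytes.NewReader(body), "aps", region, time.Now().UTC())
	return err
}

func main() {
	// Placeholder endpoint; the workspace ID would come from configuration.
	req, _ := http.NewRequest("POST",
		"https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-placeholder/api/v1/remote_write", nil)
	_ = signRemoteWriteRequest(req, nil, "us-east-1")
}
```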
* Apply suggestions from code review
* wip
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
* app/vmselect: adds proxy for rules and alerts API
It allows visualizing rules in Grafana
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1739
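A rough sketch of such proxying with the standard library (the vmalert address and routes below are assumptions for illustration only, not vmselect's actual wiring):

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical vmalert address; in practice it would come from a flag.
	target, _ := url.Parse("http://vmalert:8880")
	proxy := httputil.NewSingleHostReverseProxy(target)

	// Forward Prometheus-compatible rules/alerts API calls to vmalert,
	// so Grafana can visualize rules via vmselect.
	http.Handle("/api/v1/rules", proxy)
	http.Handle("/api/v1/alerts", proxy)
	http.ListenAndServe(":8481", nil)
}
```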
* Update app/vmselect/main.go
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
vmctl: fix vmctl blocking on process interrupt
This change prevents vmctl from blocking indefinitely when it
receives an interrupt signal. The update touches all
import modes and is supposed to improve the tool's reliability.
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2491
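A minimal sketch of the non-blocking pattern (not the actual vmctl code): cancel the work via a context when an interrupt arrives instead of blocking on a channel that may never be drained:

```go
package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Cancel the context on SIGINT/SIGTERM instead of blocking forever.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	for {
		select {
		case <-ctx.Done():
			log.Println("interrupt received, stopping import")
			return
		case <-time.After(time.Second):
			// importNextBatch(ctx) // hypothetical import step
		}
	}
}
```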
This adds a metric for the rate limit.
The limit is present as a flag currently:
`flag{name="remoteWrite.rateLimit", value="500000", is_set="true"} 1`
We are running many instances of vmagent, and extracting the limit value from the flag makes writing alerts harder than it needs to be.
With this change it should be easier to monitor how close to the limit we are.
`((100/vmagent_remotewrite_rate_limit{account="account"})*sum (rate(vmagent_remotewrite_conn_bytes_written_total{account="account"}))) and ON (account) flag{name="remoteWrite.rateLimit"} == 1`
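A sketch of how the limit could be exposed as a gauge with github.com/VictoriaMetrics/metrics; the exact wiring inside vmagent may differ:

```go
package main

import (
	"flag"

	"github.com/VictoriaMetrics/metrics"
)

var rateLimit = flag.Int64("remoteWrite.rateLimit", 0, "Optional rate limit in bytes per second")

func init() {
	// Expose the configured limit as a metric, so alerts can compare the actual
	// write rate against it without parsing the `flag` series.
	metrics.NewGauge(`vmagent_remotewrite_rate_limit`, func() float64 {
		return float64(*rateLimit)
	})
}

func main() {
	flag.Parse()
	// In the real app the metric is served via the /metrics endpoint.
}
```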
The `ValidateTemplates` function, used on vmalert startup,
is supposed to check whether the templates and functions
used in loaded rules are correct. The function was parsing
and executing loaded templates.
However, rules may contain functions which can't be executed
without values (label values or query results), like `slice`.
Because of this, validation of the completely valid expression
`{{ slice $labels.job 9 }}` would fail, since `$labels.job`
is empty during validation.
This PR updates the `ValidateTemplates` function to only parse
templates without executing them.
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2514
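The gist of the fix as a standalone sketch using text/template (helper names are illustrative, not vmalert's code): parsing catches syntax errors without needing any label values.

```go
package main

import (
	"fmt"
	"text/template"
)

// validateTemplate parses the template without executing it, so expressions
// like `{{ slice $labels.job 9 }}` pass validation even though $labels
// has no values at startup. Like Prometheus, $labels is declared in a header
// so the parser accepts references to it.
func validateTemplate(tpl string) error {
	const header = `{{ $labels := .Labels }}`
	_, err := template.New("").Parse(header + tpl)
	return err
}

func main() {
	fmt.Println(validateTemplate(`{{ slice $labels.job 9 }}`))
}
```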
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* lib/{storage,flagutil} - Add option for snapshot autoremoval
- add a Prometheus-like duration command-line flag (see the sketch below)
- add option to delete stale snapshots
- update duration.go flag to re-use own code
* wip
* lib/flagutil: re-use Duration.Set() call in NewDuration
* wip
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
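The Prometheus-like duration flag mentioned above could look roughly like this as a flag.Value (an illustration of the idea, not the lib/flagutil implementation; the flag name and the "d" suffix handling are assumptions):

```go
package main

import (
	"flag"
	"fmt"
	"strconv"
	"strings"
	"time"
)

// promDuration accepts Go durations plus a "d" suffix for days.
type promDuration struct {
	d time.Duration
}

func (pd *promDuration) String() string { return pd.d.String() }

func (pd *promDuration) Set(value string) error {
	if strings.HasSuffix(value, "d") {
		days, err := strconv.ParseFloat(strings.TrimSuffix(value, "d"), 64)
		if err != nil {
			return err
		}
		pd.d = time.Duration(days * 24 * float64(time.Hour))
		return nil
	}
	d, err := time.ParseDuration(value)
	if err != nil {
		return err
	}
	pd.d = d
	return nil
}

func main() {
	var maxAge promDuration
	flag.Var(&maxAge, "snapshotsMaxAge", "Delete snapshots older than this duration, e.g. 7d (hypothetical flag name)")
	flag.Parse()
	fmt.Println("snapshots older than", maxAge.d, "will be removed")
}
```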
add progress bars to the VM importer
The new progress bars are supposed to display the processing speed of each
VM importer worker. This info should help to identify whether there is a bottleneck
on the VM side during the import process, without waiting for it to finish.
The new progress bars can be disabled by passing the `vm-disable-progress-bar` flag.
Plotting multiple progress bars requires using the experimental progress bar pool
from github.com/cheggaaa/pb/v3. Switching to the progress bar pool required changes
in all import modes.
The openTSDB mode wasn't changed because its implementation implies an individual
progress bar per series, which makes using the pool impossible.
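For context, a tiny example of the basic pb/v3 bar API (the experimental pool mentioned above is not shown; this is not the importer's actual code):

```go
package main

import (
	"time"

	"github.com/cheggaaa/pb/v3"
)

func main() {
	const total = 1000
	// One bar per importer worker would be created and plotted via the pool;
	// a single bar illustrates the basic API here.
	bar := pb.StartNew(total)
	for i := 0; i < total; i++ {
		time.Sleep(time.Millisecond) // simulate sending a batch to VM
		bar.Increment()
	}
	bar.Finish()
}
```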
Signed-off-by: dmitryk-dk <kozlovdmitriyy@gmail.com>
Co-authored-by: hagen1778 <roman@victoriametrics.com>
* Export "null" in jsonl instead of NaN
The NaN appeared because of staleness markers that were added for compatibility. I think it's better to use JSON `null`, implemented here.
Maybe it also makes sense to add a flag like `?skip-staleness-markers=true` to `/export`, to skip nulls entirely?
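A sketch of the idea (not the actual quicktemplate code): values are written manually, and NaN staleness markers become `null`:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

// formatValues renders a JSON array of samples, replacing NaN
// (e.g. Prometheus staleness markers) with null.
func formatValues(values []float64) string {
	var b strings.Builder
	b.WriteByte('[')
	for i, v := range values {
		if i > 0 {
			b.WriteByte(',')
		}
		if math.IsNaN(v) {
			b.WriteString("null")
			continue
		}
		b.WriteString(strconv.FormatFloat(v, 'g', -1, 64))
	}
	b.WriteByte(']')
	return b.String()
}

func main() {
	fmt.Println(formatValues([]float64{1.5, math.NaN(), 42}))
}
```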
* Update app/vmselect/prometheus/export.qtpl
* app/vmselect/prometheus/export.qtpl.go: `make quicktemplate-gen`
* docs/CHANGELOG.md: document the change
Co-authored-by: Aliaksandr Valialkin <valyala@victoriametrics.com>
* app/vmselect: adds API /api/v1/status/buildinfo
it should fix a compatibility error with the Grafana 8.5 Prometheus datasource
https://github.com/grafana/grafana/pull/46771
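A hedged sketch of a Prometheus-compatible buildinfo handler (the returned fields and values are placeholders; vmselect's actual response may differ):

```go
package main

import (
	"encoding/json"
	"net/http"
)

func buildInfoHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	// Shape modeled after Prometheus' /api/v1/status/buildinfo response;
	// the version value here is a placeholder.
	json.NewEncoder(w).Encode(map[string]interface{}{
		"status": "success",
		"data": map[string]string{
			"version": "2.24.0",
		},
	})
}

func main() {
	http.HandleFunc("/api/v1/status/buildinfo", buildInfoHandler)
	http.ListenAndServe(":8481", nil)
}
```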
* Update main.go
This should prevent `duplicate time series` errors when executing the following query:
kube_pod_container_resource_requests{resource="cpu"} * on (namespace,pod) group_left() (kube_pod_status_phase{phase=~"Pending|Running"}==1)
where `kube_pod_status_phase{phase=~"Pending|Running"}==1` filters out duplicate time series
Before, relabeling for notifiers configured via file was supported
only for target labels discovered via SD.
With this change, a new config field `alert_relabel_configs` is introduced
for applying relabeling to the labels of sent alerts.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
To improve compatibility with Prometheus alerting the order of
templates processing has changed.
Before, vmalert did all labels processing beforehand. It meant
all extra labels (such as `alertname`, `alertgroup` or rule labels)
were available in templating. All collisions were resolved in favour
of extra labels.
In Prometheus, only labels from the received metric are available in
templating, so no collisions are possible.
This change makes vmalert's behaviour similar to Prometheus.
For example, consider an alerting rule which is triggered by a time series
with an `alertname` label. In vmalert, this label would be overridden
by the alerting rule's name everywhere: in alert labels, in annotations, etc.
In Prometheus, it would be overridden for the alert's labels only, but in annotations
the original label value would remain available.
See more details here https://github.com/prometheus/compliance/issues/80
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The following actions are taken:
- Increase the TLS handshake timeout from 5 seconds to 10 seconds
- Increase the dial timeout from 5 seconds to 30 seconds
- Specify DialContext instead of Dial in http.Transport. This allows properly handling
the Context arg when dialing the remote storage
Updates https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1699
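The resulting transport configuration could look roughly like this (timeouts taken from the list above; everything else is illustrative):

```go
package main

import (
	"net"
	"net/http"
	"time"
)

func newRemoteWriteTransport() *http.Transport {
	dialer := &net.Dialer{
		Timeout: 30 * time.Second, // was 5 seconds
	}
	return &http.Transport{
		// DialContext (instead of Dial) lets the request context
		// cancel an in-flight dial to the remote storage.
		DialContext:         dialer.DialContext,
		TLSHandshakeTimeout: 10 * time.Second, // was 5 seconds
	}
}

func main() {
	_ = newRemoteWriteTransport()
}
```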
* lib/protoparser: changes ParseStream for native format
uses a reader instead of http.Request
updates method usage in app/vmagent accordingly
* app/vmctl: add verify-block subcommand
it allows checking data blocks exported from VictoriaMetrics in native format
https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2362
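A rough sketch of such a subcommand with urfave/cli (the verification step and argument handling are placeholders, not vmctl's actual code):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Name: "vmctl",
		Commands: []*cli.Command{
			{
				Name:  "verify-block",
				Usage: "Verifies a block exported in native format",
				Action: func(c *cli.Context) error {
					path := c.Args().First()
					// verifyNativeBlock(path) // hypothetical verification step
					fmt.Println("verifying", path)
					return nil
				},
			},
		},
	}
	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}
```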
Update app/vmctl/README.md
Co-authored-by: Roman Khavronenko <roman@victoriametrics.com>
The new flag `datasource.disableKeepAlive` allows disabling keepalive
connections. This may be useful when there are multiple datasource
replicas (e.g. vmselects) behind an HTTP balancer, to avoid uneven
load distribution caused by long-lived connections.
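In net/http terms the flag boils down to the following (the flag wiring here is illustrative):

```go
package main

import (
	"flag"
	"net/http"
)

var disableKeepAlive = flag.Bool("datasource.disableKeepAlive", false,
	"Whether to disable long-lived connections to the datasource")

func newDatasourceClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			// With keep-alive disabled every request opens a fresh connection,
			// so an HTTP balancer can spread load evenly across replicas.
			DisableKeepAlives: *disableKeepAlive,
		},
	}
}

func main() {
	flag.Parse()
	_ = newDatasourceClient()
}
```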
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The Executor recently gained a field for storing previously sent series.
Since the same executor object can be used in multiple goroutines,
access to this field should be serialized.
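A minimal illustration of the synchronization (type and field names are hypothetical, not vmalert's exact code):

```go
package main

import (
	"fmt"
	"sync"
)

// executor is shared between goroutines, so access to the map of
// previously sent series is serialized with a mutex.
type executor struct {
	mu         sync.Mutex
	sentSeries map[string]struct{}
}

func (e *executor) markSent(key string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.sentSeries[key] = struct{}{}
}

func (e *executor) wasSent(key string) bool {
	e.mu.Lock()
	defer e.mu.Unlock()
	_, ok := e.sentSeries[key]
	return ok
}

func main() {
	e := &executor{sentSeries: make(map[string]struct{})}
	e.markSent(`ALERTS{alertname="HighLatency"}`)
	fmt.Println(e.wasSent(`ALERTS{alertname="HighLatency"}`))
}
```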
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: split alert's `Start` field into `ActiveAt` and `Start`
The `ActiveAt` field identifies when an alert becomes active for rules
with `for > 0`. Previously, this value was stored in the `Start` field.
The `Start` field now identifies the moment the alert became `FIRING`.
The split is needed in order to distinguish these two moments
in the API responses for alerts.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: support specific moment of time for rules evaluation
The Querier interface was extended to accept a new argument
used as the timestamp at which the evaluation should be made.
It is needed to align rule execution times within the group.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: mark disappeared series as stale
Series generated by alerting rules and sent to remote write
will now be marked as stale if they disappear on the next
evaluation. This makes the ALERTS and ALERTS_FOR_TIME series
more precise.
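For reference, a sketch of producing a staleness marker for a disappeared series (the helper is illustrative; the bit pattern is the one Prometheus uses for its stale NaN):

```go
package main

import (
	"fmt"
	"math"
)

// staleNaN is the special NaN bit pattern Prometheus uses as a staleness marker.
var staleNaN = math.Float64frombits(0x7ff0000000000002)

// staleSample could be appended for every series that was sent on the
// previous evaluation but is missing from the current one.
func staleSample() float64 {
	return staleNaN
}

func main() {
	fmt.Printf("stale marker bits: %#x\n", math.Float64bits(staleSample()))
}
```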
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* wip
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: evaluate rules at fixed timestamp
Before, the time at which rules were evaluated was calculated
right before rule execution. The change makes sure
that the timestamp is calculated only once per evaluation round
and all rules use the same timestamp.
It also updates the logic for resending notifications
for already resolved alerts.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: allow overriding the `alertname` label value if it is present in the response
Previously, `alertname` was always equal to the Alerting Rule name. Now,
its value can be overridden if the series in the response contains a different value
for this label.
The change is needed for improving compatibility with Prometheus.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: align rules evaluation in time
Now, the evaluation timestamp for rules is calculated as if
there were no delay in rule evaluation. It means that
rules will be evaluated at fixed timestamps + group_interval.
This provides more consistent evaluation results and
improves compatibility with Prometheus.
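A small illustration of the alignment idea (purely illustrative, not vmalert's scheduler code): truncating to the group interval yields the same evaluation timestamp regardless of scheduling delay.

```go
package main

import (
	"fmt"
	"time"
)

// alignedEvalTime returns the evaluation timestamp as if there were no
// delay: the current time truncated to the group's evaluation interval.
func alignedEvalTime(now time.Time, interval time.Duration) time.Time {
	return now.Truncate(interval)
}

func main() {
	fmt.Println(alignedEvalTime(time.Now(), time.Minute))
}
```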
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: add metric for missed iterations
The new metric `vmalert_iteration_missed_total` will show
whether a rules evaluation round was missed.
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: reduce delay before the initial rule evaluation in group
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: rollback alertname override
According to the spec:
```
The alert name from the alerting rule (HighRequestLatency from the example above) MUST be added to the labels of the alert with the label name as alertname. It MUST override any existing alertname label.
```
https://github.com/prometheus/compliance/blob/main/alert_generator/specification.md#step-3
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: throw err immediately on dedup detection
```
The execution of an alerting rule MUST error out immediately and MUST NOT send any alerts
or add samples to samples receiver if there is more than one alert with the same labels
```
https://github.com/prometheus/compliance/blob/main/alert_generator/specification.md#step-4
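The duplicate check could look roughly like this (label serialization here is a simplification, not vmalert's actual code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// checkDuplicates returns an error if two alerts share the same label set,
// so rule execution can error out before sending anything.
func checkDuplicates(alerts []map[string]string) error {
	seen := make(map[string]struct{}, len(alerts))
	for _, labels := range alerts {
		keys := make([]string, 0, len(labels))
		for k := range labels {
			keys = append(keys, k)
		}
		sort.Strings(keys)
		var b strings.Builder
		for _, k := range keys {
			fmt.Fprintf(&b, "%s=%q,", k, labels[k])
		}
		key := b.String()
		if _, ok := seen[key]; ok {
			return fmt.Errorf("duplicate alert with labels {%s}", key)
		}
		seen[key] = struct{}{}
	}
	return nil
}

func main() {
	err := checkDuplicates([]map[string]string{
		{"alertname": "HighLatency", "job": "api"},
		{"alertname": "HighLatency", "job": "api"},
	})
	fmt.Println(err)
}
```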
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: cleanup
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* vmalert: use strings builder to reduce allocs
Signed-off-by: hagen1778 <roman@victoriametrics.com>