Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

This commit is contained in:
Aliaksandr Valialkin 2020-11-16 21:06:21 +02:00
commit 36b748dfc7
130 changed files with 4353 additions and 1170 deletions

View file

@ -1,172 +0,0 @@
# CHANGELOG
# tip
* FEATURE: vmagent: add `-promscrape.dropOriginalLabels` command-line option, which can be used for reducing memory usage when scraping a big number of targets.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-724308361 for details.
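A minimal usage sketch for the new flag (the paths and addresses below are illustrative):
```bash
# Drop the original (pre-relabeling) labels for discovered targets
# in order to reduce memory usage when scraping many targets.
/path/to/vmagent \
  -promscrape.config=/path/to/prometheus.yml \
  -promscrape.dropOriginalLabels \
  -remoteWrite.url=http://victoriametrics:8428/api/v1/write
```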
# [v1.46.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.46.0)
* FEATURE: optimize requests to `/api/v1/labels` and `/api/v1/label/<name>/values` when `start` and `end` args are set.
* FEATURE: reduce memory usage when a query touches a big number of time series.
* FEATURE: vmagent: reduce memory usage when `kubernetes_sd_config` discovers a big number of scrape targets (e.g. hundreds of thousands) and the majority of these targets (99%)
are dropped during relabeling. Previously labels for all the dropped targets were displayed on the `/api/v1/targets` page. Now only up to `-promscrape.maxDroppedTargets` such
targets are displayed. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/878 for details.
* FEATURE: vmagent: reduce memory usage when scraping a big number of targets with a big number of temporary labels starting with `__`.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825
* FEATURE: vmagent: add `/ready` HTTP endpoint, which returns 200 OK status code when all the service discovery configs have been initialized.
This may be useful during rolling upgrades. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/875. A readiness-probe sketch is shown at the end of this release's notes.
* BUGFIX: vmagent: eliminate data race when the `-promscrape.streamParse` command-line flag is set. Previously this mode could result in scraped metrics with garbage labels.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-723198247 for details.
* BUGFIX: properly calculate `topk_*` and `bottomk_*` functions from [MetricsQL](https://victoriametrics.github.io/MetricsQL.html) for time series with gaps.
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/883
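A readiness-probe sketch for the `/ready` endpoint added in this release (the vmagent address is illustrative):
```bash
# Prints 200 once all the scrape configs have been initialized,
# and 425 (Too Early) while initialization is still in progress.
curl -s -o /dev/null -w '%{http_code}\n' http://vmagent-host:8429/ready
```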
# [v1.45.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.45.0)
* FEATURE: allow setting `-retentionPeriod` smaller than one month. E.g. `-retentionPeriod=3d`, `-retentionPeriod=2w`, etc. are supported now.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/173
* FEATURE: optimize more cases according to https://utcc.utoronto.ca/~cks/space/blog/sysadmin/PrometheusLabelNonOptimization . Now the following cases are optimized too:
* `rollup_func(foo{filters}[d]) op bar` -> `rollup_func(foo{filters}[d]) op bar{filters}`
* `transform_func(foo{filters}) op bar` -> `transform_func(foo{filters}) op bar{filters}`
* `num_or_scalar op foo{filters} op bar` -> `num_or_scalar op foo{filters} op bar{filters}`
* FEATURE: improve time series search for queries with multiple label filters. E.g. `foo{label1="value", label2=~"regexp"}`.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/781
* FEATURE: vmagent: add `stream parse` mode. This mode allows reducing memory usage when individual scrape targets expose tens of millions of metrics.
For example, when scraping Prometheus in [federation](https://prometheus.io/docs/prometheus/latest/federation/) mode.
See `-promscrape.streamParse` command-line option and `stream_parse: true` config option for `scrape_config` section in `-promscrape.config`.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825 and [troubleshooting docs for vmagent](https://victoriametrics.github.io/vmagent.html#troubleshooting).
* FEATURE: vmalert: add `-dryRun` command-line option for validating the provided config files without the need to start `vmalert` service.
* FEATURE: accept an optional third argument of string type at `topk_*` and `bottomk_*` functions. This is the label name for an additional time series returned with the sum of the time series outside the top/bottom K. See [MetricsQL docs](https://victoriametrics.github.io/MetricsQL.html) for more details.
* FEATURE: vmagent: expose `/api/v1/targets` page according to [the corresponding Prometheus API](https://prometheus.io/docs/prometheus/latest/querying/api/#targets).
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/643
* BUGFIX: vmagent: properly handle OpenStack endpoint ending with `v3.0` such as `https://ostack.example.com:5000/v3.0`
in the same way as Prometheus does. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/728#issuecomment-709914803
* BUGFIX: drop trailing data points for time series with a single raw sample. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/748
* BUGFIX: do not drop trailing data points for instant queries to `/api/v1/query`. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/845
* BUGFIX: vmbackup: fix panic when `-origin` isn't specified. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/856
* BUGFIX: vmalert: skip automatically added labels on alerts restore. Label `alertgroup` was introduced in [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/611)
and automatically added to generated time series. By mistake, this new label wasn't correctly purged on the restore event and affected the alert's ID uniqueness.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/870
* BUGFIX: vmagent: fix panic when formatting the scrape error body. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/864
* BUGFIX: vmagent: add missing leading slash to the metrics path like Prometheus does. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/835
* BUGFIX: vmagent: drop the packet if remote storage returns 4xx status code. This makes the behaviour consistent with Prometheus.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/873
* BUGFIX: vmagent: properly handle 301 redirects. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/869
# [v1.44.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.44.0)
* FEATURE: automatically add missing label filters to binary operands as described at https://utcc.utoronto.ca/~cks/space/blog/sysadmin/PrometheusLabelNonOptimization .
This should improve performance for queries with missing label filters in binary operands. For example, the following query should work faster now, because it shouldn't
fetch and discard time series for `node_filesystem_files_free` metric without matching labels for the left side of the expression:
```
node_filesystem_files{ host="$host", mountpoint="/" } - node_filesystem_files_free
```
* FEATURE: vmagent: add Docker Swarm service discovery (aka [dockerswarm_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config)).
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/656
* FEATURE: add ability to export data in CSV format. See [these docs](https://victoriametrics.github.io/#how-to-export-csv-data) for details.
* FEATURE: vmagent: add `-promscrape.suppressDuplicateScrapeTargetErrors` command-line flag for suppressing `duplicate scrape target` errors.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/651 and https://victoriametrics.github.io/vmagent.html#troubleshooting .
* FEATURE: vmagent: show original labels before relabeling is applied on `duplicate scrape target` errors. This should simplify debugging for incorrect relabeling.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/651
* FEATURE: vmagent: `/targets` page now accepts optional `show_original_labels=1` query arg for displaying original labels for each target before relabeling is applied.
This should simplify debugging for target relabeling configs. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/651
* FEATURE: add `-finalMergeDelay` command-line flag for configuring the delay before final merge for per-month partitions.
The final merge is started after no new data is ingested into per-month partition during `-finalMergeDelay`.
* FEATURE: add `vm_rows_added_to_storage_total` metric, which shows the total number of rows added to storage since app start.
The `sum(rate(vm_rows_added_to_storage_total))` can be smaller than `sum(rate(vm_rows_inserted_total))` if certain metrics are dropped
due to [relabeling](https://victoriametrics.github.io/#relabeling). The `sum(rate(vm_rows_added_to_storage_total))` can be bigger
than `sum(rate(vm_rows_inserted_total))` if [replication](https://victoriametrics.github.io/Cluster-VictoriaMetrics.html#replication-and-data-safety) is enabled.
* FEATURE: keep metric name after applying [MetricsQL](https://victoriametrics.github.io/MetricsQL.html) functions, which don't change time series meaning.
The list of such functions:
* `keep_last_value`
* `keep_next_value`
* `interpolate`
* `running_min`
* `running_max`
* `running_avg`
* `range_min`
* `range_max`
* `range_avg`
* `range_first`
* `range_last`
* `range_quantile`
* `smooth_exponential`
* `ceil`
* `floor`
* `round`
* `clamp_min`
* `clamp_max`
* `max_over_time`
* `min_over_time`
* `avg_over_time`
* `quantile_over_time`
* `mode_over_time`
* `geomean_over_time`
* `holt_winters`
* `predict_linear`
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/674
* BUGFIX: properly handle stale time series after K8S deployment. Previously such time series could be double-counted.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/748
* BUGFIX: return a single time series at max from `absent()` function like Prometheus does.
* BUGFIX: vmalert: accept days, weeks and years in `for: ` part of config like Prometheus does. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/817
* BUGFIX: fix `mode_over_time(m[d])` calculations. Previously the function could return incorrect results.
# [v1.43.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.43.0)
* FEATURE: reduce CPU usage for repeated queries over a sliding time window when no new time series are added to the database.
Typical use cases: repeated evaluation of alerting rules in [vmalert](https://victoriametrics.github.io/vmalert.html) or dashboard auto-refresh in Grafana.
* FEATURE: vmagent: add OpenStack service discovery aka [openstack_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config).
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/728 .
* FEATURE: vmalert: make `-maxIdleConnections` configurable for datasource HTTP client. This option can be used for minimizing connection churn.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/795 .
* FEATURE: add `-influx.maxLineSize` command-line flag for configuring the maximum size for a single Influx line during parsing.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/807
* BUGFIX: properly handle `inf` values during [background merge of LSM parts](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).
Previously `Inf` values could result in `NaN` values for adjacent samples in time series. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/805 .
* BUGFIX: fill gaps on graphs for `range_*` and `running_*` functions. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/806 .
* BUGFIX: make a copy of the label with the new name during relabeling with `action: labelmap` in the same way as Prometheus does.
Previously the original label name was replaced. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/812 .
* BUGFIX: support parsing floating-point timestamps like Graphite Carbon does. Such timestamps are truncated to seconds.
# [v1.42.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.42.0)
* FEATURE: use all the available CPU cores when accepting data via a single TCP connection
for [all the supported protocols](https://victoriametrics.github.io/#how-to-import-time-series-data).
Previously data ingested via a single TCP connection could use only a single CPU core. This could limit data ingestion performance.
The main benefit of this feature is that data can be imported at max speed via a single connection - there is no need to open multiple concurrent
connections to VictoriaMetrics or [vmagent](https://victoriametrics.github.io/vmagent.html) in order to achieve the maximum data ingestion speed.
* FEATURE: cluster: improve performance for data ingestion path from `vminsert` to `vmstorage` nodes. The maximum data ingestion performance
for a single connection between `vminsert` and `vmstorage` node scales with the number of available CPU cores on `vmstorage` side.
This should help with https://github.com/VictoriaMetrics/VictoriaMetrics/issues/791 .
* FEATURE: add ability to export / import data in native format via `/api/v1/export/native` and `/api/v1/import/native`.
This is the most optimized approach for data migration between VictoriaMetrics instances. Both single-node and cluster instances are supported.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/787#issuecomment-700632551 .
* FEATURE: add `reduce_mem_usage` query option to `/api/v1/export` in order to reduce memory usage during data export / import.
See [these docs](https://victoriametrics.github.io/#how-to-export-data-in-json-line-format) for details.
* FEATURE: improve performance for `/api/v1/series` handler when it returns a big number of time series.
* FEATURE: add `vm_merge_need_free_disk_space` metric, which can be used for estimating the number of deferred background data merges due to the lack of free disk space.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/686 .
* FEATURE: add OpenBSD support. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/785 .
* BUGFIX: properly apply `-search.maxStalenessInterval` command-line flag value. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/784 .
* BUGFIX: fix displaying data in Grafana tables. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/720 .
* BUGFIX: do not adjust the number of detected CPU cores found at `/sys/devices/system/cpu/online`.
The adjustment was increasing the resulting GOMAXPROCS by 1, which looked confusing to users.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/685#issuecomment-698595309 .
* BUGFIX: vmagent: do not show `-remoteWrite.url` in initial logs if `-remoteWrite.showURL` isn't set. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/773 .
* BUGFIX: properly handle case when [/metrics/find](https://victoriametrics.github.io/#graphite-metrics-api-usage) finds both a leaf and a node for the given `query=prefix.*`.
In this case only the node must be returned, with the trailing dot stripped from its id, as carbonapi does.
# Previous releases
See [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).

View file

@ -122,8 +122,8 @@ benchmark-pure:
GO111MODULE=on CGO_ENABLED=0 go test -mod=vendor -bench=. ./app/... GO111MODULE=on CGO_ENABLED=0 go test -mod=vendor -bench=. ./app/...
vendor-update: vendor-update:
GO111MODULE=on go get -u ./lib/... GO111MODULE=on go get -u -d ./lib/...
GO111MODULE=on go get -u ./app/... GO111MODULE=on go get -u -d ./app/...
GO111MODULE=on go mod tidy GO111MODULE=on go mod tidy
GO111MODULE=on go mod vendor GO111MODULE=on go mod vendor
@ -156,4 +156,3 @@ docs-sync:
cp app/vmbackup/README.md docs/vmbackup.md cp app/vmbackup/README.md docs/vmbackup.md
cp app/vmrestore/README.md docs/vmrestore.md cp app/vmrestore/README.md docs/vmrestore.md
cp README.md docs/Single-server-VictoriaMetrics.md cp README.md docs/Single-server-VictoriaMetrics.md
cp CHANGELOG.md docs/

View file

@ -10,7 +10,7 @@
## VictoriaMetrics ## VictoriaMetrics
VictoriaMetrics is a fast, cost-effective and scalable time-series database. VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and time series database.
It is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases), It is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases),
[docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and [docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and
@ -21,11 +21,13 @@ Cluster version is available [here](https://github.com/VictoriaMetrics/VictoriaM
See our [Wiki](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki) for additional documentation. See our [Wiki](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki) for additional documentation.
[Contact us](mailto:info@victoriametrics.com) if you need paid enterprise support for VictoriaMetrics. [Contact us](mailto:info@victoriametrics.com) if you need paid enterprise support for VictoriaMetrics.
See [features available for enterprise customers](https://github.com/VictoriaMetrics/VictoriaMetrics/issues?q=is%3Aissue+label%3Aenterprise). See [features available for enterprise customers](https://victoriametrics.com/enterprise.html).
## Case studies and talks ## Case studies and talks
Click on a link in order to read the corresponding case study
* [Adidas](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#adidas) * [Adidas](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#adidas)
* [CERN](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#cern) * [CERN](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#cern)
* [COLOPL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#colopl) * [COLOPL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#colopl)
@ -46,8 +48,8 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
* VictoriaMetrics can be used as long-term storage for Prometheus or for [vmagent](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmagent/README.md). * VictoriaMetrics can be used as long-term storage for Prometheus or for [vmagent](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmagent/README.md).
See [these docs](#prometheus-setup) for details. See [these docs](#prometheus-setup) for details.
* Supports [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/), so it can be used as Prometheus drop-in replacement in Grafana. * Supports [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/), so it can be used as Prometheus drop-in replacement in Grafana.
VictoriaMetrics implements [MetricsQL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/MetricsQL) query language, which is inspired by PromQL. VictoriaMetrics implements [MetricsQL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/MetricsQL) query language, which is inspired by PromQL. MetricsQL is backwards-compatible with PromQL.
* Supports global query view. Multiple Prometheus instances may write data into VictoriaMetrics. Later this data may be used in a single query. * Supports global query view. Multiple Prometheus instances or any other data sources may write data into VictoriaMetrics. Later this data may be queried in a single query.
* High performance and good scalability for both [inserts](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b) * High performance and good scalability for both [inserts](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b)
and [selects](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4). and [selects](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4).
[Outperforms InfluxDB and TimescaleDB by up to 20x](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae). [Outperforms InfluxDB and TimescaleDB by up to 20x](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
@ -104,7 +106,9 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
* [How to send data from OpenTSDB-compatible agents](#how-to-send-data-from-opentsdb-compatible-agents) * [How to send data from OpenTSDB-compatible agents](#how-to-send-data-from-opentsdb-compatible-agents)
* [Prometheus querying API usage](#prometheus-querying-api-usage) * [Prometheus querying API usage](#prometheus-querying-api-usage)
* [Prometheus querying API enhancements](#prometheus-querying-api-enhancements) * [Prometheus querying API enhancements](#prometheus-querying-api-enhancements)
* [Graphite Metrics API usage](#graphite-metrics-api-usage) * [Graphite API usage](#graphite-api-usage)
* [Graphite Metrics API usage](#graphite-metrics-api-usage)
* [Graphite Tags API usage](#graphite-tags-api-usage)
* [How to build from sources](#how-to-build-from-sources) * [How to build from sources](#how-to-build-from-sources)
* [Development build](#development-build) * [Development build](#development-build)
* [Production build](#production-build) * [Production build](#production-build)
@ -410,6 +414,7 @@ Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via t
* [Prometheus querying API](#prometheus-querying-api-usage) * [Prometheus querying API](#prometheus-querying-api-usage)
* Metric names can be explored via [Graphite metrics API](#graphite-metrics-api-usage) * Metric names can be explored via [Graphite metrics API](#graphite-metrics-api-usage)
* Tags can be explored via [Graphite tags API](#graphite-tags-api-usage)
* [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi/blob/master/cmd/carbonapi/carbonapi.example.prometheus.yaml) * [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi/blob/master/cmd/carbonapi/carbonapi.example.prometheus.yaml)
### How to send data from OpenTSDB-compatible agents ### How to send data from OpenTSDB-compatible agents
@ -495,7 +500,9 @@ VictoriaMetrics supports the following handlers from [Prometheus querying API](h
* [/api/v1/series](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers) * [/api/v1/series](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers)
* [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names) * [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names)
* [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values) * [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values)
* [/api/v1/status/tsdb](https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats) * [/api/v1/status/tsdb](https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats). VictoriaMetrics accepts optional `topN=N` and `date=YYYY-MM-DD`
query args for this handler, where `N` is the number of top entries to return in the response and `YYYY-MM-DD` is the date for collecting the stats.
By default top 10 entries are returned and the stats are collected for the current day.
* [/api/v1/targets](https://prometheus.io/docs/prometheus/latest/querying/api/#targets) - see [these docs](#how-to-scrape-prometheus-exporters-such-as-node-exporter) for more details. * [/api/v1/targets](https://prometheus.io/docs/prometheus/latest/querying/api/#targets) - see [these docs](#how-to-scrape-prometheus-exporters-such-as-node-exporter) for more details.
These handlers can be queried from Prometheus-compatible clients such as Grafana or curl. These handlers can be queried from Prometheus-compatible clients such as Grafana or curl.
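For example, the `/api/v1/status/tsdb` handler with the optional args described above may be queried like this (the address is illustrative):
```bash
# Return the top 5 entries in the TSDB stats collected for the given date.
curl 'http://victoriametrics:8428/api/v1/status/tsdb?topN=5&date=2020-11-16'
```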
@ -522,7 +529,15 @@ Additionally VictoriaMetrics provides the following handlers:
* `/api/v1/status/active_queries` - it returns a list of currently running queries. * `/api/v1/status/active_queries` - it returns a list of currently running queries.
### Graphite Metrics API usage ### Graphite API usage
VictoriaMetrics supports the following Graphite APIs:
* Metrics API - see [these docs](#graphite-metrics-api-usage).
* Tags API - see [these docs](#graphite-tags-api-usage).
#### Graphite Metrics API usage
VictoriaMetrics supports the following handlers from [Graphite Metrics API](https://graphite-api.readthedocs.io/en/latest/api.html#the-metrics-api): VictoriaMetrics supports the following handlers from [Graphite Metrics API](https://graphite-api.readthedocs.io/en/latest/api.html#the-metrics-api):
@ -536,6 +551,19 @@ VictoriaMetrics accepts the following additional query args at `/metrics/find` a
that start with `node_`. By default `delimiter=.`. that start with `node_`. By default `delimiter=.`.
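For instance, the metric names hierarchy may be explored via `/metrics/find` like this (a sketch; the address and the `foo.*` query are illustrative):
```bash
# List metric name parts one level below the `foo.` prefix.
curl 'http://victoriametrics:8428/metrics/find?query=foo.*'
```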
#### Graphite Tags API usage
VictoriaMetrics supports the following handlers from [Graphite Tags API](https://graphite.readthedocs.io/en/stable/tags.html):
* [/tags/tagSeries](https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb)
* [/tags/tagMultiSeries](https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb)
* [/tags](https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags)
* [/tags/tag_name](https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags)
* [/tags/findSeries](https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags)
* [/tags/autoComplete/tags](https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support)
* [/tags/autoComplete/values](https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support)
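A quick smoke test for the Tags API handlers listed above (a sketch, assuming VictoriaMetrics listens on the default port 8428; the series name, tags and `expr` filters are illustrative):
```bash
# Register a series together with its tags via /tags/tagMultiSeries.
curl -X POST 'http://victoriametrics:8428/tags/tagMultiSeries' -d 'path=foo.bar;env=dev;region=us'

# Explore the registered tags and the values for the `env` tag.
curl 'http://victoriametrics:8428/tags'
curl 'http://victoriametrics:8428/tags/env'

# Find series matching the given tag expressions.
curl 'http://victoriametrics:8428/tags/findSeries?expr=env=dev&expr=region=us'
```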
### How to build from sources ### How to build from sources
We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or
@ -695,7 +723,16 @@ VictoriaMetrics provides the following handlers for exporting data:
Send a request to `http://<victoriametrics-addr>:8428/api/v1/export/native?match[]=<timeseries_selector_for_export>`, Send a request to `http://<victoriametrics-addr>:8428/api/v1/export/native?match[]=<timeseries_selector_for_export>`,
where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to export. Use `{__name__!=""}` selector for fetching all the time series. for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time series.
On large databases you may experience problems with the limit on unique time series (the default value is 300000). In this case you need to adjust the `-search.maxUniqueTimeseries` parameter:
```bash
# count unique timeseries in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
# relaunch VictoriaMetrics with the -search.maxUniqueTimeseries flag set to a value bigger than the number reported by the previous command
```
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values. unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
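A data-migration sketch using the `/api/v1/export/native` and `/api/v1/import/native` endpoints described above (the source and destination addresses are illustrative):
```bash
# Export all the time series from the source instance in native format.
curl -G 'http://source-victoriametrics:8428/api/v1/export/native' \
  --data-urlencode 'match[]={__name__!=""}' > exported_data.bin

# Import the exported data into the destination instance.
curl -X POST 'http://destination-victoriametrics:8428/api/v1/import/native' -T exported_data.bin
```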
@ -1196,13 +1233,18 @@ VictoriaMetrics also exposes currently running queries with their execution time
* It is recommended inspecting logs during troubleshooting, since they may contain useful information. * It is recommended inspecting logs during troubleshooting, since they may contain useful information.
* VictoriaMetrics buffers incoming data in memory for up to a few seconds before flushing it to persistent storage.
This may lead to the following "issues":
* Data becomes available for querying in a few seconds after inserting. It is possible to flush in-memory buffers to persistent storage
by requesting the `/internal/force_flush` http handler (see the curl sketch below).
* The last few seconds of inserted data may be lost on unclean shutdown (i.e. OOM, `kill -9` or hardware reset).
See [this article for technical details](https://valyala.medium.com/wal-usage-looks-broken-in-modern-time-series-databases-b62a627ab704).
* If VictoriaMetrics works slowly and eats more than a CPU core per 100K ingested data points per second, * If VictoriaMetrics works slowly and eats more than a CPU core per 100K ingested data points per second,
then it is likely you have too many active time series for the current amount of RAM. then it is likely you have too many active time series for the current amount of RAM.
VictoriaMetrics [exposes](#monitoring) `vm_slow_*` metrics, which could be used as an indicator of low amounts of RAM. VictoriaMetrics [exposes](#monitoring) `vm_slow_*` metrics, which could be used as an indicator of low amounts of RAM.
It is recommended increasing the amount of RAM on the node with VictoriaMetrics in order to improve It is recommended increasing the amount of RAM on the node with VictoriaMetrics in order to improve
ingestion and query performance in this case. ingestion and query performance in this case.
Another option is to increase `-memory.allowedPercent` command-line flag value. Be careful with this
option, since too big value for `-memory.allowedPercent` may result in high I/O usage.
* VictoriaMetrics prioritizes data ingestion over data querying. So if it doesn't have enough resources for data ingestion, * VictoriaMetrics prioritizes data ingestion over data querying. So if it doesn't have enough resources for data ingestion,
then data querying may slow down significantly. then data querying may slow down significantly.
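A sketch of forcing the flush of the in-memory buffers mentioned earlier in this list (the address is illustrative):
```bash
# Make the most recently ingested data immediately available for querying.
curl http://victoriametrics:8428/internal/force_flush
```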
@ -1217,9 +1259,9 @@ VictoriaMetrics also exposes currently running queries with their execution time
which would start background merge if they had more free disk space. which would start background merge if they had more free disk space.
* If VictoriaMetrics doesn't work because of certain parts are corrupted due to disk errors, * If VictoriaMetrics doesn't work because of certain parts are corrupted due to disk errors,
then just remove directories with broken parts. This will recover VictoriaMetrics at the cost then just remove directories with broken parts. It is safe removing subdirectories under `<-storageDataPath>/data/{big,small}/YYYY_MM` directories
of data loss stored in the broken parts. In the future, `vmrecover` tool will be created when VictoriaMetrics isn't running. This recovers VictoriaMetrics at the cost of data loss stored in the deleted broken parts.
for automatic recovering from such errors. In the future, `vmrecover` tool will be created for automatic recovering from such errors.
* If you see gaps on the graphs, try resetting the cache by sending request to `/internal/resetRollupResultCache`. * If you see gaps on the graphs, try resetting the cache by sending request to `/internal/resetRollupResultCache`.
If this removes gaps on the graphs, then it is likely data with timestamps older than `-search.cacheTimestampOffset` If this removes gaps on the graphs, then it is likely data with timestamps older than `-search.cacheTimestampOffset`
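For example (the address is illustrative):
```bash
# Reset the rollup result cache if graphs show gaps after ingesting data with old timestamps.
curl http://victoriametrics:8428/internal/resetRollupResultCache
```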
@ -1241,6 +1283,11 @@ VictoriaMetrics also exposes currently running queries with their execution time
This prevents ingestion of metrics with too many labels. It is recommended [monitoring](#monitoring) `vm_metrics_with_dropped_labels_total` This prevents ingestion of metrics with too many labels. It is recommended [monitoring](#monitoring) `vm_metrics_with_dropped_labels_total`
metric in order to determine whether `-maxLabelsPerTimeseries` must be adjusted for your workload. metric in order to determine whether `-maxLabelsPerTimeseries` must be adjusted for your workload.
* If you store Graphite metrics like `foo.bar.baz` in VictoriaMetrics, then `-search.treatDotsAsIsInRegexps` command-line flag could be useful.
By default `.` chars in regexps match any char. If you need to match only dots, then `\\.` must be used in regexp filters.
When the `-search.treatDotsAsIsInRegexps` option is enabled, dots in regexps are automatically escaped in order to match only dots instead of arbitrary chars.
This may significantly increase performance when locating time series for the given label filters.
* VictoriaMetrics ignores `NaN` values during data ingestion. * VictoriaMetrics ignores `NaN` values during data ingestion.

View file

@ -63,6 +63,22 @@ Then send Influx data to `http://vmagent-host:8429`. See [these docs](https://gi
Pass `-help` to `vmagent` in order to see the full list of supported command-line flags with their descriptions. Pass `-help` to `vmagent` in order to see the full list of supported command-line flags with their descriptions.
### Configuration update
`vmagent` should be restarted in order to update config options set via command-line args.
`vmagent` supports multiple approaches for reloading configs from updated config files such as `-promscrape.config`, `-remoteWrite.relabelConfig` and `-remoteWrite.urlRelabelConfig`:
* Sending `SIGHUP` signal to `vmagent` process:
```bash
kill -SIGHUP `pidof vmagent`
```
* Sending HTTP request to `http://vmagent:8429/-/reload` endpoint.
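For example, a manual reload via the HTTP endpoint mentioned above may look like this (the address is illustrative):
```bash
# Ask vmagent to re-read the updated config files.
curl http://vmagent-host:8429/-/reload
```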
There is also the `-promscrape.configCheckInterval` command-line option, which can be used for automatically reloading configs from the updated `-promscrape.config` file.
### Use cases ### Use cases
@ -197,6 +213,7 @@ The relabeling can be defined in the following places:
Read more about relabeling in the following articles: Read more about relabeling in the following articles:
* [How to use Relabeling in Prometheus and VictoriaMetrics](https://valyala.medium.com/how-to-use-relabeling-in-prometheus-and-victoriametrics-8b90fc22c4b2)
* [Life of a label](https://www.robustperception.io/life-of-a-label) * [Life of a label](https://www.robustperception.io/life-of-a-label)
* [Discarding targets and timeseries with relabeling](https://www.robustperception.io/relabelling-can-discard-targets-timeseries-and-alerts) * [Discarding targets and timeseries with relabeling](https://www.robustperception.io/relabelling-can-discard-targets-timeseries-and-alerts)
* [Dropping labels at scrape time](https://www.robustperception.io/dropping-metrics-at-scrape-time-with-prometheus) * [Dropping labels at scrape time](https://www.robustperception.io/dropping-metrics-at-scrape-time-with-prometheus)

View file

@ -208,13 +208,13 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
return true return true
case "/targets": case "/targets":
promscrapeTargetsRequests.Inc() promscrapeTargetsRequests.Inc()
w.Header().Set("Content-Type", "text/plain") w.Header().Set("Content-Type", "text/plain; charset=utf-8")
showOriginalLabels, _ := strconv.ParseBool(r.FormValue("show_original_labels")) showOriginalLabels, _ := strconv.ParseBool(r.FormValue("show_original_labels"))
promscrape.WriteHumanReadableTargetsStatus(w, showOriginalLabels) promscrape.WriteHumanReadableTargetsStatus(w, showOriginalLabels)
return true return true
case "/api/v1/targets": case "/api/v1/targets":
promscrapeAPIV1TargetsRequests.Inc() promscrapeAPIV1TargetsRequests.Inc()
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
state := r.FormValue("state") state := r.FormValue("state")
promscrape.WriteAPIV1Targets(w, state) promscrape.WriteAPIV1Targets(w, state)
return true return true
@ -228,7 +228,7 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
errMsg := fmt.Sprintf("waiting for scrapes to init, left: %d", rdy) errMsg := fmt.Sprintf("waiting for scrapes to init, left: %d", rdy)
http.Error(w, errMsg, http.StatusTooEarly) http.Error(w, errMsg, http.StatusTooEarly)
} else { } else {
w.Header().Set("Content-Type", "text/plain") w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.WriteHeader(http.StatusOK) w.WriteHeader(http.StatusOK)
w.Write([]byte("OK")) w.Write([]byte("OK"))
} }

View file

@ -82,7 +82,7 @@ func (s *VMStorage) Query(ctx context.Context, query string) ([]Metric, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
req.Header.Set("Content-Type", "application/json") req.Header.Set("Content-Type", "application/json; charset=utf-8")
if s.basicAuthPass != "" { if s.basicAuthPass != "" {
req.SetBasicAuth(s.basicAuthUser, s.basicAuthPass) req.SetBasicAuth(s.basicAuthUser, s.basicAuthPass)
} }

View file

@ -28,7 +28,7 @@ func (am *AlertManager) Send(ctx context.Context, alerts []Alert) error {
if err != nil { if err != nil {
return err return err
} }
req.Header.Set("Content-Type", "application/json") req.Header.Set("Content-Type", "application/json; charset=utf-8")
req = req.WithContext(ctx) req = req.WithContext(ctx)
if am.basicAuthPass != "" { if am.basicAuthPass != "" {
req.SetBasicAuth(am.basicAuthUser, am.basicAuthPass) req.SetBasicAuth(am.basicAuthUser, am.basicAuthPass)

View file

@ -40,7 +40,7 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err) httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true return true
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Write(data) w.Write(data)
return true return true
case "/api/v1/alerts": case "/api/v1/alerts":
@ -49,7 +49,7 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err) httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true return true
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Write(data) w.Write(data)
return true return true
case "/-/reload": case "/-/reload":
@ -67,7 +67,7 @@ func (rh *requestHandler) handler(w http.ResponseWriter, r *http.Request) bool {
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err) httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true return true
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.Write(data) w.Write(data)
return true return true
} }

View file

@ -0,0 +1,102 @@
package graphite
import (
"fmt"
"net/http"
"sort"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
graphiteparser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/graphite"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
)
// TagsTagSeriesHandler implements /tags/tagSeries handler.
//
// See https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb
func TagsTagSeriesHandler(w http.ResponseWriter, r *http.Request) error {
return registerMetrics(w, r, false)
}
// TagsTagMultiSeriesHandler implements /tags/tagMultiSeries handler.
//
// See https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb
func TagsTagMultiSeriesHandler(w http.ResponseWriter, r *http.Request) error {
return registerMetrics(w, r, true)
}
func registerMetrics(w http.ResponseWriter, r *http.Request, isJSONResponse bool) error {
startTime := time.Now()
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
paths := r.Form["path"]
var row graphiteparser.Row
var labels []prompb.Label
var b []byte
var tagsPool []graphiteparser.Tag
mrs := make([]storage.MetricRow, len(paths))
ct := time.Now().UnixNano() / 1e6
canonicalPaths := make([]string, len(paths))
for i, path := range paths {
var err error
tagsPool, err = row.UnmarshalMetricAndTags(path, tagsPool[:0])
if err != nil {
return fmt.Errorf("cannot parse path=%q: %w", path, err)
}
// Construct canonical path according to https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb
sort.Slice(row.Tags, func(i, j int) bool {
return row.Tags[i].Key < row.Tags[j].Key
})
b = append(b[:0], row.Metric...)
for _, tag := range row.Tags {
b = append(b, ';')
b = append(b, tag.Key...)
b = append(b, '=')
b = append(b, tag.Value...)
}
canonicalPaths[i] = string(b)
// Convert parsed metric and tags to labels.
labels = append(labels[:0], prompb.Label{
Name: []byte("__name__"),
Value: []byte(row.Metric),
})
for _, tag := range row.Tags {
labels = append(labels, prompb.Label{
Name: []byte(tag.Key),
Value: []byte(tag.Value),
})
}
// Put labels with the current timestamp to MetricRow
mr := &mrs[i]
mr.MetricNameRaw = storage.MarshalMetricNameRaw(mr.MetricNameRaw[:0], labels)
mr.Timestamp = ct
}
if err := vmstorage.RegisterMetricNames(mrs); err != nil {
return fmt.Errorf("cannot register paths: %w", err)
}
// Return response
contentType := "text/plain; charset=utf-8"
if isJSONResponse {
contentType = "application/json; charset=utf-8"
}
w.Header().Set("Content-Type", contentType)
WriteTagsTagMultiSeriesResponse(w, canonicalPaths, isJSONResponse)
if isJSONResponse {
tagsTagMultiSeriesDuration.UpdateDuration(startTime)
} else {
tagsTagSeriesDuration.UpdateDuration(startTime)
}
return nil
}
var (
tagsTagSeriesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/tags/tagSeries"}`)
tagsTagMultiSeriesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/tags/tagMultiSeries"}`)
)

View file

@ -0,0 +1,14 @@
{% stripspace %}
TagsTagMultiSeriesResponse generates response for /tags/tagMultiSeries .
See https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb
{% func TagsTagMultiSeriesResponse(canonicalPaths []string, isJSONResponse bool) %}
{% if isJSONResponse %}[{% endif %}
{% for i, path := range canonicalPaths %}
{%q= path %}
{% if i+1 < len(canonicalPaths) %},{% endif %}
{% endfor %}
{% if isJSONResponse %}]{% endif %}
{% endfunc %}
{% endstripspace %}

View file

@ -0,0 +1,75 @@
// Code generated by qtc from "tags_tag_multi_series_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// TagsTagMultiSeriesResponse generates response for /tags/tagMultiSeries .See https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:5
package graphite
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:5
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:5
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:5
func StreamTagsTagMultiSeriesResponse(qw422016 *qt422016.Writer, canonicalPaths []string, isJSONResponse bool) {
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:6
if isJSONResponse {
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:6
qw422016.N().S(`[`)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:6
}
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:7
for i, path := range canonicalPaths {
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:8
qw422016.N().Q(path)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:9
if i+1 < len(canonicalPaths) {
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:9
qw422016.N().S(`,`)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:9
}
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:10
}
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:11
if isJSONResponse {
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:11
qw422016.N().S(`]`)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:11
}
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
}
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
func WriteTagsTagMultiSeriesResponse(qq422016 qtio422016.Writer, canonicalPaths []string, isJSONResponse bool) {
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
StreamTagsTagMultiSeriesResponse(qw422016, canonicalPaths, isJSONResponse)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
qt422016.ReleaseWriter(qw422016)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
}
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
func TagsTagMultiSeriesResponse(canonicalPaths []string, isJSONResponse bool) string {
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
qb422016 := qt422016.AcquireByteBuffer()
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
WriteTagsTagMultiSeriesResponse(qb422016, canonicalPaths, isJSONResponse)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
qs422016 := string(qb422016.B)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
qt422016.ReleaseByteBuffer(qb422016)
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
return qs422016
//line app/vminsert/graphite/tags_tag_multi_series_response.qtpl:12
}

View file

@ -153,15 +153,31 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
influxQueryRequests.Inc() influxQueryRequests.Inc()
fmt.Fprintf(w, `{"results":[{"series":[{"values":[]}]}]}`) fmt.Fprintf(w, `{"results":[{"series":[{"values":[]}]}]}`)
return true return true
case "/tags/tagSeries":
graphiteTagsTagSeriesRequests.Inc()
if err := graphite.TagsTagSeriesHandler(w, r); err != nil {
graphiteTagsTagSeriesErrors.Inc()
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true
}
return true
case "/tags/tagMultiSeries":
graphiteTagsTagMultiSeriesRequests.Inc()
if err := graphite.TagsTagMultiSeriesHandler(w, r); err != nil {
graphiteTagsTagMultiSeriesErrors.Inc()
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true
}
return true
case "/targets": case "/targets":
promscrapeTargetsRequests.Inc() promscrapeTargetsRequests.Inc()
w.Header().Set("Content-Type", "text/plain") w.Header().Set("Content-Type", "text/plain; charset=utf-8")
showOriginalLabels, _ := strconv.ParseBool(r.FormValue("show_original_labels")) showOriginalLabels, _ := strconv.ParseBool(r.FormValue("show_original_labels"))
promscrape.WriteHumanReadableTargetsStatus(w, showOriginalLabels) promscrape.WriteHumanReadableTargetsStatus(w, showOriginalLabels)
return true return true
case "/api/v1/targets": case "/api/v1/targets":
promscrapeAPIV1TargetsRequests.Inc() promscrapeAPIV1TargetsRequests.Inc()
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
state := r.FormValue("state") state := r.FormValue("state")
promscrape.WriteAPIV1Targets(w, state) promscrape.WriteAPIV1Targets(w, state)
return true return true
@ -175,7 +191,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
errMsg := fmt.Sprintf("waiting for scrape config to init targets, configs left: %d", rdy) errMsg := fmt.Sprintf("waiting for scrape config to init targets, configs left: %d", rdy)
http.Error(w, errMsg, http.StatusTooEarly) http.Error(w, errMsg, http.StatusTooEarly)
} else { } else {
w.Header().Set("Content-Type", "text/plain") w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.WriteHeader(http.StatusOK) w.WriteHeader(http.StatusOK)
w.Write([]byte("OK")) w.Write([]byte("OK"))
} }
@ -207,6 +223,12 @@ var (
influxQueryRequests = metrics.NewCounter(`vm_http_requests_total{path="/query", protocol="influx"}`) influxQueryRequests = metrics.NewCounter(`vm_http_requests_total{path="/query", protocol="influx"}`)
graphiteTagsTagSeriesRequests = metrics.NewCounter(`vm_http_requests_total{path="/tags/tagSeries", protocol="graphite"}`)
graphiteTagsTagSeriesErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/tags/tagSeries", protocol="graphite"}`)
graphiteTagsTagMultiSeriesRequests = metrics.NewCounter(`vm_http_requests_total{path="/tags/tagMultiSeries", protocol="graphite"}`)
graphiteTagsTagMultiSeriesErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/tags/tagMultiSeries", protocol="graphite"}`)
promscrapeTargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/targets"}`) promscrapeTargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/targets"}`)
promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/targets"}`) promscrapeAPIV1TargetsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/targets"}`)

View file

@ -84,10 +84,7 @@ func MetricsFindHandler(startTime time.Time, w http.ResponseWriter, r *http.Requ
} }
paths = deduplicatePaths(paths, delimiter) paths = deduplicatePaths(paths, delimiter)
sortPaths(paths, delimiter) sortPaths(paths, delimiter)
contentType := "application/json" contentType := getContentType(jsonp)
if jsonp != "" {
contentType = "text/javascript"
}
w.Header().Set("Content-Type", contentType) w.Header().Set("Content-Type", contentType)
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
@ -166,10 +163,7 @@ func MetricsExpandHandler(startTime time.Time, w http.ResponseWriter, r *http.Re
} }
m[query] = paths m[query] = paths
} }
contentType := "application/json" contentType := getContentType(jsonp)
if jsonp != "" {
contentType = "text/javascript"
}
w.Header().Set("Content-Type", contentType) w.Header().Set("Content-Type", contentType)
if groupByExpr { if groupByExpr {
for _, paths := range m { for _, paths := range m {
@ -215,10 +209,7 @@ func MetricsIndexHandler(startTime time.Time, w http.ResponseWriter, r *http.Req
if err != nil { if err != nil {
return fmt.Errorf(`cannot obtain metric names: %w`, err) return fmt.Errorf(`cannot obtain metric names: %w`, err)
} }
contentType := "application/json" contentType := getContentType(jsonp)
if jsonp != "" {
contentType = "text/javascript"
}
w.Header().Set("Content-Type", contentType) w.Header().Set("Content-Type", contentType)
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
@ -417,3 +408,10 @@ var regexpCache = make(map[regexpCacheKey]*regexpCacheEntry)
var regexpCacheLock sync.Mutex var regexpCacheLock sync.Mutex
const maxRegexpCacheSize = 10000 const maxRegexpCacheSize = 10000
func getContentType(jsonp string) string {
if jsonp == "" {
return "application/json; charset=utf-8"
}
return "text/javascript; charset=utf-8"
}

View file

@ -0,0 +1,20 @@
{% stripspace %}
TagValuesResponse generates response for the /tags/<tag_name> handler
See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
{% func TagValuesResponse(tagName string, tagValues []string) %}
{
"tag":{%q= tagName %},
"values":[
{% for i, value := range tagValues %}
{
"count":1,
"value":{%q= value %}
}
{% if i+1 < len(tagValues) %},{% endif %}
{% endfor %}
]
}
{% endfunc %}
{% endstripspace %}

View file

@ -0,0 +1,75 @@
// Code generated by qtc from "tag_values_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// Tags generates response for /tags/<tag_name> handlerSee https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
//line app/vmselect/graphite/tag_values_response.qtpl:5
package graphite
//line app/vmselect/graphite/tag_values_response.qtpl:5
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/graphite/tag_values_response.qtpl:5
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/graphite/tag_values_response.qtpl:5
func StreamTagValuesResponse(qw422016 *qt422016.Writer, tagName string, tagValues []string) {
//line app/vmselect/graphite/tag_values_response.qtpl:5
qw422016.N().S(`{"tag":`)
//line app/vmselect/graphite/tag_values_response.qtpl:7
qw422016.N().Q(tagName)
//line app/vmselect/graphite/tag_values_response.qtpl:7
qw422016.N().S(`,"values":[`)
//line app/vmselect/graphite/tag_values_response.qtpl:9
for i, value := range tagValues {
//line app/vmselect/graphite/tag_values_response.qtpl:9
qw422016.N().S(`{"count":1,"value":`)
//line app/vmselect/graphite/tag_values_response.qtpl:12
qw422016.N().Q(value)
//line app/vmselect/graphite/tag_values_response.qtpl:12
qw422016.N().S(`}`)
//line app/vmselect/graphite/tag_values_response.qtpl:14
if i+1 < len(tagValues) {
//line app/vmselect/graphite/tag_values_response.qtpl:14
qw422016.N().S(`,`)
//line app/vmselect/graphite/tag_values_response.qtpl:14
}
//line app/vmselect/graphite/tag_values_response.qtpl:15
}
//line app/vmselect/graphite/tag_values_response.qtpl:15
qw422016.N().S(`]}`)
//line app/vmselect/graphite/tag_values_response.qtpl:18
}
//line app/vmselect/graphite/tag_values_response.qtpl:18
func WriteTagValuesResponse(qq422016 qtio422016.Writer, tagName string, tagValues []string) {
//line app/vmselect/graphite/tag_values_response.qtpl:18
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/tag_values_response.qtpl:18
StreamTagValuesResponse(qw422016, tagName, tagValues)
//line app/vmselect/graphite/tag_values_response.qtpl:18
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/tag_values_response.qtpl:18
}
//line app/vmselect/graphite/tag_values_response.qtpl:18
func TagValuesResponse(tagName string, tagValues []string) string {
//line app/vmselect/graphite/tag_values_response.qtpl:18
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/tag_values_response.qtpl:18
WriteTagValuesResponse(qb422016, tagName, tagValues)
//line app/vmselect/graphite/tag_values_response.qtpl:18
qs422016 := string(qb422016.B)
//line app/vmselect/graphite/tag_values_response.qtpl:18
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/tag_values_response.qtpl:18
return qs422016
//line app/vmselect/graphite/tag_values_response.qtpl:18
}

View file

@ -0,0 +1,368 @@
package graphite
import (
"fmt"
"net/http"
"regexp"
"sort"
"strconv"
"strings"
"time"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/bufferedwriter"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metrics"
)
// TagsAutoCompleteValuesHandler implements /tags/autoComplete/values endpoint from Graphite Tags API.
//
// See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
func TagsAutoCompleteValuesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err
}
if limit <= 0 {
// Use limit=100 by default. See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
limit = 100
}
tag := r.FormValue("tag")
if len(tag) == 0 {
return fmt.Errorf("missing `tag` query arg")
}
valuePrefix := r.FormValue("valuePrefix")
exprs := r.Form["expr"]
var tagValues []string
if len(exprs) == 0 {
// Fast path: there are no `expr` filters, so use netstorage.GetGraphiteTagValues.
// Escape special chars in tagPrefix as Graphite does.
// See https://github.com/graphite-project/graphite-web/blob/3ad279df5cb90b211953e39161df416e54a84948/webapp/graphite/tags/base.py#L228
filter := regexp.QuoteMeta(valuePrefix)
tagValues, err = netstorage.GetGraphiteTagValues(tag, filter, limit, deadline)
if err != nil {
return err
}
} else {
// Slow path: use netstorage.SearchMetricNames for applying `expr` filters.
sq, err := getSearchQueryForExprs(exprs)
if err != nil {
return err
}
mns, err := netstorage.SearchMetricNames(sq, deadline)
if err != nil {
return fmt.Errorf("cannot fetch metric names for %q: %w", sq, err)
}
m := make(map[string]struct{})
if tag == "name" {
tag = "__name__"
}
for _, mn := range mns {
tagValue := mn.GetTagValue(tag)
if len(tagValue) == 0 {
continue
}
m[string(tagValue)] = struct{}{}
}
if len(valuePrefix) > 0 {
for tagValue := range m {
if !strings.HasPrefix(tagValue, valuePrefix) {
delete(m, tagValue)
}
}
}
tagValues = make([]string, 0, len(m))
for tagValue := range m {
tagValues = append(tagValues, tagValue)
}
sort.Strings(tagValues)
if limit > 0 && limit < len(tagValues) {
tagValues = tagValues[:limit]
}
}
jsonp := r.FormValue("jsonp")
contentType := getContentType(jsonp)
w.Header().Set("Content-Type", contentType)
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteTagsAutoCompleteResponse(bw, tagValues, jsonp)
if err := bw.Flush(); err != nil {
return err
}
tagsAutoCompleteValuesDuration.UpdateDuration(startTime)
return nil
}
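// Illustrative usage sketch (not part of the original file). Assuming the server
// listens on localhost:8428 (the actual address depends on -httpListenAddr), a request
// such as
//
//	curl 'http://localhost:8428/tags/autoComplete/values?tag=host&valuePrefix=web&limit=3'
//
// returns a JSON array of matching tag values, e.g. ["web-1","web-2","web-3"].
// The /tags/autoComplete/tags endpoint below behaves analogously for tag names,
// filtering by tagPrefix instead of valuePrefix.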
var tagsAutoCompleteValuesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/tags/autoComplete/values"}`)
// TagsAutoCompleteTagsHandler implements /tags/autoComplete/tags endpoint from Graphite Tags API.
//
// See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
func TagsAutoCompleteTagsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err
}
if limit <= 0 {
// Use limit=100 by default. See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
limit = 100
}
tagPrefix := r.FormValue("tagPrefix")
exprs := r.Form["expr"]
var labels []string
if len(exprs) == 0 {
// Fast path: there are no `expr` filters, so use netstorage.GetGraphiteTags.
// Escape special chars in tagPrefix as Graphite does.
// See https://github.com/graphite-project/graphite-web/blob/3ad279df5cb90b211953e39161df416e54a84948/webapp/graphite/tags/base.py#L181
filter := regexp.QuoteMeta(tagPrefix)
labels, err = netstorage.GetGraphiteTags(filter, limit, deadline)
if err != nil {
return err
}
} else {
// Slow path: use netstorage.SearchMetricNames for applying `expr` filters.
sq, err := getSearchQueryForExprs(exprs)
if err != nil {
return err
}
mns, err := netstorage.SearchMetricNames(sq, deadline)
if err != nil {
return fmt.Errorf("cannot fetch metric names for %q: %w", sq, err)
}
m := make(map[string]struct{})
for _, mn := range mns {
m["name"] = struct{}{}
for _, tag := range mn.Tags {
m[string(tag.Key)] = struct{}{}
}
}
if len(tagPrefix) > 0 {
for label := range m {
if !strings.HasPrefix(label, tagPrefix) {
delete(m, label)
}
}
}
labels = make([]string, 0, len(m))
for label := range m {
labels = append(labels, label)
}
sort.Strings(labels)
if limit > 0 && limit < len(labels) {
labels = labels[:limit]
}
}
jsonp := r.FormValue("jsonp")
contentType := getContentType(jsonp)
w.Header().Set("Content-Type", contentType)
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteTagsAutoCompleteResponse(bw, labels, jsonp)
if err := bw.Flush(); err != nil {
return err
}
tagsAutoCompleteTagsDuration.UpdateDuration(startTime)
return nil
}
var tagsAutoCompleteTagsDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/tags/autoComplete/tags"}`)
// TagsFindSeriesHandler implements /tags/findSeries endpoint from Graphite Tags API.
//
// See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
func TagsFindSeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err
}
exprs := r.Form["expr"]
if len(exprs) == 0 {
return fmt.Errorf("expecting at least one `expr` query arg")
}
sq, err := getSearchQueryForExprs(exprs)
if err != nil {
return err
}
mns, err := netstorage.SearchMetricNames(sq, deadline)
if err != nil {
return fmt.Errorf("cannot fetch metric names for %q: %w", sq, err)
}
paths := getCanonicalPaths(mns)
if limit > 0 && limit < len(paths) {
paths = paths[:limit]
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteTagsFindSeriesResponse(bw, paths)
if err := bw.Flush(); err != nil {
return err
}
tagsFindSeriesDuration.UpdateDuration(startTime)
return nil
}
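// Illustrative example (not part of the original file): for a series cpu.usage{host="web-1",region="us"},
// a request such as
//
//	curl 'http://localhost:8428/tags/findSeries?expr=name=cpu.usage&expr=host=~web.*'
//
// returns the matching canonical paths, e.g. ["cpu.usage;host=web-1;region=us"].
// The host and port are assumptions; they depend on -httpListenAddr.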
func getCanonicalPaths(mns []storage.MetricName) []string {
paths := make([]string, 0, len(mns))
var b []byte
var tags []storage.Tag
for _, mn := range mns {
b = append(b[:0], mn.MetricGroup...)
tags = append(tags[:0], mn.Tags...)
sort.Slice(tags, func(i, j int) bool {
return string(tags[i].Key) < string(tags[j].Key)
})
for _, tag := range tags {
b = append(b, ';')
b = append(b, tag.Key...)
b = append(b, '=')
b = append(b, tag.Value...)
}
paths = append(paths, string(b))
}
sort.Strings(paths)
return paths
}
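// For example (illustrative): a metric cpu.usage with tags {region="us", host="web-1"}
// is serialized as "cpu.usage;host=web-1;region=us"; tags are sorted by key so that
// the same series always produces the same canonical path.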
var tagsFindSeriesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/tags/findSeries"}`)
// TagValuesHandler implements /tags/<tag_name> endpoint from Graphite Tags API.
//
// See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
func TagValuesHandler(startTime time.Time, tagName string, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err
}
filter := r.FormValue("filter")
tagValues, err := netstorage.GetGraphiteTagValues(tagName, filter, limit, deadline)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteTagValuesResponse(bw, tagName, tagValues)
if err := bw.Flush(); err != nil {
return err
}
tagValuesDuration.UpdateDuration(startTime)
return nil
}
var tagValuesDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/tags/<tag_name>"}`)
// TagsHandler implements /tags endpoint from Graphite Tags API.
//
// See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
func TagsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
deadline := searchutils.GetDeadlineForQuery(r, startTime)
if err := r.ParseForm(); err != nil {
return fmt.Errorf("cannot parse form values: %w", err)
}
limit, err := getInt(r, "limit")
if err != nil {
return err
}
filter := r.FormValue("filter")
labels, err := netstorage.GetGraphiteTags(filter, limit, deadline)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
WriteTagsResponse(bw, labels)
if err := bw.Flush(); err != nil {
return err
}
tagsDuration.UpdateDuration(startTime)
return nil
}
var tagsDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/tags"}`)
func getInt(r *http.Request, argName string) (int, error) {
argValue := r.FormValue(argName)
if len(argValue) == 0 {
return 0, nil
}
n, err := strconv.Atoi(argValue)
if err != nil {
return 0, fmt.Errorf("cannot parse %q=%q: %w", argName, argValue, err)
}
return n, nil
}
func getSearchQueryForExprs(exprs []string) (*storage.SearchQuery, error) {
tfs, err := exprsToTagFilters(exprs)
if err != nil {
return nil, err
}
ct := time.Now().UnixNano() / 1e6
sq := storage.NewSearchQuery(0, ct, [][]storage.TagFilter{tfs})
return sq, nil
}
func exprsToTagFilters(exprs []string) ([]storage.TagFilter, error) {
tfs := make([]storage.TagFilter, 0, len(exprs))
for _, expr := range exprs {
tf, err := parseFilterExpr(expr)
if err != nil {
return nil, fmt.Errorf("cannot parse `expr` query arg: %w", err)
}
tfs = append(tfs, *tf)
}
return tfs, nil
}
func parseFilterExpr(s string) (*storage.TagFilter, error) {
n := strings.Index(s, "=")
if n < 0 {
return nil, fmt.Errorf("missing tag value in filter expression %q", s)
}
tagName := s[:n]
tagValue := s[n+1:]
isNegative := false
if strings.HasSuffix(tagName, "!") {
isNegative = true
tagName = tagName[:len(tagName)-1]
}
if tagName == "name" {
tagName = ""
}
isRegexp := false
if strings.HasPrefix(tagValue, "~") {
isRegexp = true
tagValue = "^(?:" + tagValue[1:] + ").*"
}
return &storage.TagFilter{
Key: []byte(tagName),
Value: []byte(tagValue),
IsNegative: isNegative,
IsRegexp: isRegexp,
}, nil
}
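// Illustrative mapping of `expr` filters to storage.TagFilter (not part of the original file):
//
//	"host=web-1"   -> {Key: "host", Value: "web-1"}
//	"host!=web-1"  -> {Key: "host", Value: "web-1", IsNegative: true}
//	"host=~web.*"  -> {Key: "host", Value: "^(?:web.*).*", IsRegexp: true}
//	"name=cpu.sys" -> {Key: "", Value: "cpu.sys"} // an empty key selects the metric name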

View file

@ -0,0 +1,16 @@
{% stripspace %}
TagsAutoCompleteResponse generates responses for /tags/autoComplete/{tags,values} handlers in Graphite Tags API.
See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
{% func TagsAutoCompleteResponse(ss []string, jsonp string) %}
{% if jsonp != "" %}{%s= jsonp %}({% endif %}
[
{% for i, s := range ss %}
{%q= s %}
{% if i+1 < len(ss) %},{% endif %}
{% endfor %}
]
{% if jsonp != "" %}){% endif %}
{% endfunc %}
{% endstripspace %}

View file

@ -0,0 +1,81 @@
// Code generated by qtc from "tags_autocomplete_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// TagsAutoCompleteResponse generates responses for /tags/autoComplete/{tags,values} handlers in Graphite Tags API. See https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:5
package graphite
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:5
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:5
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:5
func StreamTagsAutoCompleteResponse(qw422016 *qt422016.Writer, ss []string, jsonp string) {
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:6
if jsonp != "" {
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:6
qw422016.N().S(jsonp)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:6
qw422016.N().S(`(`)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:6
}
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:6
qw422016.N().S(`[`)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:8
for i, s := range ss {
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:9
qw422016.N().Q(s)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:10
if i+1 < len(ss) {
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:10
qw422016.N().S(`,`)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:10
}
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:11
}
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:11
qw422016.N().S(`]`)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:13
if jsonp != "" {
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:13
qw422016.N().S(`)`)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:13
}
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
}
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
func WriteTagsAutoCompleteResponse(qq422016 qtio422016.Writer, ss []string, jsonp string) {
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
StreamTagsAutoCompleteResponse(qw422016, ss, jsonp)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
}
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
func TagsAutoCompleteResponse(ss []string, jsonp string) string {
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
WriteTagsAutoCompleteResponse(qb422016, ss, jsonp)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
qs422016 := string(qb422016.B)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
return qs422016
//line app/vmselect/graphite/tags_autocomplete_response.qtpl:14
}

View file

@ -0,0 +1,12 @@
{% stripspace %}
{% func TagsFindSeriesResponse(paths []string) %}
[
{% for i, path := range paths %}
{%q= path %}
{% if i+1 < len(paths) %},{% endif %}
{% endfor %}
]
{% endfunc %}
{% endstripspace %}

View file

@ -0,0 +1,65 @@
// Code generated by qtc from "tags_find_series_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
//line app/vmselect/graphite/tags_find_series_response.qtpl:3
package graphite
//line app/vmselect/graphite/tags_find_series_response.qtpl:3
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/graphite/tags_find_series_response.qtpl:3
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/graphite/tags_find_series_response.qtpl:3
func StreamTagsFindSeriesResponse(qw422016 *qt422016.Writer, paths []string) {
//line app/vmselect/graphite/tags_find_series_response.qtpl:3
qw422016.N().S(`[`)
//line app/vmselect/graphite/tags_find_series_response.qtpl:5
for i, path := range paths {
//line app/vmselect/graphite/tags_find_series_response.qtpl:6
qw422016.N().Q(path)
//line app/vmselect/graphite/tags_find_series_response.qtpl:7
if i+1 < len(paths) {
//line app/vmselect/graphite/tags_find_series_response.qtpl:7
qw422016.N().S(`,`)
//line app/vmselect/graphite/tags_find_series_response.qtpl:7
}
//line app/vmselect/graphite/tags_find_series_response.qtpl:8
}
//line app/vmselect/graphite/tags_find_series_response.qtpl:8
qw422016.N().S(`]`)
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
}
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
func WriteTagsFindSeriesResponse(qq422016 qtio422016.Writer, paths []string) {
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
StreamTagsFindSeriesResponse(qw422016, paths)
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
}
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
func TagsFindSeriesResponse(paths []string) string {
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
WriteTagsFindSeriesResponse(qb422016, paths)
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
qs422016 := string(qb422016.B)
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
return qs422016
//line app/vmselect/graphite/tags_find_series_response.qtpl:10
}

View file

@ -0,0 +1,16 @@
{% stripspace %}
Tags generates response for /tags handler
See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
{% func TagsResponse(tags []string) %}
[
{% for i, tag := range tags %}
{
"tag":{%q= tag %}
}
{% if i+1 < len(tags) %},{% endif %}
{% endfor %}
]
{% endfunc %}
{% endstripspace %}

View file

@ -0,0 +1,71 @@
// Code generated by qtc from "tags_response.qtpl". DO NOT EDIT.
// See https://github.com/valyala/quicktemplate for details.
// Tags generates response for /tags handler. See https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags
//line app/vmselect/graphite/tags_response.qtpl:5
package graphite
//line app/vmselect/graphite/tags_response.qtpl:5
import (
qtio422016 "io"
qt422016 "github.com/valyala/quicktemplate"
)
//line app/vmselect/graphite/tags_response.qtpl:5
var (
_ = qtio422016.Copy
_ = qt422016.AcquireByteBuffer
)
//line app/vmselect/graphite/tags_response.qtpl:5
func StreamTagsResponse(qw422016 *qt422016.Writer, tags []string) {
//line app/vmselect/graphite/tags_response.qtpl:5
qw422016.N().S(`[`)
//line app/vmselect/graphite/tags_response.qtpl:7
for i, tag := range tags {
//line app/vmselect/graphite/tags_response.qtpl:7
qw422016.N().S(`{"tag":`)
//line app/vmselect/graphite/tags_response.qtpl:9
qw422016.N().Q(tag)
//line app/vmselect/graphite/tags_response.qtpl:9
qw422016.N().S(`}`)
//line app/vmselect/graphite/tags_response.qtpl:11
if i+1 < len(tags) {
//line app/vmselect/graphite/tags_response.qtpl:11
qw422016.N().S(`,`)
//line app/vmselect/graphite/tags_response.qtpl:11
}
//line app/vmselect/graphite/tags_response.qtpl:12
}
//line app/vmselect/graphite/tags_response.qtpl:12
qw422016.N().S(`]`)
//line app/vmselect/graphite/tags_response.qtpl:14
}
//line app/vmselect/graphite/tags_response.qtpl:14
func WriteTagsResponse(qq422016 qtio422016.Writer, tags []string) {
//line app/vmselect/graphite/tags_response.qtpl:14
qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/tags_response.qtpl:14
StreamTagsResponse(qw422016, tags)
//line app/vmselect/graphite/tags_response.qtpl:14
qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/tags_response.qtpl:14
}
//line app/vmselect/graphite/tags_response.qtpl:14
func TagsResponse(tags []string) string {
//line app/vmselect/graphite/tags_response.qtpl:14
qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/tags_response.qtpl:14
WriteTagsResponse(qb422016, tags)
//line app/vmselect/graphite/tags_response.qtpl:14
qs422016 := string(qb422016.B)
//line app/vmselect/graphite/tags_response.qtpl:14
qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/tags_response.qtpl:14
return qs422016
//line app/vmselect/graphite/tags_response.qtpl:14
}

View file

@ -132,6 +132,16 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true return true
} }
} }
if strings.HasPrefix(path, "/tags/") && !isGraphiteTagsPath(path) {
tagName := r.URL.Path[len("/tags/"):]
graphiteTagValuesRequests.Inc()
if err := graphite.TagValuesHandler(startTime, tagName, w, r); err != nil {
graphiteTagValuesErrors.Inc()
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true
}
return true
}
switch path { switch path {
case "/api/v1/query": case "/api/v1/query":
@ -259,22 +269,56 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
return true return true
} }
return true return true
case "/tags":
graphiteTagsRequests.Inc()
if err := graphite.TagsHandler(startTime, w, r); err != nil {
graphiteTagsErrors.Inc()
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true
}
return true
case "/tags/findSeries":
graphiteTagsFindSeriesRequests.Inc()
if err := graphite.TagsFindSeriesHandler(startTime, w, r); err != nil {
graphiteTagsFindSeriesErrors.Inc()
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true
}
return true
case "/tags/autoComplete/tags":
graphiteTagsAutoCompleteTagsRequests.Inc()
httpserver.EnableCORS(w, r)
if err := graphite.TagsAutoCompleteTagsHandler(startTime, w, r); err != nil {
graphiteTagsAutoCompleteTagsErrors.Inc()
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true
}
return true
case "/tags/autoComplete/values":
graphiteTagsAutoCompleteValuesRequests.Inc()
httpserver.EnableCORS(w, r)
if err := graphite.TagsAutoCompleteValuesHandler(startTime, w, r); err != nil {
graphiteTagsAutoCompleteValuesErrors.Inc()
httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
return true
}
return true
case "/api/v1/rules": case "/api/v1/rules":
// Return dumb placeholder // Return dumb placeholder
rulesRequests.Inc() rulesRequests.Inc()
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
fmt.Fprintf(w, "%s", `{"status":"success","data":{"groups":[]}}`) fmt.Fprintf(w, "%s", `{"status":"success","data":{"groups":[]}}`)
return true return true
case "/api/v1/alerts": case "/api/v1/alerts":
// Return dumb placeholder // Return dumb placeholder
alertsRequests.Inc() alertsRequests.Inc()
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
fmt.Fprintf(w, "%s", `{"status":"success","data":{"alerts":[]}}`) fmt.Fprintf(w, "%s", `{"status":"success","data":{"alerts":[]}}`)
return true return true
case "/api/v1/metadata": case "/api/v1/metadata":
// Return dumb placeholder // Return dumb placeholder
metadataRequests.Inc() metadataRequests.Inc()
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
fmt.Fprintf(w, "%s", `{"status":"success","data":{}}`) fmt.Fprintf(w, "%s", `{"status":"success","data":{}}`)
return true return true
case "/api/v1/admin/tsdb/delete_series": case "/api/v1/admin/tsdb/delete_series":
@ -296,10 +340,22 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
} }
} }
func isGraphiteTagsPath(path string) bool {
switch path {
// See https://graphite.readthedocs.io/en/stable/tags.html for a list of Graphite Tags API paths.
// Do not include `/tags/<tag_name>` here, since this will fool the caller.
case "/tags/tagSeries", "/tags/tagMultiSeries", "/tags/findSeries",
"/tags/autoComplete/tags", "/tags/autoComplete/values", "/tags/delSeries":
return true
default:
return false
}
}
func sendPrometheusError(w http.ResponseWriter, r *http.Request, err error) { func sendPrometheusError(w http.ResponseWriter, r *http.Request, err error) {
logger.Warnf("error in %q: %s", r.RequestURI, err) logger.Warnf("error in %q: %s", r.RequestURI, err)
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
statusCode := http.StatusUnprocessableEntity statusCode := http.StatusUnprocessableEntity
var esc *httpserver.ErrorWithStatusCode var esc *httpserver.ErrorWithStatusCode
if errors.As(err, &esc) { if errors.As(err, &esc) {
@ -360,6 +416,21 @@ var (
graphiteMetricsIndexRequests = metrics.NewCounter(`vm_http_requests_total{path="/metrics/index.json"}`) graphiteMetricsIndexRequests = metrics.NewCounter(`vm_http_requests_total{path="/metrics/index.json"}`)
graphiteMetricsIndexErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/metrics/index.json"}`) graphiteMetricsIndexErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/metrics/index.json"}`)
graphiteTagsRequests = metrics.NewCounter(`vm_http_requests_total{path="/tags"}`)
graphiteTagsErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/tags"}`)
graphiteTagValuesRequests = metrics.NewCounter(`vm_http_requests_total{path="/tags/<tag_name>"}`)
graphiteTagValuesErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/tags/<tag_name>"}`)
graphiteTagsFindSeriesRequests = metrics.NewCounter(`vm_http_requests_total{path="/tags/findSeries"}`)
graphiteTagsFindSeriesErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/tags/findSeries"}`)
graphiteTagsAutoCompleteTagsRequests = metrics.NewCounter(`vm_http_requests_total{path="/tags/autoComplete/tags"}`)
graphiteTagsAutoCompleteTagsErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/tags/autoComplete/tags"}`)
graphiteTagsAutoCompleteValuesRequests = metrics.NewCounter(`vm_http_requests_total{path="/tags/autoComplete/values"}`)
graphiteTagsAutoCompleteValuesErrors = metrics.NewCounter(`vm_http_request_errors_total{path="/tags/autoComplete/values"}`)
rulesRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/rules"}`) rulesRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/rules"}`)
alertsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/alerts"}`) alertsRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/alerts"}`)
metadataRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/metadata"}`) metadataRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/metadata"}`)

View file

@ -5,6 +5,7 @@ import (
"errors" "errors"
"flag" "flag"
"fmt" "fmt"
"regexp"
"runtime" "runtime"
"sort" "sort"
"sync" "sync"
@ -523,6 +524,35 @@ func GetLabelsOnTimeRange(tr storage.TimeRange, deadline searchutils.Deadline) (
return labels, nil return labels, nil
} }
// GetGraphiteTags returns Graphite tags until the given deadline.
func GetGraphiteTags(filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
labels, err := GetLabels(deadline)
if err != nil {
return nil, err
}
// Substitute "__name__" with "name" for Graphite compatibility
for i := range labels {
if labels[i] == "__name__" {
labels[i] = "name"
sort.Strings(labels)
break
}
}
if len(filter) > 0 {
labels, err = applyGraphiteRegexpFilter(filter, labels)
if err != nil {
return nil, err
}
}
if limit > 0 && limit < len(labels) {
labels = labels[:limit]
}
return labels, nil
}
// GetLabels returns labels until the given deadline. // GetLabels returns labels until the given deadline.
func GetLabels(deadline searchutils.Deadline) ([]string, error) { func GetLabels(deadline searchutils.Deadline) ([]string, error) {
if deadline.Exceeded() { if deadline.Exceeded() {
@ -599,6 +629,30 @@ func GetLabelValuesOnTimeRange(labelName string, tr storage.TimeRange, deadline
return labelValues, nil return labelValues, nil
} }
// GetGraphiteTagValues returns tag values for the given tagName until the given deadline.
func GetGraphiteTagValues(tagName, filter string, limit int, deadline searchutils.Deadline) ([]string, error) {
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
}
if tagName == "name" {
tagName = ""
}
tagValues, err := GetLabelValues(tagName, deadline)
if err != nil {
return nil, err
}
if len(filter) > 0 {
tagValues, err = applyGraphiteRegexpFilter(filter, tagValues)
if err != nil {
return nil, err
}
}
if limit > 0 && limit < len(tagValues) {
tagValues = tagValues[:limit]
}
return tagValues, nil
}
// GetLabelValues returns label values for the given labelName // GetLabelValues returns label values for the given labelName
// until the given deadline. // until the given deadline.
func GetLabelValues(labelName string, deadline searchutils.Deadline) ([]string, error) { func GetLabelValues(labelName string, deadline searchutils.Deadline) ([]string, error) {
@ -819,6 +873,32 @@ var exportWorkPool = &sync.Pool{
}, },
} }
// SearchMetricNames returns all the metric names matching sq until the given deadline.
func SearchMetricNames(sq *storage.SearchQuery, deadline searchutils.Deadline) ([]storage.MetricName, error) {
if deadline.Exceeded() {
return nil, fmt.Errorf("timeout exceeded before starting to search metric names: %s", deadline.String())
}
// Setup search.
tfss, err := setupTfss(sq.TagFilterss)
if err != nil {
return nil, err
}
tr := storage.TimeRange{
MinTimestamp: sq.MinTimestamp,
MaxTimestamp: sq.MaxTimestamp,
}
if err := vmstorage.CheckTimeRange(tr); err != nil {
return nil, err
}
mns, err := vmstorage.SearchMetricNames(tfss, tr, *maxMetricsPerSearch, deadline.Deadline())
if err != nil {
return nil, fmt.Errorf("cannot find metric names: %w", err)
}
return mns, nil
}
// ProcessSearchQuery performs sq until the given deadline. // ProcessSearchQuery performs sq until the given deadline.
// //
// Results.RunParallel or Results.Cancel must be called on the returned Results. // Results.RunParallel or Results.Cancel must be called on the returned Results.
@ -951,3 +1031,20 @@ func setupTfss(tagFilterss [][]storage.TagFilter) ([]*storage.TagFilters, error)
} }
return tfss, nil return tfss, nil
} }
func applyGraphiteRegexpFilter(filter string, ss []string) ([]string, error) {
// Anchor filter regexp to the beginning of the string as Graphite does.
// See https://github.com/graphite-project/graphite-web/blob/3ad279df5cb90b211953e39161df416e54a84948/webapp/graphite/tags/localdatabase.py#L157
filter = "^(?:" + filter + ")"
re, err := regexp.Compile(filter)
if err != nil {
return nil, fmt.Errorf("cannot parse regexp filter=%q: %w", filter, err)
}
dst := ss[:0]
for _, s := range ss {
if re.MatchString(s) {
dst = append(dst, s)
}
}
return dst, nil
}
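// For example (illustrative): filter "host" is compiled to the regexp "^(?:host)",
// so it keeps "host" and "hostname" but drops "webhost", mirroring Graphite's
// prefix-anchored filter semantics.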

View file

@ -78,17 +78,13 @@ func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request
if err != nil { if err != nil {
return err return err
} }
sq := &storage.SearchQuery{ sq := storage.NewSearchQuery(start, end, tagFilterss)
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
rss, err := netstorage.ProcessSearchQuery(sq, true, deadline) rss, err := netstorage.ProcessSearchQuery(sq, true, deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot fetch data for %q: %w", sq, err) return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
} }
w.Header().Set("Content-Type", "text/plain") w.Header().Set("Content-Type", "text/plain; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error { err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
@ -146,12 +142,8 @@ func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Reques
if err != nil { if err != nil {
return err return err
} }
sq := &storage.SearchQuery{ sq := storage.NewSearchQuery(start, end, tagFilterss)
MinTimestamp: start, w.Header().Set("Content-Type", "text/csv; charset=utf-8")
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
w.Header().Set("Content-Type", "text/csv")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
@ -227,11 +219,7 @@ func ExportNativeHandler(startTime time.Time, w http.ResponseWriter, r *http.Req
if err != nil { if err != nil {
return err return err
} }
sq := &storage.SearchQuery{ sq := storage.NewSearchQuery(start, end, tagFilterss)
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
w.Header().Set("Content-Type", "VictoriaMetrics/native") w.Header().Set("Content-Type", "VictoriaMetrics/native")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
@ -331,9 +319,9 @@ func exportHandler(w http.ResponseWriter, matches []string, start, end int64, fo
WriteExportJSONLine(bb, xb) WriteExportJSONLine(bb, xb)
resultsCh <- bb resultsCh <- bb
} }
contentType := "application/stream+json" contentType := "application/stream+json; charset=utf-8"
if format == "prometheus" { if format == "prometheus" {
contentType = "text/plain" contentType = "text/plain; charset=utf-8"
writeLineFunc = func(xb *exportBlock, resultsCh chan<- *quicktemplate.ByteBuffer) { writeLineFunc = func(xb *exportBlock, resultsCh chan<- *quicktemplate.ByteBuffer) {
bb := quicktemplate.AcquireByteBuffer() bb := quicktemplate.AcquireByteBuffer()
WriteExportPrometheusLine(bb, xb) WriteExportPrometheusLine(bb, xb)
@ -381,11 +369,7 @@ func exportHandler(w http.ResponseWriter, matches []string, start, end int64, fo
if err != nil { if err != nil {
return err return err
} }
sq := &storage.SearchQuery{ sq := storage.NewSearchQuery(start, end, tagFilterss)
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
w.Header().Set("Content-Type", contentType) w.Header().Set("Content-Type", contentType)
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
@ -486,9 +470,7 @@ func DeleteHandler(startTime time.Time, r *http.Request) error {
if err != nil { if err != nil {
return err return err
} }
sq := &storage.SearchQuery{ sq := storage.NewSearchQuery(0, 0, tagFilterss)
TagFilterss: tagFilterss,
}
deletedCount, err := netstorage.DeleteSeries(sq) deletedCount, err := netstorage.DeleteSeries(sq)
if err != nil { if err != nil {
return fmt.Errorf("cannot delete time series matching %q: %w", matches, err) return fmt.Errorf("cannot delete time series matching %q: %w", matches, err)
@ -561,7 +543,7 @@ func LabelValuesHandler(startTime time.Time, labelName string, w http.ResponseWr
} }
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
WriteLabelValuesResponse(bw, labelValues) WriteLabelValuesResponse(bw, labelValues)
@ -596,32 +578,41 @@ func labelValuesWithMatches(labelName string, matches []string, start, end int64
if start >= end { if start >= end {
end = start + defaultStep end = start + defaultStep
} }
sq := &storage.SearchQuery{ sq := storage.NewSearchQuery(start, end, tagFilterss)
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch data for %q: %w", sq, err)
}
m := make(map[string]struct{}) m := make(map[string]struct{})
var mLock sync.Mutex if end-start > 24*3600*1000 {
err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error { // It is cheaper to call SearchMetricNames on time ranges exceeding a day.
labelValue := rs.MetricName.GetTagValue(labelName) mns, err := netstorage.SearchMetricNames(sq, deadline)
if len(labelValue) == 0 { if err != nil {
return nil return nil, fmt.Errorf("cannot fetch time series for %q: %w", sq, err)
}
for _, mn := range mns {
labelValue := mn.GetTagValue(labelName)
if len(labelValue) == 0 {
continue
}
m[string(labelValue)] = struct{}{}
}
} else {
rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch data for %q: %w", sq, err)
}
var mLock sync.Mutex
err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
labelValue := rs.MetricName.GetTagValue(labelName)
if len(labelValue) == 0 {
return nil
}
mLock.Lock()
m[string(labelValue)] = struct{}{}
mLock.Unlock()
return nil
})
if err != nil {
return nil, fmt.Errorf("error when data fetching: %w", err)
} }
mLock.Lock()
m[string(labelValue)] = struct{}{}
mLock.Unlock()
return nil
})
if err != nil {
return nil, fmt.Errorf("error when data fetching: %w", err)
} }
labelValues := make([]string, 0, len(m)) labelValues := make([]string, 0, len(m))
for labelValue := range m { for labelValue := range m {
labelValues = append(labelValues, labelValue) labelValues = append(labelValues, labelValue)
@ -639,7 +630,7 @@ func LabelsCountHandler(startTime time.Time, w http.ResponseWriter, r *http.Requ
if err != nil { if err != nil {
return fmt.Errorf(`cannot obtain label entries: %w`, err) return fmt.Errorf(`cannot obtain label entries: %w`, err)
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
WriteLabelsCountResponse(bw, labelEntries) WriteLabelsCountResponse(bw, labelEntries)
@ -690,7 +681,7 @@ func TSDBStatusHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
if err != nil { if err != nil {
return fmt.Errorf(`cannot obtain tsdb status for date=%d, topN=%d: %w`, date, topN, err) return fmt.Errorf(`cannot obtain tsdb status for date=%d, topN=%d: %w`, date, topN, err)
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
WriteTSDBStatusResponse(bw, status) WriteTSDBStatusResponse(bw, status)
@ -760,7 +751,7 @@ func LabelsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
} }
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
WriteLabelsResponse(bw, labels) WriteLabelsResponse(bw, labels)
@ -782,33 +773,41 @@ func labelsWithMatches(matches []string, start, end int64, deadline searchutils.
if start >= end { if start >= end {
end = start + defaultStep end = start + defaultStep
} }
sq := &storage.SearchQuery{ sq := storage.NewSearchQuery(start, end, tagFilterss)
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch data for %q: %w", sq, err)
}
m := make(map[string]struct{}) m := make(map[string]struct{})
var mLock sync.Mutex if end-start > 24*3600*1000 {
err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error { // It is cheaper to call SearchMetricNames on time ranges exceeding a day.
mLock.Lock() mns, err := netstorage.SearchMetricNames(sq, deadline)
tags := rs.MetricName.Tags if err != nil {
for i := range tags { return nil, fmt.Errorf("cannot fetch time series for %q: %w", sq, err)
t := &tags[i] }
m[string(t.Key)] = struct{}{} for _, mn := range mns {
for _, tag := range mn.Tags {
m[string(tag.Key)] = struct{}{}
}
}
if len(mns) > 0 {
m["__name__"] = struct{}{}
}
} else {
rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
if err != nil {
return nil, fmt.Errorf("cannot fetch data for %q: %w", sq, err)
}
var mLock sync.Mutex
err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
mLock.Lock()
for _, tag := range rs.MetricName.Tags {
m[string(tag.Key)] = struct{}{}
}
m["__name__"] = struct{}{}
mLock.Unlock()
return nil
})
if err != nil {
return nil, fmt.Errorf("error when data fetching: %w", err)
} }
m["__name__"] = struct{}{}
mLock.Unlock()
return nil
})
if err != nil {
return nil, fmt.Errorf("error when data fetching: %w", err)
} }
labels := make([]string, 0, len(m)) labels := make([]string, 0, len(m))
for label := range m { for label := range m {
labels = append(labels, label) labels = append(labels, label)
@ -826,7 +825,7 @@ func SeriesCountHandler(startTime time.Time, w http.ResponseWriter, r *http.Requ
if err != nil { if err != nil {
return fmt.Errorf("cannot obtain series count: %w", err) return fmt.Errorf("cannot obtain series count: %w", err)
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
WriteSeriesCountResponse(bw, n) WriteSeriesCountResponse(bw, n)
@ -873,17 +872,39 @@ func SeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
if start >= end { if start >= end {
end = start + defaultStep end = start + defaultStep
} }
sq := &storage.SearchQuery{ sq := storage.NewSearchQuery(start, end, tagFilterss)
MinTimestamp: start, if end-start > 24*3600*1000 {
MaxTimestamp: end, // It is cheaper to call SearchMetricNames on time ranges exceeding a day.
TagFilterss: tagFilterss, mns, err := netstorage.SearchMetricNames(sq, deadline)
if err != nil {
return fmt.Errorf("cannot fetch time series for %q: %w", sq, err)
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw)
resultsCh := make(chan *quicktemplate.ByteBuffer)
go func() {
for i := range mns {
bb := quicktemplate.AcquireByteBuffer()
writemetricNameObject(bb, &mns[i])
resultsCh <- bb
}
close(resultsCh)
}()
// WriteSeriesResponse must consume all the data from resultsCh.
WriteSeriesResponse(bw, resultsCh)
if err := bw.Flush(); err != nil {
return err
}
seriesDuration.UpdateDuration(startTime)
return nil
} }
rss, err := netstorage.ProcessSearchQuery(sq, false, deadline) rss, err := netstorage.ProcessSearchQuery(sq, false, deadline)
if err != nil { if err != nil {
return fmt.Errorf("cannot fetch data for %q: %w", sq, err) return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
resultsCh := make(chan *quicktemplate.ByteBuffer) resultsCh := make(chan *quicktemplate.ByteBuffer)
@ -1020,7 +1041,7 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
} }
} }
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
WriteQueryResponse(bw, result) WriteQueryResponse(bw, result)
@ -1119,7 +1140,7 @@ func queryRangeHandler(startTime time.Time, w http.ResponseWriter, query string,
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/153 // See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/153
result = removeEmptyValuesAndTimeseries(result) result = removeEmptyValuesAndTimeseries(result)
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
bw := bufferedwriter.Get(w) bw := bufferedwriter.Get(w)
defer bufferedwriter.Put(bw) defer bufferedwriter.Put(bw)
WriteQueryRangeResponse(bw, result) WriteQueryRangeResponse(bw, result)

View file

@ -653,11 +653,7 @@ func evalRollupFuncWithMetricExpr(ec *EvalConfig, name string, rf rollupFunc,
} else { } else {
minTimestamp -= ec.Step minTimestamp -= ec.Step
} }
sq := &storage.SearchQuery{ sq := storage.NewSearchQuery(minTimestamp, ec.End, [][]storage.TagFilter{tfs})
MinTimestamp: minTimestamp,
MaxTimestamp: ec.End,
TagFilterss: [][]storage.TagFilter{tfs},
}
rss, err := netstorage.ProcessSearchQuery(sq, true, ec.Deadline) rss, err := netstorage.ProcessSearchQuery(sq, true, ec.Deadline)
if err != nil { if err != nil {
return nil, err return nil, err

View file

@ -5,6 +5,7 @@ import (
"fmt" "fmt"
"math" "math"
"sort" "sort"
"strings"
"sync" "sync"
"sync/atomic" "sync/atomic"
"time" "time"
@ -15,7 +16,13 @@ import (
"github.com/VictoriaMetrics/metricsql" "github.com/VictoriaMetrics/metricsql"
) )
var logSlowQueryDuration = flag.Duration("search.logSlowQueryDuration", 5*time.Second, "Log queries with execution time exceeding this value. Zero disables slow query logging") var (
logSlowQueryDuration = flag.Duration("search.logSlowQueryDuration", 5*time.Second, "Log queries with execution time exceeding this value. Zero disables slow query logging")
treatDotsAsIsInRegexps = flag.Bool("search.treatDotsAsIsInRegexps", false, "Whether to treat dots as is in regexp label filters used in queries. "+
`For example, foo{bar=~"a.b.c"} will be automatically converted to foo{bar=~"a\\.b\\.c"}, i.e. all the dots in regexp filters will be automatically escaped `+
`in order to match only dot char instead of matching any char. Dots in ".+", ".*" and ".{n}" regexps aren't escaped. `+
`Such escaping can be useful when querying Graphite data`)
)
var slowQueries = metrics.NewCounter(`vm_slow_queries_total`) var slowQueries = metrics.NewCounter(`vm_slow_queries_total`)
@ -177,6 +184,9 @@ func parsePromQLWithCache(q string) (metricsql.Expr, error) {
if err == nil { if err == nil {
e = metricsql.Optimize(e) e = metricsql.Optimize(e)
e = adjustCmpOps(e) e = adjustCmpOps(e)
if *treatDotsAsIsInRegexps {
e = escapeDotsInRegexpLabelFilters(e)
}
} }
pcv = &parseCacheValue{ pcv = &parseCacheValue{
e: e, e: e,
@ -190,6 +200,41 @@ func parsePromQLWithCache(q string) (metricsql.Expr, error) {
return pcv.e, nil return pcv.e, nil
} }
func escapeDotsInRegexpLabelFilters(e metricsql.Expr) metricsql.Expr {
metricsql.VisitAll(e, func(expr metricsql.Expr) {
me, ok := expr.(*metricsql.MetricExpr)
if !ok {
return
}
for i := range me.LabelFilters {
f := &me.LabelFilters[i]
if f.IsRegexp {
f.Value = escapeDots(f.Value)
}
}
})
return e
}
func escapeDots(s string) string {
dotsCount := strings.Count(s, ".")
if dotsCount <= 0 {
return s
}
result := make([]byte, 0, len(s)+2*dotsCount)
for i := 0; i < len(s); i++ {
if s[i] == '.' && (i == 0 || s[i-1] != '\\') && (i+1 == len(s) || i+1 < len(s) && s[i+1] != '*' && s[i+1] != '+' && s[i+1] != '{') {
// Escape a dot if the following conditions are met:
// - if it isn't escaped already, i.e. if there is no `\` char before the dot.
// - if there are no regexp modifiers such as '+', '*' or '{' after the dot.
result = append(result, '\\', '.')
} else {
result = append(result, s[i])
}
}
return string(result)
}
var parseCacheV = func() *parseCache { var parseCacheV = func() *parseCache {
pc := &parseCache{ pc := &parseCache{
m: make(map[string]*parseCacheValue), m: make(map[string]*parseCacheValue),

View file

@ -7,8 +7,46 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils" "github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage" "github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
"github.com/VictoriaMetrics/metricsql"
) )
func TestEscapeDots(t *testing.T) {
f := func(s, resultExpected string) {
t.Helper()
result := escapeDots(s)
if result != resultExpected {
t.Fatalf("unexpected result for escapeDots(%q); got\n%s\nwant\n%s", s, result, resultExpected)
}
}
f("", "")
f("a", "a")
f("foobar", "foobar")
f(".", `\.`)
f(".*", `.*`)
f(".+", `.+`)
f("..", `\.\.`)
f("foo.b.{2}ar..+baz.*", `foo\.b.{2}ar\..+baz.*`)
}
func TestEscapeDotsInRegexpLabelFilters(t *testing.T) {
f := func(s, resultExpected string) {
t.Helper()
e, err := metricsql.Parse(s)
if err != nil {
t.Fatalf("unexpected error in metricsql.Parse(%q): %s", s, err)
}
e = escapeDotsInRegexpLabelFilters(e)
result := e.AppendString(nil)
if string(result) != resultExpected {
t.Fatalf("unexpected result for escapeDotsInRegexpLabelFilters(%q);\ngot\n%s\nwant\n%s", s, result, resultExpected)
}
}
f("2", "2")
f(`foo.bar + 123`, `foo.bar + 123`)
f(`foo{bar=~"baz.xx.yyy"}`, `foo{bar=~"baz\\.xx\\.yyy"}`)
f(`foo(a.b{c="d.e",x=~"a.b.+[.a]",y!~"aaa.bb|cc.dd"}) + x.y(1,sum({x=~"aa.bb"}))`, `foo(a.b{c="d.e", x=~"a\\.b.+[\\.a]", y!~"aaa\\.bb|cc\\.dd"}) + x.y(1, sum({x=~"aa\\.bb"}))`)
}
func TestExecSuccess(t *testing.T) { func TestExecSuccess(t *testing.T) {
start := int64(1000e3) start := int64(1000e3)
end := int64(2000e3) end := int64(2000e3)

View file

@ -15,7 +15,7 @@ import (
"github.com/valyala/histogram" "github.com/valyala/histogram"
) )
var minStalenessInterval = flag.Duration("search.minStalenessInterval", 0, "The mimimum interval for staleness calculations. "+ var minStalenessInterval = flag.Duration("search.minStalenessInterval", 0, "The minimum interval for staleness calculations. "+
"This flag could be useful for removing gaps on graphs generated from time series with irregular intervals between samples. "+ "This flag could be useful for removing gaps on graphs generated from time series with irregular intervals between samples. "+
"See also '-search.maxStalenessInterval'") "See also '-search.maxStalenessInterval'")
@ -326,14 +326,32 @@ func getRollupFunc(funcName string) newRollupFunc {
} }
type rollupFuncArg struct { type rollupFuncArg struct {
prevValue float64 // The value preceding values if it fits staleness interval.
prevTimestamp int64 prevValue float64
values []float64
timestamps []int64
// The timestamp for prevValue.
prevTimestamp int64
// Values that fit window ending at currTimestamp.
values []float64
// Timestamps for values.
timestamps []int64
// Real value preceding values without restrictions on staleness interval.
realPrevValue float64
// Real value which goes after values.
realNextValue float64
// Current timestamp for rollup evaluation.
currTimestamp int64 currTimestamp int64
idx int
window int64 // Index for the currently evaluated point relative to time range for query evaluation.
idx int
// Time window for rollup calculations.
window int64
tsm *timeseriesMap tsm *timeseriesMap
} }
@ -507,7 +525,9 @@ func (rc *rollupConfig) doInternal(dstValues []float64, tsm *timeseriesMap, valu
ni := 0 ni := 0
nj := 0 nj := 0
stalenessInterval := int64(float64(scrapeInterval) * 0.9) stalenessInterval := int64(float64(scrapeInterval) * 0.9)
canDropLastSample := rc.CanDropLastSample // Do not drop trailing data points for queries which return 1 or 2 points (aka instant queries).
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/845
canDropLastSample := rc.CanDropLastSample && len(rc.Timestamps) > 2
for _, tEnd := range rc.Timestamps { for _, tEnd := range rc.Timestamps {
tStart := tEnd - window tStart := tEnd - window
ni = seekFirstTimestampIdxAfter(timestamps[i:], tStart, ni) ni = seekFirstTimestampIdxAfter(timestamps[i:], tStart, ni)
@ -526,17 +546,26 @@ func (rc *rollupConfig) doInternal(dstValues []float64, tsm *timeseriesMap, valu
} }
rfa.values = values[i:j] rfa.values = values[i:j]
rfa.timestamps = timestamps[i:j] rfa.timestamps = timestamps[i:j]
if canDropLastSample && j == len(timestamps) && j > 0 && (tEnd-timestamps[j-1] > stalenessInterval || i == j && len(timestamps) == 1) && rc.End-tEnd >= 2*rc.Step { if canDropLastSample && j == len(timestamps) && j > 0 && (tEnd-timestamps[j-1] > stalenessInterval || i == j && len(timestamps) == 1) {
// Drop trailing data points in the following cases: // Drop trailing data points in the following cases:
// - if the distance between the last raw sample and tEnd exceeds stalenessInterval // - if the distance between the last raw sample and tEnd exceeds stalenessInterval
// - if time series contains only a single raw sample // - if time series contains only a single raw sample
// This should prevent from double counting when a label changes in time series (for instance, // This should prevent from double counting when a label changes in time series (for instance,
// during new deployment in K8S). See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/748 // during new deployment in K8S). See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/748
// Do not drop trailing data points for instant queries. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/845
rfa.prevValue = nan rfa.prevValue = nan
rfa.values = nil rfa.values = nil
rfa.timestamps = nil rfa.timestamps = nil
} }
if i > 0 {
rfa.realPrevValue = values[i-1]
} else {
rfa.realPrevValue = nan
}
if j < len(values) {
rfa.realNextValue = values[j]
} else {
rfa.realNextValue = nan
}
rfa.currTimestamp = tEnd rfa.currTimestamp = tEnd
value := rc.Func(rfa) value := rc.Func(rfa)
rfa.idx++ rfa.idx++
@ -1243,6 +1272,12 @@ func rollupDelta(rfa *rollupFuncArg) float64 {
if len(values) == 0 { if len(values) == 0 {
return nan return nan
} }
if !math.IsNaN(rfa.realPrevValue) {
// Assume that the value didn't change during the current gap.
// This should fix high delta() and increase() values at the end of gaps.
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/894
return values[len(values)-1] - rfa.realPrevValue
}
// Assume that the previous non-existing value was 0 // Assume that the previous non-existing value was 0
// only if the first value doesn't exceed too much the delta with the next value. // only if the first value doesn't exceed too much the delta with the next value.
// //
@ -1255,6 +1290,8 @@ func rollupDelta(rfa *rollupFuncArg) float64 {
d := float64(10) d := float64(10)
if len(values) > 1 { if len(values) > 1 {
d = values[1] - values[0] d = values[1] - values[0]
} else if !math.IsNaN(rfa.realNextValue) {
d = rfa.realNextValue - values[0]
} }
if math.Abs(values[0]) < 10*(math.Abs(d)+1) { if math.Abs(values[0]) < 10*(math.Abs(d)+1) {
prevValue = 0 prevValue = 0

View file

@ -1103,11 +1103,13 @@ func testRowsEqual(t *testing.T, values []float64, timestamps []int64, valuesExp
} }
func TestRollupDelta(t *testing.T) { func TestRollupDelta(t *testing.T) {
f := func(prevValue float64, values []float64, resultExpected float64) { f := func(prevValue, realPrevValue, realNextValue float64, values []float64, resultExpected float64) {
t.Helper() t.Helper()
rfa := &rollupFuncArg{ rfa := &rollupFuncArg{
prevValue: prevValue, prevValue: prevValue,
values: values, values: values,
realPrevValue: realPrevValue,
realNextValue: realNextValue,
} }
result := rollupDelta(rfa) result := rollupDelta(rfa)
if math.IsNaN(result) { if math.IsNaN(result) {
@ -1120,22 +1122,36 @@ func TestRollupDelta(t *testing.T) {
t.Fatalf("unexpected result; got %v; want %v", result, resultExpected) t.Fatalf("unexpected result; got %v; want %v", result, resultExpected)
} }
} }
f(nan, nil, nan) f(nan, nan, nan, nil, nan)
// Small initial value // Small initial value
f(nan, []float64{1}, 1) f(nan, nan, nan, []float64{1}, 1)
f(nan, []float64{10}, 10) f(nan, nan, nan, []float64{10}, 10)
f(nan, []float64{100}, 100) f(nan, nan, nan, []float64{100}, 100)
f(nan, []float64{1, 2, 3}, 3) f(nan, nan, nan, []float64{1, 2, 3}, 3)
f(1, []float64{1, 2, 3}, 2) f(1, nan, nan, []float64{1, 2, 3}, 2)
f(nan, []float64{5, 6, 8}, 8) f(nan, nan, nan, []float64{5, 6, 8}, 8)
f(2, []float64{5, 6, 8}, 6) f(2, nan, nan, []float64{5, 6, 8}, 6)
// Too big initial value must be skipped. // Too big initial value must be skipped.
f(nan, []float64{1000}, 0) f(nan, nan, nan, []float64{1000}, 0)
f(nan, []float64{1000, 1001, 1002}, 2) f(nan, nan, nan, []float64{1000, 1001, 1002}, 2)
// Non-nan realPrevValue
f(nan, 900, nan, []float64{1000}, 100)
f(nan, 1000, nan, []float64{1000}, 0)
f(nan, 1100, nan, []float64{1000}, -100)
f(nan, 900, nan, []float64{1000, 1001, 1002}, 102)
// Small delta between realNextValue and values
f(nan, nan, 990, []float64{1000}, 0)
f(nan, nan, 1005, []float64{1000}, 0)
// Big delta between realNextValue and values
f(nan, nan, 800, []float64{1000}, 1000)
f(nan, nan, 1300, []float64{1000}, 1000)
// Empty values // Empty values
f(1, nil, 0) f(1, nan, nan, nil, 0)
f(100, nil, 0) f(100, nan, nan, nil, 0)
} }

View file

@ -24,6 +24,7 @@ var (
retentionPeriod = flagutil.NewDuration("retentionPeriod", 1, "Data with timestamps outside the retentionPeriod is automatically deleted") retentionPeriod = flagutil.NewDuration("retentionPeriod", 1, "Data with timestamps outside the retentionPeriod is automatically deleted")
snapshotAuthKey = flag.String("snapshotAuthKey", "", "authKey, which must be passed in query string to /snapshot* pages") snapshotAuthKey = flag.String("snapshotAuthKey", "", "authKey, which must be passed in query string to /snapshot* pages")
forceMergeAuthKey = flag.String("forceMergeAuthKey", "", "authKey, which must be passed in query string to /internal/force_merge pages") forceMergeAuthKey = flag.String("forceMergeAuthKey", "", "authKey, which must be passed in query string to /internal/force_merge pages")
forceFlushAuthKey = flag.String("forceFlushAuthKey", "", "authKey, which must be passed in query string to /internal/force_flush pages")
precisionBits = flag.Int("precisionBits", 64, "The number of precision bits to store per each value. Lower precision bits improves data compression at the cost of precision loss") precisionBits = flag.Int("precisionBits", 64, "The number of precision bits to store per each value. Lower precision bits improves data compression at the cost of precision loss")
@ -115,6 +116,14 @@ func AddRows(mrs []storage.MetricRow) error {
return err return err
} }
// RegisterMetricNames registers all the metrics from mrs in the storage.
func RegisterMetricNames(mrs []storage.MetricRow) error {
WG.Add(1)
err := Storage.RegisterMetricNames(mrs)
WG.Done()
return err
}
// DeleteMetrics deletes metrics matching tfss. // DeleteMetrics deletes metrics matching tfss.
// //
// Returns the number of deleted metrics. // Returns the number of deleted metrics.
@ -125,6 +134,14 @@ func DeleteMetrics(tfss []*storage.TagFilters) (int, error) {
return n, err return n, err
} }
// SearchMetricNames returns metric names for the given tfss on the given tr.
func SearchMetricNames(tfss []*storage.TagFilters, tr storage.TimeRange, maxMetrics int, deadline uint64) ([]storage.MetricName, error) {
WG.Add(1)
mns, err := Storage.SearchMetricNames(tfss, tr, maxMetrics, deadline)
WG.Done()
return mns, err
}
// SearchTagKeysOnTimeRange searches for tag keys on tr. // SearchTagKeysOnTimeRange searches for tag keys on tr.
func SearchTagKeysOnTimeRange(tr storage.TimeRange, maxTagKeys int, deadline uint64) ([]string, error) { func SearchTagKeysOnTimeRange(tr storage.TimeRange, maxTagKeys int, deadline uint64) ([]string, error) {
WG.Add(1) WG.Add(1)
@ -226,6 +243,16 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}() }()
return true return true
} }
if path == "/internal/force_flush" {
authKey := r.FormValue("authKey")
if authKey != *forceFlushAuthKey {
httpserver.Errorf(w, r, "invalid authKey %q. It must match the value from -forceFlushAuthKey command line flag", authKey)
return true
}
logger.Infof("flushing storage to make pending data available for reading")
Storage.DebugFlush()
return true
}
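// Illustrative request (not part of the original file), assuming the default -httpListenAddr:
//
//	curl 'http://localhost:8428/internal/force_flush?authKey=<value of -forceFlushAuthKey>'
//
// This makes pending (recently ingested) data available for reading, which is mainly
// useful in tests and debugging.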
prometheusCompatibleResponse := false prometheusCompatibleResponse := false
if path == "/api/v1/admin/tsdb/snapshot" { if path == "/api/v1/admin/tsdb/snapshot" {
// Handle Prometheus API - https://prometheus.io/docs/prometheus/latest/querying/api/#snapshot . // Handle Prometheus API - https://prometheus.io/docs/prometheus/latest/querying/api/#snapshot .
@ -244,7 +271,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
switch path { switch path {
case "/create": case "/create":
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
snapshotPath, err := Storage.CreateSnapshot()
if err != nil {
err = fmt.Errorf("cannot create snapshot: %w", err)
@ -258,7 +285,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
}
return true
case "/list":
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
snapshots, err := Storage.ListSnapshots()
if err != nil {
err = fmt.Errorf("cannot list snapshots: %w", err)
@ -275,7 +302,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
fmt.Fprintf(w, `]}`)
return true
case "/delete":
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
snapshotName := r.FormValue("snapshot")
if err := Storage.DeleteSnapshot(snapshotName); err != nil {
err = fmt.Errorf("cannot delete snapshot %q: %w", snapshotName, err)
@ -285,7 +312,7 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
fmt.Fprintf(w, `{"status":"ok"}`)
return true
case "/delete_all":
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json; charset=utf-8")
snapshots, err := Storage.ListSnapshots()
if err != nil {
err = fmt.Errorf("cannot list snapshots: %w", err)


@ -4,7 +4,7 @@ DOCKER_NAMESPACE := victoriametrics
ROOT_IMAGE ?= alpine:3.12.1
CERTS_IMAGE := alpine:3.12.1
GO_BUILDER_IMAGE := golang:1.15.4 GO_BUILDER_IMAGE := golang:1.15.5
BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr : _)
BASE_IMAGE := local/base:1.1.1-$(shell echo $(ROOT_IMAGE) | tr : _)-$(shell echo $(CERTS_IMAGE) | tr : _)


@ -2,8 +2,35 @@
# tip
# [v1.47.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.47.0)
* FEATURE: vmselect: return the original error from the `vmstorage` node in the query response if `-search.denyPartialResponse` is set.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/891
* FEATURE: vmselect: add `"isPartial":{true|false}` field in JSON output for `/api/v1/*` functions
from [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/). `"isPartial":true` is set if the response contains partial data
because a part of the `vmstorage` nodes was unavailable during query processing.
* FEATURE: improve performance for `/api/v1/series`, `/api/v1/labels` and `/api/v1/label/<labelName>/values` on time ranges exceeding one day.
* FEATURE: vmagent: reduce memory usage when service discovery detects a big number of scrape targets and the set of discovered targets changes over time.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825
* FEATURE: vmagent: add `-promscrape.dropOriginalLabels` command-line option, which can be used for reducing memory usage when scraping big number of targets. * FEATURE: vmagent: add `-promscrape.dropOriginalLabels` command-line option, which can be used for reducing memory usage when scraping big number of targets.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-724308361 for details. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/825#issuecomment-724308361 for details.
* FEATURE: vmalert: explicitly set extra labels to alert entities. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/870
* FEATURE: add `-search.treatDotsAsIsInRegexps` command-line flag, which can be used for automatic escaping of dots in regexp label filters used in queries.
For example, if `-search.treatDotsAsIsInRegexps` is set, then the query `foo{bar=~"aaa.bb.cc|dd.eee"}` is automatically converted to `foo{bar=~"aaa\\.bb\\.cc|dd\\.eee"}`.
This may be useful for querying Graphite data.
* FEATURE: consistently return text-based HTTP responses such as `text/plain` and `application/json` with `charset=utf-8`.
See https://github.com/VictoriaMetrics/VictoriaMetrics/pull/897
* FEATURE: update Go builder from v1.15.4 to v1.15.5. This should fix [these issues in Go](https://github.com/golang/go/issues?q=milestone%3AGo1.15.5+label%3ACherryPickApproved).
* FEATURE: added `/internal/force_flush` http handler for flushing recently ingested data from in-memory buffers to persistent storage.
See [troubleshooting docs](https://victoriametrics.github.io/#troubleshooting) for more details.
* FEATURE: added [Graphite Tags API](https://graphite.readthedocs.io/en/stable/tags.html) support.
See [these docs](https://victoriametrics.github.io/#graphite-tags-api-usage) for details.
* BUGFIX: do not return data points at the end of the selected time range for time series ending in the middle of the selected time range.
See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/887 and https://github.com/VictoriaMetrics/VictoriaMetrics/issues/845
* BUGFIX: remove spikes at the end of time series gaps for `increase()` or `delta()` functions. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/894
* BUGFIX: vminsert: properly return HTTP 503 status code when all the vmstorage nodes are unavailable. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/896
# [v1.46.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.46.0)


@ -181,7 +181,7 @@ or [an alternative dashboard for VictoriaMetrics cluster](https://grafana.com/gr
- `prometheus/api/v1/import/csv` - for importing arbitrary CSV data. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-csv-data) for details.
- `prometheus/api/v1/import/prometheus` - for importing data in Prometheus exposition format. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-data-in-prometheus-exposition-format) for details.
* URLs for [Prmetheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/): `http://<vmselect>:8481/select/<accountID>/prometheus/<suffix>`, where: * URLs for [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/): `http://<vmselect>:8481/select/<accountID>/prometheus/<suffix>`, where:
- `<accountID>` is an arbitrary number identifying data namespace for the query (aka tenant)
- `<suffix>` may have the following values:
- `api/v1/query` - performs [PromQL instant query](https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries).
@ -194,6 +194,8 @@ or [an alternative dashboard for VictoriaMetrics cluster](https://grafana.com/gr
- `api/v1/export/native` - exports raw data in native binary format. It may be imported into another VictoriaMetrics via `api/v1/import/native` (see above).
- `api/v1/export/csv` - exports data in CSV. It may be imported into another VictoriaMetrics via `api/v1/import/csv` (see above).
- `api/v1/status/tsdb` - for time series stats. See [these docs](https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats) for details.
VictoriaMetrics accepts optional `topN=N` and `date=YYYY-MM-DD` query args for this handler, where `N` is the number of top entries to return in the response
and `YYYY-MM-DD` is the date for collecting the stats. By default the stats are collected for the current day.
- `api/v1/status/active_queries` - for currently executed active queries. Note that every `vmselect` maintains an independent list of active queries,
which is returned in the response.
@ -203,6 +205,11 @@ or [an alternative dashboard for VictoriaMetrics cluster](https://grafana.com/gr
- `metrics/find` - searches Graphite metrics. See [these docs](https://graphite-api.readthedocs.io/en/latest/api.html#metrics-find).
- `metrics/expand` - expands Graphite metrics. See [these docs](https://graphite-api.readthedocs.io/en/latest/api.html#metrics-expand).
- `metrics/index.json` - returns all the metric names. See [these docs](https://graphite-api.readthedocs.io/en/latest/api.html#metrics-index-json).
- `tags` - returns tag names. See [these docs](https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags).
- `tags/<tag_name>` - returns tag values for the given `<tag_name>`. See [these docs](https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags).
- `tags/findSeries` - returns series matching the given `expr`. See [these docs](https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags).
- `tags/autoComplete/tags` - returns tags matching the given `tagPrefix` and/or `expr`. See [these docs](https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support).
- `tags/autoComplete/values` - returns tag values matching the given `valuePrefix` and/or `expr`. See [these docs](https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support).
* URL for time series deletion: `http://<vmselect>:8481/delete/<accountID>/prometheus/api/v1/admin/tsdb/delete_series?match[]=<timeseries_selector_for_delete>`.
Note that the `delete_series` handler should be used only in exceptional cases such as deletion of accidentally ingested incorrect time series. It shouldn't


@ -2,7 +2,7 @@ Release process guidance
## Release version and Docker images
0. Document all the changes for new release in [CHANGELOG.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/CHANGELOG.md). 0. Document all the changes for new release in [CHANGELOG.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/docs/CHANGELOG.md).
1. Create release tag with `git tag v1.xx.y`.
2. Run `make release` for creating `*.tar.gz` release archive with the corresponding `_checksums.txt` inside `bin` directory.
3. Run `make publish` for creating and publishing Docker images.


@ -10,7 +10,7 @@
## VictoriaMetrics
VictoriaMetrics is fast, cost-effective and scalable time-series database. VictoriaMetrics is fast, cost-effective and scalable monitoring solution and time series database.
It is available in [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases),
[docker images](https://hub.docker.com/r/victoriametrics/victoria-metrics/) and
@ -21,11 +21,13 @@ Cluster version is available [here](https://github.com/VictoriaMetrics/VictoriaM
See our [Wiki](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki) for additional documentation.
[Contact us](mailto:info@victoriametrics.com) if you need paid enterprise support for VictoriaMetrics.
See [features available for enterprise customers](https://github.com/VictoriaMetrics/VictoriaMetrics/issues?q=is%3Aissue+label%3Aenterprise). See [features available for enterprise customers](https://victoriametrics.com/enterprise.html).
## Case studies and talks
Click on a link in order to read the corresponding case study
* [Adidas](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#adidas)
* [CERN](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#cern)
* [COLOPL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies#colopl)
@ -46,8 +48,8 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
* VictoriaMetrics can be used as long-term storage for Prometheus or for [vmagent](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/app/vmagent/README.md).
See [these docs](#prometheus-setup) for details.
* Supports [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/), so it can be used as Prometheus drop-in replacement in Grafana.
VictoriaMetrics implements [MetricsQL](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/MetricsQL) query language, which is inspired by PromQL. MetricsQL is backwards-compatible with PromQL.
* Supports global query view. Multiple Prometheus instances may write data into VictoriaMetrics. Later this data may be used in a single query. * Supports global query view. Multiple Prometheus instances or any other data sources may write data into VictoriaMetrics. Later this data may be queried in a single query.
* High performance and good scalability for both [inserts](https://medium.com/@valyala/high-cardinality-tsdb-benchmarks-victoriametrics-vs-timescaledb-vs-influxdb-13e6ee64dd6b)
and [selects](https://medium.com/@valyala/when-size-matters-benchmarking-victoriametrics-vs-timescale-and-influxdb-6035811952d4).
[Outperforms InfluxDB and TimescaleDB by up to 20x](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
@ -104,7 +106,9 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
* [How to send data from OpenTSDB-compatible agents](#how-to-send-data-from-opentsdb-compatible-agents)
* [Prometheus querying API usage](#prometheus-querying-api-usage)
* [Prometheus querying API enhancements](#prometheus-querying-api-enhancements)
* [Graphite Metrics API usage](#graphite-metrics-api-usage) * [Graphite API usage](#graphite-api-usage)
* [Graphite Metrics API usage](#graphite-metrics-api-usage)
* [Graphite Tags API usage](#graphite-tags-api-usage)
* [How to build from sources](#how-to-build-from-sources)
* [Development build](#development-build)
* [Production build](#production-build)
@ -410,6 +414,7 @@ Data sent to VictoriaMetrics via `Graphite plaintext protocol` may be read via t
* [Prometheus querying API](#prometheus-querying-api-usage)
* Metric names can be explored via [Graphite metrics API](#graphite-metrics-api-usage)
* Tags can be explored via [Graphite tags API](#graphite-tags-api-usage)
* [go-graphite/carbonapi](https://github.com/go-graphite/carbonapi/blob/master/cmd/carbonapi/carbonapi.example.prometheus.yaml)
### How to send data from OpenTSDB-compatible agents
@ -495,7 +500,9 @@ VictoriaMetrics supports the following handlers from [Prometheus querying API](h
* [/api/v1/series](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers)
* [/api/v1/labels](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names)
* [/api/v1/label/.../values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values)
* [/api/v1/status/tsdb](https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats). VictoriaMetrics accepts optional `topN=N` and `date=YYYY-MM-DD`
query args for this handler, where `N` is the number of top entries to return in the response and `YYYY-MM-DD` is the date for collecting the stats.
By default top 10 entries are returned and the stats are collected for the current day. See the example after this list.
* [/api/v1/targets](https://prometheus.io/docs/prometheus/latest/querying/api/#targets) - see [these docs](#how-to-scrape-prometheus-exporters-such-as-node-exporter) for more details.
These handlers can be queried from Prometheus-compatible clients such as Grafana or curl.
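For example, the following request asks for the top 5 entries in the TSDB stats for a given day. This is a minimal sketch; the instance address and the chosen `topN` and `date` values are placeholders:

```bash
# Return the top 5 entries in the TSDB stats collected for 2020-11-16.
curl 'http://victoriametrics:8428/api/v1/status/tsdb?topN=5&date=2020-11-16'
```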
@ -522,7 +529,15 @@ Additionally VictoriaMetrics provides the following handlers:
* `/api/v1/status/active_queries` - it returns a list of currently running queries.
### Graphite Metrics API usage ### Graphite API usage
VictoriaMetrics supports the following Graphite APIs:
* Metrics API - see [these docs](#graphite-metrics-api-usage).
* Tags API - see [these docs](#graphite-tags-api-usage).
#### Graphite Metrics API usage
VictoriaMetrics supports the following handlers from [Graphite Metrics API](https://graphite-api.readthedocs.io/en/latest/api.html#the-metrics-api):
@ -536,6 +551,19 @@ VictoriaMetrics accepts the following additional query args at `/metrics/find` a
that start with `node_`. By default `delimiter=.`.
#### Graphite Tags API usage
VictoriaMetrics supports the following handlers from [Graphite Tags API](https://graphite.readthedocs.io/en/stable/tags.html):
* [/tags/tagSeries](https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb)
* [/tags/tagMultiSeries](https://graphite.readthedocs.io/en/stable/tags.html#adding-series-to-the-tagdb)
* [/tags](https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags)
* [/tags/tag_name](https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags)
* [/tags/findSeries](https://graphite.readthedocs.io/en/stable/tags.html#exploring-tags)
* [/tags/autoComplete/tags](https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support)
* [/tags/autoComplete/values](https://graphite.readthedocs.io/en/stable/tags.html#auto-complete-support)
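A minimal sketch of querying the Tags API with `curl`; the instance address and the `tagPrefix` value are placeholders:

```bash
# List tag names known to VictoriaMetrics.
curl 'http://victoriametrics:8428/tags'

# Auto-complete tag names starting with the given prefix.
curl 'http://victoriametrics:8428/tags/autoComplete/tags?tagPrefix=env'
```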
### How to build from sources
We recommend using either [binary releases](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) or
@ -695,7 +723,16 @@ VictoriaMetrics provides the following handlers for exporting data:
Send a request to `http://<victoriametrics-addr>:8428/api/v1/export/native?match[]=<timeseries_selector_for_export>`,
where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to export. Use `{__name__!=""}` selector for fetching all the time series. for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time series.
On large databases you may experience problems with the limit on unique time series (the default value is 300000). In this case you need to adjust the `-search.maxUniqueTimeseries` parameter:
```bash
# count unique timeseries in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
# relaunch VictoriaMetrics with -search.maxUniqueTimeseries set to a value greater than the one returned by the previous command
```
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.
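For example, a sketch of exporting a single metric for one day into a local file; the metric name, time range and instance address are placeholders:

```bash
# Export raw data for node_cpu_seconds_total for a single day in native binary format.
curl -G 'http://victoriametrics:8428/api/v1/export/native' \
  --data-urlencode 'match[]=node_cpu_seconds_total' \
  -d 'start=2020-11-15T00:00:00Z' -d 'end=2020-11-16T00:00:00Z' > exported.bin
```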
@ -1196,13 +1233,18 @@ VictoriaMetrics also exposes currently running queries with their execution time
* It is recommended inspecting logs during troubleshooting, since they may contain useful information.
* VictoriaMetrics buffers incoming data in memory for up to a few seconds before flushing it to persistent storage.
This may lead to the following "issues":
* Data becomes available for querying only a few seconds after it has been inserted. It is possible to flush in-memory buffers to persistent storage
by requesting the `/internal/force_flush` HTTP handler (see the example after this list).
* The last few seconds of inserted data may be lost on unclean shutdown (e.g. OOM, `kill -9` or hardware reset).
See [this article for technical details](https://valyala.medium.com/wal-usage-looks-broken-in-modern-time-series-databases-b62a627ab704).
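A minimal sketch of such a request, assuming VictoriaMetrics listens on the default `8428` port; if `-forceFlushAuthKey` is set, the matching `authKey` query arg must be passed as well:

```bash
# Flush recently ingested data from in-memory buffers to persistent storage,
# so it becomes visible to queries immediately.
curl 'http://victoriametrics:8428/internal/force_flush'
```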
* If VictoriaMetrics works slowly and eats more than a CPU core per 100K ingested data points per second,
then it is likely you have too many active time series for the current amount of RAM.
VictoriaMetrics [exposes](#monitoring) `vm_slow_*` metrics, which could be used as an indicator of low amounts of RAM.
It is recommended increasing the amount of RAM on the node with VictoriaMetrics in order to improve
ingestion and query performance in this case.
Another option is to increase the `-memory.allowedPercent` command-line flag value. Be careful with this
option, since a too big value for `-memory.allowedPercent` may result in high I/O usage.
* VictoriaMetrics prioritizes data ingestion over data querying. So if it doesn't have enough resources for data ingestion,
then data querying may slow down significantly.
@ -1217,9 +1259,9 @@ VictoriaMetrics also exposes currently running queries with their execution time
which would start background merge if they had more free disk space.
* If VictoriaMetrics doesn't work because certain parts are corrupted due to disk errors,
then just remove directories with broken parts. This will recover VictoriaMetrics at the cost then just remove directories with broken parts. It is safe removing subdirectories under `<-storageDataPath>/data/{big,small}/YYYY_MM` directories
of data loss stored in the broken parts. In the future, `vmrecover` tool will be created when VictoriaMetrics isn't running. This recovers VictoriaMetrics at the cost of data loss stored in the deleted broken parts.
for automatic recovering from such errors. In the future, `vmrecover` tool will be created for automatic recovering from such errors.
* If you see gaps on the graphs, try resetting the cache by sending request to `/internal/resetRollupResultCache`.
If this removes gaps on the graphs, then it is likely data with timestamps older than `-search.cacheTimestampOffset`
@ -1241,6 +1283,11 @@ VictoriaMetrics also exposes currently running queries with their execution time
This prevents from ingesting metrics with too many labels. It is recommended [monitoring](#monitoring) `vm_metrics_with_dropped_labels_total`
metric in order to determine whether `-maxLabelsPerTimeseries` must be adjusted for your workload.
* If you store Graphite metrics like `foo.bar.baz` in VictoriaMetrics, then the `-search.treatDotsAsIsInRegexps` command-line flag could be useful.
By default `.` chars in regexps match any char. If you need matching only dots, then `\\.` must be used in regexp filters.
When the `-search.treatDotsAsIsInRegexps` option is enabled, dots in regexps are automatically escaped in order to match only dots instead of arbitrary chars.
This may significantly increase performance when locating time series for the given label filters. See the example below.
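A sketch of how this looks in practice; the binary name, instance address and the metric/label names are placeholders for illustration:

```bash
# Start VictoriaMetrics with automatic escaping of dots in regexp label filters.
./victoria-metrics-prod -search.treatDotsAsIsInRegexps

# With the flag enabled the filter below is treated as bar=~"aaa\\.bb\\.cc|dd\\.eee",
# i.e. the dots match only literal dots instead of arbitrary chars.
curl -G 'http://victoriametrics:8428/api/v1/query' --data-urlencode 'query=foo{bar=~"aaa.bb.cc|dd.eee"}'
```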
* VictoriaMetrics ignores `NaN` values during data ingestion.


@ -63,6 +63,22 @@ Then send Influx data to `http://vmagent-host:8429`. See [these docs](https://gi
Pass `-help` to `vmagent` in order to see the full list of supported command-line flags with their descriptions.
### Configuration update
`vmagent` should be restarted in order to update config options set via command-line args.
`vmagent` supports multiple approaches for reloading configs from updated config files such as `-promscrape.config`, `-remoteWrite.relabelConfig` and `-remoteWrite.urlRelabelConfig`:
* Sending `SIGHUP` signal to `vmagent` process:
```bash
kill -SIGHUP `pidof vmagent`
```
* Sending an HTTP request to the `http://vmagent:8429/-/reload` endpoint (see the example below).
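A minimal sketch, assuming `vmagent` listens on the default `8429` port:

```bash
# Ask vmagent to re-read the config files without restarting the process.
curl 'http://vmagent:8429/-/reload'
```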
There is also the `-promscrape.configCheckInterval` command-line option, which can be used for automatically reloading configs from the updated `-promscrape.config` file.
### Use cases
@ -197,6 +213,7 @@ The relabeling can be defined in the following places:
Read more about relabeling in the following articles:
* [How to use Relabeling in Prometheus and VictoriaMetrics](https://valyala.medium.com/how-to-use-relabeling-in-prometheus-and-victoriametrics-8b90fc22c4b2)
* [Life of a label](https://www.robustperception.io/life-of-a-label)
* [Discarding targets and timeseries with relabeling](https://www.robustperception.io/relabelling-can-discard-targets-timeseries-and-alerts)
* [Dropping labels at scrape time](https://www.robustperception.io/dropping-metrics-at-scrape-time-with-prometheus)

go.mod

@ -1,7 +1,7 @@
module github.com/VictoriaMetrics/VictoriaMetrics
require (
cloud.google.com/go v0.71.0 // indirect cloud.google.com/go v0.72.0 // indirect
cloud.google.com/go/storage v1.12.0
github.com/VictoriaMetrics/fastcache v1.5.7
@ -10,12 +10,13 @@ require (
github.com/VictoriaMetrics/fasthttp v1.0.7
github.com/VictoriaMetrics/metrics v1.12.3
github.com/VictoriaMetrics/metricsql v0.7.2
github.com/aws/aws-sdk-go v1.35.23 github.com/aws/aws-sdk-go v1.35.28
github.com/cespare/xxhash/v2 v2.1.1
github.com/go-kit/kit v0.10.0
github.com/golang/snappy v0.0.2
github.com/klauspost/compress v1.11.2 github.com/klauspost/compress v1.11.3
github.com/prometheus/client_golang v1.8.0 // indirect
github.com/prometheus/common v0.15.0 // indirect
github.com/prometheus/prometheus v1.8.2-0.20201029103703-63be30dceed9
github.com/valyala/fastjson v1.6.1
github.com/valyala/fastrand v1.0.0
@ -23,15 +24,13 @@ require (
github.com/valyala/gozstd v1.8.3
github.com/valyala/histogram v1.1.2
github.com/valyala/quicktemplate v1.6.3
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102 // indirect golang.org/x/net v0.0.0-20201110031124-69a78807bb2b // indirect
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43 golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58
golang.org/x/sys v0.0.0-20201106081118-db71ae66460a golang.org/x/sys v0.0.0-20201116161645-c061ba923fbb
golang.org/x/text v0.3.4 // indirect golang.org/x/tools v0.0.0-20201116182000-1d699438d2cf // indirect
golang.org/x/tools v0.0.0-20201105220310-78b158585360 // indirect
google.golang.org/api v0.35.0
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20201106154455-f9bfe239b0ba // indirect google.golang.org/genproto v0.0.0-20201116144945-7adebfbe6a3f // indirect
google.golang.org/grpc v1.33.2 // indirect
gopkg.in/yaml.v2 v2.3.0
)

go.sum

@ -16,8 +16,8 @@ cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZ
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc= cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY= cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go v0.66.0/go.mod h1:dgqGAjKCDxyhGTtC9dAREQGUJpkceNm1yt590Qno0Ko= cloud.google.com/go v0.66.0/go.mod h1:dgqGAjKCDxyhGTtC9dAREQGUJpkceNm1yt590Qno0Ko=
cloud.google.com/go v0.71.0 h1:2ha722Z08cmRa0orJrzBaszYQcLbLFcsZHsGSj/kIF4= cloud.google.com/go v0.72.0 h1:eWRCuwubtDrCJG0oSUMgnsbD4CmPFQF2ei4OFbXvwww=
cloud.google.com/go v0.71.0/go.mod h1:qZfY4Y7AEIQwG/fQYD3xrxLNkQZ0Xzf3HGeqCkA6LVM= cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc= cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
@ -116,8 +116,8 @@ github.com/asaskevich/govalidator v0.0.0-20200108200545-475eaeb16496/go.mod h1:o
github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU= github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.35.5/go.mod h1:tlPOdRjfxPBpNIwqDj61rmsnA85v9jc0Ps9+muhnW+k= github.com/aws/aws-sdk-go v1.35.5/go.mod h1:tlPOdRjfxPBpNIwqDj61rmsnA85v9jc0Ps9+muhnW+k=
github.com/aws/aws-sdk-go v1.35.23 h1:SCP0d0XvyJTDmfnHEQPvBaYi3kea1VNUo7uQmkVgFts= github.com/aws/aws-sdk-go v1.35.28 h1:S2LuRnfC8X05zgZLC8gy/Sb82TGv2Cpytzbzz7tkeHc=
github.com/aws/aws-sdk-go v1.35.23/go.mod h1:tlPOdRjfxPBpNIwqDj61rmsnA85v9jc0Ps9+muhnW+k= github.com/aws/aws-sdk-go v1.35.28/go.mod h1:tlPOdRjfxPBpNIwqDj61rmsnA85v9jc0Ps9+muhnW+k=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g= github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@ -464,8 +464,8 @@ github.com/klauspost/compress v1.4.0/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0
github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.10.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs= github.com/klauspost/compress v1.10.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.0/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs= github.com/klauspost/compress v1.11.0/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.2 h1:MiK62aErc3gIiVEtyzKfeOHgW7atJb5g/KNX5m3c2nQ= github.com/klauspost/compress v1.11.3 h1:dB4Bn0tN3wdCzQxnS8r06kV74qN/TAfaIS0bVE8h3jc=
github.com/klauspost/compress v1.11.2/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs= github.com/klauspost/compress v1.11.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek= github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg= github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg=
github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs= github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
@ -614,8 +614,9 @@ github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA= github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4= github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo= github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.14.0 h1:RHRyE8UocrbjU+6UvRzwi6HjiDfxrrBU91TtbKzkGp4=
github.com/prometheus/common v0.14.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s= github.com/prometheus/common v0.14.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.15.0 h1:4fgOnadei3EZvgRwxJ7RMpG1k1pOZth5Pc13tyspaKM=
github.com/prometheus/common v0.15.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
@ -837,16 +838,17 @@ golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81R
golang.org/x/net v0.0.0-20200904194848-62affa334b73/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20200904194848-62affa334b73/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201026091529-146b70c837a4/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102 h1:42cLlJJdEh+ySyeUUbEQ5bsTiq8voBeTuweGVkY6Puw=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b h1:uwuIcX0g4Yl1NC5XAz37xsr2lTtcqevgzYNVt49waME=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43 h1:ld7aEMNHoBnnDAX15v1T6z31v8HwR2A9FYOuAhWqkwc=
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58 h1:Mj83v+wSRNEar42a/MQgxk9X42TdEmrOl9i+y8WbxLo=
golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -920,8 +922,8 @@ golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201008064518-c1f3e3309c71/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201008064518-c1f3e3309c71/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201015000850-e3ed0017c211/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201015000850-e3ed0017c211/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201106081118-db71ae66460a h1:ALUFBKlIyeY7y5ZgPJmblk/vKz+zBQSnNiPkt41sgeg= golang.org/x/sys v0.0.0-20201116161645-c061ba923fbb h1:+EHGEcgeA7ESswi5i4ojbo7sRzlz7vWoxFGcMuEZtu8=
golang.org/x/sys v0.0.0-20201106081118-db71ae66460a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20201116161645-c061ba923fbb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@ -1001,9 +1003,9 @@ golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82u
golang.org/x/tools v0.0.0-20200915173823-2db8f0ff891c/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU= golang.org/x/tools v0.0.0-20200915173823-2db8f0ff891c/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20200918232735-d647fc253266/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU= golang.org/x/tools v0.0.0-20200918232735-d647fc253266/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201020161133-226fd2f889ca/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU= golang.org/x/tools v0.0.0-20201020161133-226fd2f889ca/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201030143252-cf7a54d06671/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201105220310-78b158585360 h1:/9CzsU8hOpnSUCtem1vfWNgsVeCTgkMdx+VE5YIYxnU= golang.org/x/tools v0.0.0-20201116182000-1d699438d2cf h1:sDQg8i3k24bqfv1V4MugOhRCHMRzkrHdLJX5QraRSt4=
golang.org/x/tools v0.0.0-20201105220310-78b158585360/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20201116182000-1d699438d2cf/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@ -1034,8 +1036,6 @@ google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSr
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc= google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/api v0.31.0/go.mod h1:CL+9IBCa2WWU6gRuBWaKqGWLFFwbEUXkfeMkHLQWYWo= google.golang.org/api v0.31.0/go.mod h1:CL+9IBCa2WWU6gRuBWaKqGWLFFwbEUXkfeMkHLQWYWo=
google.golang.org/api v0.32.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg= google.golang.org/api v0.32.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/api v0.34.0 h1:k40adF3uR+6x/+hO5Dh4ZFUqFp67vxvbpafFiJxl10A=
google.golang.org/api v0.34.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/api v0.35.0 h1:TBCmTTxUrRDA1iTctnK/fIeitxIZ+TQuaf0j29fmCGo= google.golang.org/api v0.35.0 h1:TBCmTTxUrRDA1iTctnK/fIeitxIZ+TQuaf0j29fmCGo=
google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg= google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
@ -1084,9 +1084,9 @@ google.golang.org/genproto v0.0.0-20200831141814-d751682dd103/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200914193844-75d14daec038/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200914193844-75d14daec038/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200921151605-7abf4a1a14d5/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200921151605-7abf4a1a14d5/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201030142918-24207fddd1c3/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201106154455-f9bfe239b0ba h1:HocWKLuilwaaLY56cHV38rw84wJ1nscA0Rs7OnO8mm8= google.golang.org/genproto v0.0.0-20201116144945-7adebfbe6a3f h1:YZKfGrT39pgYIg+3cfyIdK1z4VLjUPVboS1Ob49DyDA=
google.golang.org/genproto v0.0.0-20201106154455-f9bfe239b0ba/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201116144945-7adebfbe6a3f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs= google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM= google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
@ -1106,8 +1106,6 @@ google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.32.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= google.golang.org/grpc v1.32.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.1 h1:DGeFlSan2f+WEtCERJ4J9GJWk15TxUi8QGagfI87Xyc=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.33.2 h1:EQyQC3sa8M+p6Ulc8yy9SWSS2GVwyRc83gAbG8lrl4o= google.golang.org/grpc v1.33.2 h1:EQyQC3sa8M+p6Ulc8yy9SWSS2GVwyRc83gAbG8lrl4o=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc= google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=


@ -208,7 +208,7 @@ func handlerWrapper(s *server, w http.ResponseWriter, r *http.Request, rh Reques
r.URL.Path = path
switch r.URL.Path {
case "/health":
w.Header().Set("Content-Type", "text/plain") w.Header().Set("Content-Type", "text/plain; charset=utf-8")
deadline := atomic.LoadInt64(&s.shutdownDelayDeadline)
if deadline <= 0 {
w.Write([]byte("OK"))
@ -244,7 +244,7 @@ func handlerWrapper(s *server, w http.ResponseWriter, r *http.Request, rh Reques
return
}
startTime := time.Now()
w.Header().Set("Content-Type", "text/plain") w.Header().Set("Content-Type", "text/plain; charset=utf-8")
WritePrometheusMetrics(w)
metricsHandlerDuration.UpdateDuration(startTime)
return
@ -395,7 +395,7 @@ func (zrw *gzipResponseWriter) Write(p []byte) (int, error) {
if h.Get("Content-Type") == "" {
// Disable auto-detection of content-type, since it
// is incorrectly detected after the compression.
h.Set("Content-Type", "text/html") h.Set("Content-Type", "text/html; charset=utf-8")
}
}
zrw.writeHeader()


@ -13,18 +13,18 @@ import (
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tls_config
type TLSConfig struct {
CAFile string `yaml:"ca_file"` CAFile string `yaml:"ca_file,omitempty"`
CertFile string `yaml:"cert_file"` CertFile string `yaml:"cert_file,omitempty"`
KeyFile string `yaml:"key_file"` KeyFile string `yaml:"key_file,omitempty"`
ServerName string `yaml:"server_name"` ServerName string `yaml:"server_name,omitempty"`
InsecureSkipVerify bool `yaml:"insecure_skip_verify"` InsecureSkipVerify bool `yaml:"insecure_skip_verify,omitempty"`
}
// BasicAuthConfig represents basic auth config.
type BasicAuthConfig struct {
Username string `yaml:"username"`
Password string `yaml:"password"` Password string `yaml:"password,omitempty"`
PasswordFile string `yaml:"password_file"` PasswordFile string `yaml:"password_file,omitempty"`
}
// Config is auth config.


@ -14,13 +14,13 @@ import (
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
type RelabelConfig struct {
SourceLabels []string `yaml:"source_labels"` SourceLabels []string `yaml:"source_labels,flow,omitempty"`
Separator *string `yaml:"separator"` Separator *string `yaml:"separator,omitempty"`
TargetLabel string `yaml:"target_label"` TargetLabel string `yaml:"target_label,omitempty"`
Regex *string `yaml:"regex"` Regex *string `yaml:"regex,omitempty"`
Modulus uint64 `yaml:"modulus"` Modulus uint64 `yaml:"modulus,omitempty"`
Replacement *string `yaml:"replacement"` Replacement *string `yaml:"replacement,omitempty"`
Action string `yaml:"action"` Action string `yaml:"action,omitempty"`
}
// LoadRelabelConfigs loads relabel configs from the given path.


@ -50,9 +50,9 @@ type Config struct {
//
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/
type GlobalConfig struct {
ScrapeInterval time.Duration `yaml:"scrape_interval"` ScrapeInterval time.Duration `yaml:"scrape_interval,omitempty"`
ScrapeTimeout time.Duration `yaml:"scrape_timeout"` ScrapeTimeout time.Duration `yaml:"scrape_timeout,omitempty"`
ExternalLabels map[string]string `yaml:"external_labels"` ExternalLabels map[string]string `yaml:"external_labels,omitempty"`
} }
// ScrapeConfig represents essential parts for `scrape_config` section of Prometheus config. // ScrapeConfig represents essential parts for `scrape_config` section of Prometheus config.
@ -60,34 +60,34 @@ type GlobalConfig struct {
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
type ScrapeConfig struct { type ScrapeConfig struct {
JobName string `yaml:"job_name"` JobName string `yaml:"job_name"`
ScrapeInterval time.Duration `yaml:"scrape_interval"` ScrapeInterval time.Duration `yaml:"scrape_interval,omitempty"`
ScrapeTimeout time.Duration `yaml:"scrape_timeout"` ScrapeTimeout time.Duration `yaml:"scrape_timeout,omitempty"`
MetricsPath string `yaml:"metrics_path"` MetricsPath string `yaml:"metrics_path,omitempty"`
HonorLabels bool `yaml:"honor_labels"` HonorLabels bool `yaml:"honor_labels,omitempty"`
HonorTimestamps bool `yaml:"honor_timestamps"` HonorTimestamps bool `yaml:"honor_timestamps,omitempty"`
Scheme string `yaml:"scheme"` Scheme string `yaml:"scheme,omitempty"`
Params map[string][]string `yaml:"params"` Params map[string][]string `yaml:"params,omitempty"`
BasicAuth *promauth.BasicAuthConfig `yaml:"basic_auth"` BasicAuth *promauth.BasicAuthConfig `yaml:"basic_auth,omitempty"`
BearerToken string `yaml:"bearer_token"` BearerToken string `yaml:"bearer_token,omitempty"`
BearerTokenFile string `yaml:"bearer_token_file"` BearerTokenFile string `yaml:"bearer_token_file,omitempty"`
TLSConfig *promauth.TLSConfig `yaml:"tls_config"` TLSConfig *promauth.TLSConfig `yaml:"tls_config,omitempty"`
StaticConfigs []StaticConfig `yaml:"static_configs"` StaticConfigs []StaticConfig `yaml:"static_configs,omitempty"`
FileSDConfigs []FileSDConfig `yaml:"file_sd_configs"` FileSDConfigs []FileSDConfig `yaml:"file_sd_configs,omitempty"`
KubernetesSDConfigs []kubernetes.SDConfig `yaml:"kubernetes_sd_configs"` KubernetesSDConfigs []kubernetes.SDConfig `yaml:"kubernetes_sd_configs,omitempty"`
OpenStackSDConfigs []openstack.SDConfig `yaml:"openstack_sd_configs"` OpenStackSDConfigs []openstack.SDConfig `yaml:"openstack_sd_configs,omitempty"`
ConsulSDConfigs []consul.SDConfig `yaml:"consul_sd_configs"` ConsulSDConfigs []consul.SDConfig `yaml:"consul_sd_configs,omitempty"`
DockerSwarmConfigs []dockerswarm.SDConfig `yaml:"dockerswarm_sd_configs"` DockerSwarmConfigs []dockerswarm.SDConfig `yaml:"dockerswarm_sd_configs,omitempty"`
DNSSDConfigs []dns.SDConfig `yaml:"dns_sd_configs"` DNSSDConfigs []dns.SDConfig `yaml:"dns_sd_configs,omitempty"`
EC2SDConfigs []ec2.SDConfig `yaml:"ec2_sd_configs"` EC2SDConfigs []ec2.SDConfig `yaml:"ec2_sd_configs,omitempty"`
GCESDConfigs []gce.SDConfig `yaml:"gce_sd_configs"` GCESDConfigs []gce.SDConfig `yaml:"gce_sd_configs,omitempty"`
RelabelConfigs []promrelabel.RelabelConfig `yaml:"relabel_configs"` RelabelConfigs []promrelabel.RelabelConfig `yaml:"relabel_configs,omitempty"`
MetricRelabelConfigs []promrelabel.RelabelConfig `yaml:"metric_relabel_configs"` MetricRelabelConfigs []promrelabel.RelabelConfig `yaml:"metric_relabel_configs,omitempty"`
SampleLimit int `yaml:"sample_limit"` SampleLimit int `yaml:"sample_limit,omitempty"`
// These options are supported only by lib/promscrape. // These options are supported only by lib/promscrape.
DisableCompression bool `yaml:"disable_compression"` DisableCompression bool `yaml:"disable_compression,omitempty"`
DisableKeepAlive bool `yaml:"disable_keepalive"` DisableKeepAlive bool `yaml:"disable_keepalive,omitempty"`
StreamParse bool `yaml:"stream_parse"` StreamParse bool `yaml:"stream_parse,omitempty"`
// This is set in loadConfig // This is set in loadConfig
swc *scrapeWorkConfig swc *scrapeWorkConfig
@ -106,7 +106,7 @@ type FileSDConfig struct {
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#static_config
type StaticConfig struct { type StaticConfig struct {
Targets []string `yaml:"targets"` Targets []string `yaml:"targets"`
Labels map[string]string `yaml:"labels"` Labels map[string]string `yaml:"labels,omitempty"`
} }
func loadStaticConfigs(path string) ([]StaticConfig, error) { func loadStaticConfigs(path string) ([]StaticConfig, error) {
@ -184,7 +184,7 @@ func getSWSByJob(sws []ScrapeWork) map[string][]ScrapeWork {
// getKubernetesSDScrapeWork returns `kubernetes_sd_configs` ScrapeWork from cfg. // getKubernetesSDScrapeWork returns `kubernetes_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getKubernetesSDScrapeWork(prev []ScrapeWork) []ScrapeWork { func (cfg *Config) getKubernetesSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
var dst []ScrapeWork dst := make([]ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i] sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
@ -212,7 +212,7 @@ func (cfg *Config) getKubernetesSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
// getOpenStackSDScrapeWork returns `openstack_sd_configs` ScrapeWork from cfg. // getOpenStackSDScrapeWork returns `openstack_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getOpenStackSDScrapeWork(prev []ScrapeWork) []ScrapeWork { func (cfg *Config) getOpenStackSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
var dst []ScrapeWork dst := make([]ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i] sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
@ -240,7 +240,7 @@ func (cfg *Config) getOpenStackSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
// getDockerSwarmSDScrapeWork returns `dockerswarm_sd_configs` ScrapeWork from cfg. // getDockerSwarmSDScrapeWork returns `dockerswarm_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getDockerSwarmSDScrapeWork(prev []ScrapeWork) []ScrapeWork { func (cfg *Config) getDockerSwarmSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
var dst []ScrapeWork dst := make([]ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i] sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
@ -268,7 +268,7 @@ func (cfg *Config) getDockerSwarmSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
// getConsulSDScrapeWork returns `consul_sd_configs` ScrapeWork from cfg. // getConsulSDScrapeWork returns `consul_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getConsulSDScrapeWork(prev []ScrapeWork) []ScrapeWork { func (cfg *Config) getConsulSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
var dst []ScrapeWork dst := make([]ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i] sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
@ -296,7 +296,7 @@ func (cfg *Config) getConsulSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
// getDNSSDScrapeWork returns `dns_sd_configs` ScrapeWork from cfg. // getDNSSDScrapeWork returns `dns_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getDNSSDScrapeWork(prev []ScrapeWork) []ScrapeWork { func (cfg *Config) getDNSSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
var dst []ScrapeWork dst := make([]ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i] sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
@ -324,7 +324,7 @@ func (cfg *Config) getDNSSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
// getEC2SDScrapeWork returns `ec2_sd_configs` ScrapeWork from cfg. // getEC2SDScrapeWork returns `ec2_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getEC2SDScrapeWork(prev []ScrapeWork) []ScrapeWork { func (cfg *Config) getEC2SDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
var dst []ScrapeWork dst := make([]ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i] sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
@ -352,7 +352,7 @@ func (cfg *Config) getEC2SDScrapeWork(prev []ScrapeWork) []ScrapeWork {
// getGCESDScrapeWork returns `gce_sd_configs` ScrapeWork from cfg. // getGCESDScrapeWork returns `gce_sd_configs` ScrapeWork from cfg.
func (cfg *Config) getGCESDScrapeWork(prev []ScrapeWork) []ScrapeWork { func (cfg *Config) getGCESDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsPrevByJob := getSWSByJob(prev) swsPrevByJob := getSWSByJob(prev)
var dst []ScrapeWork dst := make([]ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i] sc := &cfg.ScrapeConfigs[i]
dstLen := len(dst) dstLen := len(dst)
@ -390,7 +390,7 @@ func (cfg *Config) getFileSDScrapeWork(prev []ScrapeWork) []ScrapeWork {
swsMapPrev[filepath] = append(swsMapPrev[filepath], *sw) swsMapPrev[filepath] = append(swsMapPrev[filepath], *sw)
} }
} }
var dst []ScrapeWork dst := make([]ScrapeWork, 0, len(prev))
for i := range cfg.ScrapeConfigs { for i := range cfg.ScrapeConfigs {
sc := &cfg.ScrapeConfigs[i] sc := &cfg.ScrapeConfigs[i]
for j := range sc.FileSDConfigs { for j := range sc.FileSDConfigs {
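
Note: every SD getter above switches from `var dst []ScrapeWork` to `make([]ScrapeWork, 0, len(prev))`, using the previous result as a capacity hint. A reduced sketch of the pattern (the `scrapeWork` type and `collect` helper are illustrative names):

```go
package main

import "fmt"

type scrapeWork struct{ job string }

// collect pre-allocates the destination with the previous length as a
// capacity hint, so appends fill one backing array instead of growing it
// repeatedly; the result is an empty, non-nil slice even when nothing
// is produced.
func collect(prev []scrapeWork) []scrapeWork {
	dst := make([]scrapeWork, 0, len(prev))
	for _, sw := range prev {
		if sw.job != "" {
			dst = append(dst, sw)
		}
	}
	return dst
}

func main() {
	fmt.Println(collect(nil) == nil)                    // false: empty, but allocated
	fmt.Println(len(collect([]scrapeWork{{job: "x"}}))) // 1
}
```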

View file

@ -475,7 +475,7 @@ scrape_configs:
- job_name: foo - job_name: foo
static_configs: static_configs:
- targets: ["xxx"] - targets: ["xxx"]
`, nil) `, []ScrapeWork{})
f(` f(`
scrape_configs: scrape_configs:
- job_name: foo - job_name: foo
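
Note: the expected value changes from `nil` to `[]ScrapeWork{}` because the getters now always return an allocated slice, and `reflect.DeepEqual` treats a nil slice and an empty one as different. A tiny illustration:

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	var nilSlice []int
	emptySlice := []int{}

	// DeepEqual distinguishes a nil slice from an allocated empty one, so a
	// test expecting nil fails once the code under test returns make([]T, 0, n).
	fmt.Println(reflect.DeepEqual(nilSlice, emptySlice)) // false
	fmt.Println(len(nilSlice) == len(emptySlice))        // true
}
```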

View file

@ -10,18 +10,18 @@ import (
// //
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config
type SDConfig struct { type SDConfig struct {
Server string `yaml:"server"` Server string `yaml:"server,omitempty"`
Token *string `yaml:"token"` Token *string `yaml:"token"`
Datacenter string `yaml:"datacenter"` Datacenter string `yaml:"datacenter"`
Scheme string `yaml:"scheme"` Scheme string `yaml:"scheme,omitempty"`
Username string `yaml:"username"` Username string `yaml:"username"`
Password string `yaml:"password"` Password string `yaml:"password"`
TLSConfig *promauth.TLSConfig `yaml:"tls_config"` TLSConfig *promauth.TLSConfig `yaml:"tls_config,omitempty"`
Services []string `yaml:"services"` Services []string `yaml:"services,omitempty"`
Tags []string `yaml:"tags"` Tags []string `yaml:"tags,omitempty"`
NodeMeta map[string]string `yaml:"node_meta"` NodeMeta map[string]string `yaml:"node_meta,omitempty"`
TagSeparator *string `yaml:"tag_separator"` TagSeparator *string `yaml:"tag_separator,omitempty"`
AllowStale bool `yaml:"allow_stale"` AllowStale bool `yaml:"allow_stale,omitempty"`
// RefreshInterval time.Duration `yaml:"refresh_interval"` // RefreshInterval time.Duration `yaml:"refresh_interval"`
// refresh_interval is obtained from `-promscrape.consulSDCheckInterval` command-line option. // refresh_interval is obtained from `-promscrape.consulSDCheckInterval` command-line option.
} }

View file

@ -17,8 +17,8 @@ import (
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config
type SDConfig struct { type SDConfig struct {
Names []string `yaml:"names"` Names []string `yaml:"names"`
Type string `yaml:"type"` Type string `yaml:"type,omitempty"`
Port *int `yaml:"port"` Port *int `yaml:"port,omitempty"`
// RefreshInterval time.Duration `yaml:"refresh_interval"` // RefreshInterval time.Duration `yaml:"refresh_interval"`
// refresh_interval is obtained from `-promscrape.dnsSDCheckInterval` command-line option. // refresh_interval is obtained from `-promscrape.dnsSDCheckInterval` command-line option.
} }

View file

@ -12,13 +12,13 @@ import (
type SDConfig struct { type SDConfig struct {
Host string `yaml:"host"` Host string `yaml:"host"`
// TODO: add support for proxy_url // TODO: add support for proxy_url
TLSConfig *promauth.TLSConfig `yaml:"tls_config"` TLSConfig *promauth.TLSConfig `yaml:"tls_config,omitempty"`
Role string `yaml:"role"` Role string `yaml:"role"`
Port int `yaml:"port"` Port int `yaml:"port,omitempty"`
// refresh_interval is obtained from `-promscrape.dockerswarmSDCheckInterval` command-line option // refresh_interval is obtained from `-promscrape.dockerswarmSDCheckInterval` command-line option
BasicAuth *promauth.BasicAuthConfig `yaml:"basic_auth"` BasicAuth *promauth.BasicAuthConfig `yaml:"basic_auth,omitempty"`
BearerToken string `yaml:"bearer_token"` BearerToken string `yaml:"bearer_token,omitempty"`
BearerTokenFile string `yaml:"bearer_token_file"` BearerTokenFile string `yaml:"bearer_token_file,omitempty"`
} }
// GetLabels returns dockerswarm labels according to sdc. // GetLabels returns dockerswarm labels according to sdc.

View file

@ -8,17 +8,17 @@ import (
// //
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config
type SDConfig struct { type SDConfig struct {
Region string `yaml:"region"` Region string `yaml:"region,omitempty"`
Endpoint string `yaml:"endpoint"` Endpoint string `yaml:"endpoint,omitempty"`
AccessKey string `yaml:"access_key"` AccessKey string `yaml:"access_key,omitempty"`
SecretKey string `yaml:"secret_key"` SecretKey string `yaml:"secret_key,omitempty"`
// TODO add support for Profile, not working atm // TODO add support for Profile, not working atm
Profile string `yaml:"profile"` Profile string `yaml:"profile,omitempty"`
RoleARN string `yaml:"role_arn"` RoleARN string `yaml:"role_arn,omitempty"`
// RefreshInterval time.Duration `yaml:"refresh_interval"` // RefreshInterval time.Duration `yaml:"refresh_interval"`
// refresh_interval is obtained from `-promscrape.ec2SDCheckInterval` command-line option. // refresh_interval is obtained from `-promscrape.ec2SDCheckInterval` command-line option.
Port *int `yaml:"port"` Port *int `yaml:"port,omitempty"`
Filters []Filter `yaml:"filters"` Filters []Filter `yaml:"filters,omitempty"`
} }
// Filter is ec2 filter. // Filter is ec2 filter.

View file

@ -10,11 +10,11 @@ import (
type SDConfig struct { type SDConfig struct {
Project string `yaml:"project"` Project string `yaml:"project"`
Zone ZoneYAML `yaml:"zone"` Zone ZoneYAML `yaml:"zone"`
Filter string `yaml:"filter"` Filter string `yaml:"filter,omitempty"`
// RefreshInterval time.Duration `yaml:"refresh_interval"` // RefreshInterval time.Duration `yaml:"refresh_interval"`
// refresh_interval is obtained from `-promscrape.gceSDCheckInterval` command-line option. // refresh_interval is obtained from `-promscrape.gceSDCheckInterval` command-line option.
Port *int `yaml:"port"` Port *int `yaml:"port,omitempty"`
TagSeparator *string `yaml:"tag_separator"` TagSeparator *string `yaml:"tag_separator,omitempty"`
} }
// ZoneYAML holds info about zones. // ZoneYAML holds info about zones.

View file

@ -10,14 +10,14 @@ import (
// //
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config
type SDConfig struct { type SDConfig struct {
APIServer string `yaml:"api_server"` APIServer string `yaml:"api_server,omitempty"`
Role string `yaml:"role"` Role string `yaml:"role"`
BasicAuth *promauth.BasicAuthConfig `yaml:"basic_auth"` BasicAuth *promauth.BasicAuthConfig `yaml:"basic_auth,omitempty"`
BearerToken string `yaml:"bearer_token"` BearerToken string `yaml:"bearer_token,omitempty"`
BearerTokenFile string `yaml:"bearer_token_file"` BearerTokenFile string `yaml:"bearer_token_file,omitempty"`
TLSConfig *promauth.TLSConfig `yaml:"tls_config"` TLSConfig *promauth.TLSConfig `yaml:"tls_config,omitempty"`
Namespaces Namespaces `yaml:"namespaces"` Namespaces Namespaces `yaml:"namespaces,omitempty"`
Selectors []Selector `yaml:"selectors"` Selectors []Selector `yaml:"selectors,omitempty"`
} }
// Namespaces represents namespaces for SDConfig // Namespaces represents namespaces for SDConfig

View file

@ -10,25 +10,25 @@ import (
// //
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config // See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config
type SDConfig struct { type SDConfig struct {
IdentityEndpoint string `yaml:"identity_endpoint"` IdentityEndpoint string `yaml:"identity_endpoint,omitempty"`
Username string `yaml:"username"` Username string `yaml:"username,omitempty"`
UserID string `yaml:"userid"` UserID string `yaml:"userid,omitempty"`
Password string `yaml:"password"` Password string `yaml:"password,omitempty"`
ProjectName string `yaml:"project_name"` ProjectName string `yaml:"project_name,omitempty"`
ProjectID string `yaml:"project_id"` ProjectID string `yaml:"project_id,omitempty"`
DomainName string `yaml:"domain_name"` DomainName string `yaml:"domain_name,omitempty"`
DomainID string `yaml:"domain_id"` DomainID string `yaml:"domain_id,omitempty"`
ApplicationCredentialName string `yaml:"application_credential_name"` ApplicationCredentialName string `yaml:"application_credential_name,omitempty"`
ApplicationCredentialID string `yaml:"application_credential_id"` ApplicationCredentialID string `yaml:"application_credential_id,omitempty"`
ApplicationCredentialSecret string `yaml:"application_credential_secret"` ApplicationCredentialSecret string `yaml:"application_credential_secret,omitempty"`
Role string `yaml:"role"` Role string `yaml:"role"`
Region string `yaml:"region"` Region string `yaml:"region"`
// RefreshInterval time.Duration `yaml:"refresh_interval"` // RefreshInterval time.Duration `yaml:"refresh_interval"`
// refresh_interval is obtained from `-promscrape.openstackSDCheckInterval` command-line option. // refresh_interval is obtained from `-promscrape.openstackSDCheckInterval` command-line option.
Port int `yaml:"port"` Port int `yaml:"port,omitempty"`
AllTenants bool `yaml:"all_tenants"` AllTenants bool `yaml:"all_tenants,omitempty"`
TLSConfig *promauth.TLSConfig `yaml:"tls_config"` TLSConfig *promauth.TLSConfig `yaml:"tls_config,omitempty"`
Availability string `yaml:"availability"` Availability string `yaml:"availability,omitempty"`
} }
// GetLabels returns openstack labels according to sdc. // GetLabels returns openstack labels according to sdc.

View file

@ -64,7 +64,7 @@ func (tsm *targetStatusMap) Reset() {
func (tsm *targetStatusMap) Register(sw *ScrapeWork) { func (tsm *targetStatusMap) Register(sw *ScrapeWork) {
tsm.mu.Lock() tsm.mu.Lock()
tsm.m[sw.ID] = targetStatus{ tsm.m[sw.ID] = targetStatus{
sw: sw, sw: *sw,
} }
tsm.mu.Unlock() tsm.mu.Unlock()
} }
@ -78,7 +78,7 @@ func (tsm *targetStatusMap) Unregister(sw *ScrapeWork) {
func (tsm *targetStatusMap) Update(sw *ScrapeWork, group string, up bool, scrapeTime, scrapeDuration int64, err error) { func (tsm *targetStatusMap) Update(sw *ScrapeWork, group string, up bool, scrapeTime, scrapeDuration int64, err error) {
tsm.mu.Lock() tsm.mu.Lock()
tsm.m[sw.ID] = targetStatus{ tsm.m[sw.ID] = targetStatus{
sw: sw, sw: *sw,
up: up, up: up,
scrapeGroup: group, scrapeGroup: group,
scrapeTime: scrapeTime, scrapeTime: scrapeTime,
@ -221,7 +221,7 @@ type jobStatus struct {
} }
type targetStatus struct { type targetStatus struct {
sw *ScrapeWork sw ScrapeWork
up bool up bool
scrapeGroup string scrapeGroup string
scrapeTime int64 scrapeTime int64
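
Note: `targetStatus` now stores a `ScrapeWork` copy rather than a pointer, so the status map no longer aliases an object the scraper may mutate or recycle later. A minimal sketch of the behavioural difference (types here are illustrative):

```go
package main

import "fmt"

type scrapeWork struct{ target string }

type targetStatus struct {
	sw scrapeWork // stored by value: the map owns its own copy
	up bool
}

func main() {
	statuses := map[int]targetStatus{}

	sw := &scrapeWork{target: "host-a:9100"}
	statuses[1] = targetStatus{sw: *sw, up: true} // copy, not the pointer

	// The original object may later be mutated or reused (e.g. returned to a
	// pool) without the status map observing the change.
	sw.target = "host-b:9100"
	fmt.Println(statuses[1].sw.target) // host-a:9100
}
```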

View file

@ -11,7 +11,6 @@ import (
"time" "time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common" "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
) )
@ -24,7 +23,6 @@ var (
// ParseStream parses csv from req and calls callback for the parsed rows. // ParseStream parses csv from req and calls callback for the parsed rows.
// //
// The callback can be called concurrently multiple times for streamed data from req. // The callback can be called concurrently multiple times for streamed data from req.
// The callback can be called after ParseStream returns.
// //
// callback shouldn't hold rows after returning. // callback shouldn't hold rows after returning.
func ParseStream(req *http.Request, callback func(rows []Row) error) error { func ParseStream(req *http.Request, callback func(rows []Row) error) error {
@ -47,12 +45,26 @@ func ParseStream(req *http.Request, callback func(rows []Row) error) error {
defer putStreamContext(ctx) defer putStreamContext(ctx)
for ctx.Read() { for ctx.Read() {
uw := getUnmarshalWork() uw := getUnmarshalWork()
uw.callback = callback uw.callback = func(rows []Row) {
if err := callback(rows); err != nil {
ctx.callbackErrLock.Lock()
if ctx.callbackErr == nil {
ctx.callbackErr = fmt.Errorf("error when processing imported data: %w", err)
}
ctx.callbackErrLock.Unlock()
}
ctx.wg.Done()
}
uw.cds = cds uw.cds = cds
uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
ctx.wg.Add(1)
common.ScheduleUnmarshalWork(uw) common.ScheduleUnmarshalWork(uw)
} }
return ctx.Error() ctx.wg.Wait()
if err := ctx.Error(); err != nil {
return err
}
return ctx.callbackErr
} }
func (ctx *streamContext) Read() bool { func (ctx *streamContext) Read() bool {
@ -82,6 +94,10 @@ type streamContext struct {
reqBuf []byte reqBuf []byte
tailBuf []byte tailBuf []byte
err error err error
wg sync.WaitGroup
callbackErrLock sync.Mutex
callbackErr error
} }
func (ctx *streamContext) Error() error { func (ctx *streamContext) Error() error {
@ -96,6 +112,7 @@ func (ctx *streamContext) reset() {
ctx.reqBuf = ctx.reqBuf[:0] ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0] ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil ctx.err = nil
ctx.callbackErr = nil
} }
func getStreamContext(r io.Reader) *streamContext { func getStreamContext(r io.Reader) *streamContext {
@ -129,7 +146,7 @@ var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
type unmarshalWork struct { type unmarshalWork struct {
rows Rows rows Rows
callback func(rows []Row) error callback func(rows []Row)
cds []ColumnDescriptor cds []ColumnDescriptor
reqBuf []byte reqBuf []byte
} }
@ -164,11 +181,7 @@ func (uw *unmarshalWork) Unmarshal() {
} }
} }
if err := uw.callback(rows); err != nil { uw.callback(rows)
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw) putUnmarshalWork(uw)
} }
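
Note: the same wrapper appears in every stream parser touched by this commit — the callback error is captured under a mutex (keeping only the first one), and a `sync.WaitGroup` makes the parse function wait for all scheduled callbacks before returning, so the error reaches the caller and no callback fires after the function returns. A condensed, self-contained sketch of the pattern (the `fanOut` helper and chunk type are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut runs the callback on every chunk from worker goroutines, records
// only the first callback error, and waits for all callbacks to finish
// before returning.
func fanOut(chunks [][]int, callback func(chunk []int) error) error {
	var (
		wg      sync.WaitGroup
		errLock sync.Mutex
		cbErr   error
	)
	for _, chunk := range chunks {
		chunk := chunk
		wg.Add(1)
		go func() {
			defer wg.Done()
			if err := callback(chunk); err != nil {
				errLock.Lock()
				if cbErr == nil {
					cbErr = fmt.Errorf("error when processing imported data: %w", err)
				}
				errLock.Unlock()
			}
		}()
	}
	wg.Wait()
	return cbErr
}

func main() {
	err := fanOut([][]int{{1}, {2}}, func(chunk []int) error {
		if chunk[0] == 2 {
			return fmt.Errorf("bad chunk")
		}
		return nil
	})
	fmt.Println(err)
}
```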

View file

@ -55,6 +55,33 @@ func (r *Row) reset() {
r.Timestamp = 0 r.Timestamp = 0
} }
// UnmarshalMetricAndTags unmarshals metric and optional tags from s.
func (r *Row) UnmarshalMetricAndTags(s string, tagsPool []Tag) ([]Tag, error) {
if strings.Contains(s, " ") {
return tagsPool, fmt.Errorf("unexpected whitespace found in %q", s)
}
n := strings.IndexByte(s, ';')
if n < 0 {
// No tags
r.Metric = s
} else {
// Tags found
r.Metric = s[:n]
tagsStart := len(tagsPool)
var err error
tagsPool, err = unmarshalTags(tagsPool, s[n+1:])
if err != nil {
return tagsPool, fmt.Errorf("cannot unmarshal tags: %w", err)
}
tags := tagsPool[tagsStart:]
r.Tags = tags[:len(tags):len(tags)]
}
if len(r.Metric) == 0 {
return tagsPool, fmt.Errorf("metric cannot be empty")
}
return tagsPool, nil
}
func (r *Row) unmarshal(s string, tagsPool []Tag) ([]Tag, error) { func (r *Row) unmarshal(s string, tagsPool []Tag) ([]Tag, error) {
r.reset() r.reset()
n := strings.IndexByte(s, ' ') n := strings.IndexByte(s, ' ')
@ -64,24 +91,9 @@ func (r *Row) unmarshal(s string, tagsPool []Tag) ([]Tag, error) {
metricAndTags := s[:n] metricAndTags := s[:n]
tail := s[n+1:] tail := s[n+1:]
n = strings.IndexByte(metricAndTags, ';') tagsPool, err := r.UnmarshalMetricAndTags(metricAndTags, tagsPool)
if n < 0 { if err != nil {
// No tags return tagsPool, err
r.Metric = metricAndTags
} else {
// Tags found
r.Metric = metricAndTags[:n]
tagsStart := len(tagsPool)
var err error
tagsPool, err = unmarshalTags(tagsPool, metricAndTags[n+1:])
if err != nil {
return tagsPool, fmt.Errorf("cannot umarshal tags: %w", err)
}
tags := tagsPool[tagsStart:]
r.Tags = tags[:len(tags):len(tags)]
}
if len(r.Metric) == 0 {
return tagsPool, fmt.Errorf("metric cannot be empty")
} }
n = strings.IndexByte(tail, ' ') n = strings.IndexByte(tail, ' ')

View file

@ -7,6 +7,57 @@ import (
"testing" "testing"
) )
func TestUnmarshalMetricAndTagsFailure(t *testing.T) {
f := func(s string) {
t.Helper()
var r Row
_, err := r.UnmarshalMetricAndTags(s, nil)
if err == nil {
t.Fatalf("expecting non-nil error for UnmarshalMetricAndTags(%q)", s)
}
}
f("")
f(";foo=bar")
f(" ")
f("foo;bar")
f("foo ;bar=baz")
f("f oo;bar=baz")
f("foo;bar=baz ")
f("foo;bar= baz")
f("foo;bar=b az")
f("foo;b ar=baz")
}
func TestUnmarshalMetricAndTagsSuccess(t *testing.T) {
f := func(s string, rExpected *Row) {
t.Helper()
var r Row
_, err := r.UnmarshalMetricAndTags(s, nil)
if err != nil {
t.Fatalf("unexpected error in UnmarshalMetricAndTags(%q): %s", s, err)
}
if !reflect.DeepEqual(&r, rExpected) {
t.Fatalf("unexpected row;\ngot\n%+v\nwant\n%+v", &r, rExpected)
}
}
f("foo", &Row{
Metric: "foo",
})
f("foo;bar=123;baz=aabb", &Row{
Metric: "foo",
Tags: []Tag{
{
Key: "bar",
Value: "123",
},
{
Key: "baz",
Value: "aabb",
},
},
})
}
func TestRowsUnmarshalFailure(t *testing.T) { func TestRowsUnmarshalFailure(t *testing.T) {
f := func(s string) { f := func(s string) {
t.Helper() t.Helper()
@ -200,7 +251,7 @@ func Test_streamContext_Read(t *testing.T) {
} }
uw := getUnmarshalWork() uw := getUnmarshalWork()
callbackCalls := 0 callbackCalls := 0
uw.callback = func(rows []Row) error { uw.callback = func(rows []Row) {
callbackCalls++ callbackCalls++
if len(rows) != len(rowsExpected.Rows) { if len(rows) != len(rowsExpected.Rows) {
t.Fatalf("different len of expected rows;\ngot\n%+v;\nwant\n%+v", rows, rowsExpected.Rows) t.Fatalf("different len of expected rows;\ngot\n%+v;\nwant\n%+v", rows, rowsExpected.Rows)
@ -208,7 +259,6 @@ func Test_streamContext_Read(t *testing.T) {
if !reflect.DeepEqual(rows, rowsExpected.Rows) { if !reflect.DeepEqual(rows, rowsExpected.Rows) {
t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows, rowsExpected.Rows) t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows, rowsExpected.Rows)
} }
return nil
} }
uw.reqBuf = append(uw.reqBuf[:0], ctx.reqBuf...) uw.reqBuf = append(uw.reqBuf[:0], ctx.reqBuf...)
uw.Unmarshal() uw.Unmarshal()

View file

@ -11,7 +11,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime" "github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common" "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
) )
@ -24,7 +23,6 @@ var (
// ParseStream parses Graphite lines from r and calls callback for the parsed rows. // ParseStream parses Graphite lines from r and calls callback for the parsed rows.
// //
// The callback can be called concurrently multiple times for streamed data from r. // The callback can be called concurrently multiple times for streamed data from r.
// The callback can be called after ParseStream returns.
// //
// callback shouldn't hold rows after returning. // callback shouldn't hold rows after returning.
func ParseStream(r io.Reader, callback func(rows []Row) error) error { func ParseStream(r io.Reader, callback func(rows []Row) error) error {
@ -33,11 +31,25 @@ func ParseStream(r io.Reader, callback func(rows []Row) error) error {
for ctx.Read() { for ctx.Read() {
uw := getUnmarshalWork() uw := getUnmarshalWork()
uw.callback = callback uw.callback = func(rows []Row) {
if err := callback(rows); err != nil {
ctx.callbackErrLock.Lock()
if ctx.callbackErr == nil {
ctx.callbackErr = fmt.Errorf("error when processing imported data: %w", err)
}
ctx.callbackErrLock.Unlock()
}
ctx.wg.Done()
}
uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
ctx.wg.Add(1)
common.ScheduleUnmarshalWork(uw) common.ScheduleUnmarshalWork(uw)
} }
return ctx.Error() ctx.wg.Wait()
if err := ctx.Error(); err != nil {
return err
}
return ctx.callbackErr
} }
func (ctx *streamContext) Read() bool { func (ctx *streamContext) Read() bool {
@ -61,6 +73,10 @@ type streamContext struct {
reqBuf []byte reqBuf []byte
tailBuf []byte tailBuf []byte
err error err error
wg sync.WaitGroup
callbackErrLock sync.Mutex
callbackErr error
} }
func (ctx *streamContext) Error() error { func (ctx *streamContext) Error() error {
@ -75,6 +91,7 @@ func (ctx *streamContext) reset() {
ctx.reqBuf = ctx.reqBuf[:0] ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0] ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil ctx.err = nil
ctx.callbackErr = nil
} }
var ( var (
@ -114,7 +131,7 @@ var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
type unmarshalWork struct { type unmarshalWork struct {
rows Rows rows Rows
callback func(rows []Row) error callback func(rows []Row)
reqBuf []byte reqBuf []byte
} }
@ -152,11 +169,7 @@ func (uw *unmarshalWork) Unmarshal() {
} }
} }
if err := uw.callback(rows); err != nil { uw.callback(rows)
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw) putUnmarshalWork(uw)
} }

View file

@ -11,7 +11,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common" "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
) )
@ -25,7 +24,6 @@ var (
// ParseStream parses r with the given args and calls callback for the parsed rows. // ParseStream parses r with the given args and calls callback for the parsed rows.
// //
// The callback can be called concurrently multiple times for streamed data from r. // The callback can be called concurrently multiple times for streamed data from r.
// The callback can be called after ParseStream returns.
// //
// callback shouldn't hold rows after returning. // callback shouldn't hold rows after returning.
func ParseStream(r io.Reader, isGzipped bool, precision, db string, callback func(db string, rows []Row) error) error { func ParseStream(r io.Reader, isGzipped bool, precision, db string, callback func(db string, rows []Row) error) error {
@ -59,13 +57,27 @@ func ParseStream(r io.Reader, isGzipped bool, precision, db string, callback fun
defer putStreamContext(ctx) defer putStreamContext(ctx)
for ctx.Read() { for ctx.Read() {
uw := getUnmarshalWork() uw := getUnmarshalWork()
uw.callback = callback uw.callback = func(db string, rows []Row) {
if err := callback(db, rows); err != nil {
ctx.callbackErrLock.Lock()
if ctx.callbackErr == nil {
ctx.callbackErr = fmt.Errorf("error when processing imported data: %w", err)
}
ctx.callbackErrLock.Unlock()
}
ctx.wg.Done()
}
uw.db = db uw.db = db
uw.tsMultiplier = tsMultiplier uw.tsMultiplier = tsMultiplier
uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
ctx.wg.Add(1)
common.ScheduleUnmarshalWork(uw) common.ScheduleUnmarshalWork(uw)
} }
return ctx.Error() ctx.wg.Wait()
if err := ctx.Error(); err != nil {
return err
}
return ctx.callbackErr
} }
func (ctx *streamContext) Read() bool { func (ctx *streamContext) Read() bool {
@ -95,6 +107,10 @@ type streamContext struct {
reqBuf []byte reqBuf []byte
tailBuf []byte tailBuf []byte
err error err error
wg sync.WaitGroup
callbackErrLock sync.Mutex
callbackErr error
} }
func (ctx *streamContext) Error() error { func (ctx *streamContext) Error() error {
@ -109,6 +125,7 @@ func (ctx *streamContext) reset() {
ctx.reqBuf = ctx.reqBuf[:0] ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0] ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil ctx.err = nil
ctx.callbackErr = nil
} }
func getStreamContext(r io.Reader) *streamContext { func getStreamContext(r io.Reader) *streamContext {
@ -142,7 +159,7 @@ var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
type unmarshalWork struct { type unmarshalWork struct {
rows Rows rows Rows
callback func(db string, rows []Row) error callback func(db string, rows []Row)
db string db string
tsMultiplier int64 tsMultiplier int64
reqBuf []byte reqBuf []byte
@ -195,11 +212,7 @@ func (uw *unmarshalWork) Unmarshal() {
} }
} }
if err := uw.callback(uw.db, rows); err != nil { uw.callback(uw.db, rows)
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw) putUnmarshalWork(uw)
} }

View file

@ -18,10 +18,8 @@ import (
// ParseStream parses /api/v1/import/native lines from req and calls callback for parsed blocks. // ParseStream parses /api/v1/import/native lines from req and calls callback for parsed blocks.
// //
// The callback can be called concurrently multiple times for streamed data from req. // The callback can be called concurrently multiple times for streamed data from req.
// The callback can be called after ParseStream returns.
// //
// callback shouldn't hold block after returning. // callback shouldn't hold block after returning.
// callback can be called in parallel from multiple concurrent goroutines.
func ParseStream(req *http.Request, callback func(block *Block) error) error { func ParseStream(req *http.Request, callback func(block *Block) error) error {
r := req.Body r := req.Body
if req.Header.Get("Content-Encoding") == "gzip" { if req.Header.Get("Content-Encoding") == "gzip" {
@ -47,30 +45,49 @@ func ParseStream(req *http.Request, callback func(block *Block) error) error {
// Read native blocks and feed workers with work. // Read native blocks and feed workers with work.
sizeBuf := make([]byte, 4) sizeBuf := make([]byte, 4)
var wg sync.WaitGroup
var (
callbackErrLock sync.Mutex
callbackErr error
)
for { for {
uw := getUnmarshalWork() uw := getUnmarshalWork()
uw.tr = tr uw.tr = tr
uw.callback = callback uw.callback = func(block *Block) {
if err := callback(block); err != nil {
processErrors.Inc()
callbackErrLock.Lock()
if callbackErr == nil {
callbackErr = fmt.Errorf("error when processing native block: %w", err)
}
callbackErrLock.Unlock()
}
wg.Done()
}
// Read uw.metricNameBuf // Read uw.metricNameBuf
if _, err := io.ReadFull(br, sizeBuf); err != nil { if _, err := io.ReadFull(br, sizeBuf); err != nil {
if err == io.EOF { if err == io.EOF {
// End of stream // End of stream
putUnmarshalWork(uw) putUnmarshalWork(uw)
return nil wg.Wait()
return callbackErr
} }
readErrors.Inc() readErrors.Inc()
wg.Wait()
return fmt.Errorf("cannot read metricName size: %w", err) return fmt.Errorf("cannot read metricName size: %w", err)
} }
readCalls.Inc() readCalls.Inc()
bufSize := encoding.UnmarshalUint32(sizeBuf) bufSize := encoding.UnmarshalUint32(sizeBuf)
if bufSize > 1024*1024 { if bufSize > 1024*1024 {
parseErrors.Inc() parseErrors.Inc()
wg.Wait()
return fmt.Errorf("too big metricName size; got %d; shouldn't exceed %d", bufSize, 1024*1024) return fmt.Errorf("too big metricName size; got %d; shouldn't exceed %d", bufSize, 1024*1024)
} }
uw.metricNameBuf = bytesutil.Resize(uw.metricNameBuf, int(bufSize)) uw.metricNameBuf = bytesutil.Resize(uw.metricNameBuf, int(bufSize))
if _, err := io.ReadFull(br, uw.metricNameBuf); err != nil { if _, err := io.ReadFull(br, uw.metricNameBuf); err != nil {
readErrors.Inc() readErrors.Inc()
wg.Wait()
return fmt.Errorf("cannot read metricName with size %d bytes: %w", bufSize, err) return fmt.Errorf("cannot read metricName with size %d bytes: %w", bufSize, err)
} }
readCalls.Inc() readCalls.Inc()
@ -78,22 +95,26 @@ func ParseStream(req *http.Request, callback func(block *Block) error) error {
// Read uw.blockBuf // Read uw.blockBuf
if _, err := io.ReadFull(br, sizeBuf); err != nil { if _, err := io.ReadFull(br, sizeBuf); err != nil {
readErrors.Inc() readErrors.Inc()
wg.Wait()
return fmt.Errorf("cannot read native block size: %w", err) return fmt.Errorf("cannot read native block size: %w", err)
} }
readCalls.Inc() readCalls.Inc()
bufSize = encoding.UnmarshalUint32(sizeBuf) bufSize = encoding.UnmarshalUint32(sizeBuf)
if bufSize > 1024*1024 { if bufSize > 1024*1024 {
parseErrors.Inc() parseErrors.Inc()
wg.Wait()
return fmt.Errorf("too big native block size; got %d; shouldn't exceed %d", bufSize, 1024*1024) return fmt.Errorf("too big native block size; got %d; shouldn't exceed %d", bufSize, 1024*1024)
} }
uw.blockBuf = bytesutil.Resize(uw.blockBuf, int(bufSize)) uw.blockBuf = bytesutil.Resize(uw.blockBuf, int(bufSize))
if _, err := io.ReadFull(br, uw.blockBuf); err != nil { if _, err := io.ReadFull(br, uw.blockBuf); err != nil {
readErrors.Inc() readErrors.Inc()
wg.Wait()
return fmt.Errorf("cannot read native block with size %d bytes: %w", bufSize, err) return fmt.Errorf("cannot read native block with size %d bytes: %w", bufSize, err)
} }
readCalls.Inc() readCalls.Inc()
blocksRead.Inc() blocksRead.Inc()
wg.Add(1)
common.ScheduleUnmarshalWork(uw) common.ScheduleUnmarshalWork(uw)
} }
} }
@ -123,7 +144,7 @@ var (
type unmarshalWork struct { type unmarshalWork struct {
tr storage.TimeRange tr storage.TimeRange
callback func(block *Block) error callback func(block *Block)
metricNameBuf []byte metricNameBuf []byte
blockBuf []byte blockBuf []byte
block Block block Block
@ -144,12 +165,7 @@ func (uw *unmarshalWork) Unmarshal() {
putUnmarshalWork(uw) putUnmarshalWork(uw)
return return
} }
if err := uw.callback(&uw.block); err != nil { uw.callback(&uw.block)
processErrors.Inc()
logger.Errorf("error when processing native block: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw) putUnmarshalWork(uw)
} }
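
Note: in the native parser the read loop now calls `wg.Wait()` before every error return, so a read failure cannot race with callbacks that still hold buffers the caller is about to reuse. A toy sketch of that discipline (the read error is simulated by an empty block; names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// readLoop schedules asynchronous work per block but, on any read error,
// waits for the already-scheduled callbacks before returning.
func readLoop(blocks []string, process func(block string)) error {
	var wg sync.WaitGroup
	for _, b := range blocks {
		if b == "" { // simulated read error
			wg.Wait()
			return errors.New("cannot read native block")
		}
		wg.Add(1)
		go func(block string) {
			defer wg.Done()
			process(block)
		}(b)
	}
	wg.Wait()
	return nil
}

func main() {
	err := readLoop([]string{"a", "", "b"}, func(block string) {
		fmt.Println("processed", block)
	})
	fmt.Println(err)
}
```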

View file

@ -11,7 +11,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime" "github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common" "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
) )
@ -24,7 +23,6 @@ var (
// ParseStream parses OpenTSDB lines from r and calls callback for the parsed rows. // ParseStream parses OpenTSDB lines from r and calls callback for the parsed rows.
// //
// The callback can be called concurrently multiple times for streamed data from r. // The callback can be called concurrently multiple times for streamed data from r.
// The callback can be called after ParseStream returns.
// //
// callback shouldn't hold rows after returning. // callback shouldn't hold rows after returning.
func ParseStream(r io.Reader, callback func(rows []Row) error) error { func ParseStream(r io.Reader, callback func(rows []Row) error) error {
@ -32,11 +30,25 @@ func ParseStream(r io.Reader, callback func(rows []Row) error) error {
defer putStreamContext(ctx) defer putStreamContext(ctx)
for ctx.Read() { for ctx.Read() {
uw := getUnmarshalWork() uw := getUnmarshalWork()
uw.callback = callback uw.callback = func(rows []Row) {
if err := callback(rows); err != nil {
ctx.callbackErrLock.Lock()
if ctx.callbackErr == nil {
ctx.callbackErr = fmt.Errorf("error when processing imported data: %w", err)
}
ctx.callbackErrLock.Unlock()
}
ctx.wg.Done()
}
uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
ctx.wg.Add(1)
common.ScheduleUnmarshalWork(uw) common.ScheduleUnmarshalWork(uw)
} }
return ctx.Error() ctx.wg.Wait()
if err := ctx.Error(); err != nil {
return err
}
return ctx.callbackErr
} }
func (ctx *streamContext) Read() bool { func (ctx *streamContext) Read() bool {
@ -60,6 +72,10 @@ type streamContext struct {
reqBuf []byte reqBuf []byte
tailBuf []byte tailBuf []byte
err error err error
wg sync.WaitGroup
callbackErrLock sync.Mutex
callbackErr error
} }
func (ctx *streamContext) Error() error { func (ctx *streamContext) Error() error {
@ -74,6 +90,7 @@ func (ctx *streamContext) reset() {
ctx.reqBuf = ctx.reqBuf[:0] ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0] ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil ctx.err = nil
ctx.callbackErr = nil
} }
var ( var (
@ -113,7 +130,7 @@ var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
type unmarshalWork struct { type unmarshalWork struct {
rows Rows rows Rows
callback func(rows []Row) error callback func(rows []Row)
reqBuf []byte reqBuf []byte
} }
@ -151,11 +168,7 @@ func (uw *unmarshalWork) Unmarshal() {
} }
} }
if err := uw.callback(rows); err != nil { uw.callback(rows)
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw) putUnmarshalWork(uw)
} }

View file

@ -13,7 +13,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime" "github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common" "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
) )
@ -27,7 +26,6 @@ var (
// ParseStream parses OpenTSDB http lines from req and calls callback for the parsed rows. // ParseStream parses OpenTSDB http lines from req and calls callback for the parsed rows.
// //
// The callback can be called concurrently multiple times for streamed data from req. // The callback can be called concurrently multiple times for streamed data from req.
// The callback can be called after ParseStream returns.
// //
// callback shouldn't hold rows after returning. // callback shouldn't hold rows after returning.
func ParseStream(req *http.Request, callback func(rows []Row) error) error { func ParseStream(req *http.Request, callback func(rows []Row) error) error {
@ -58,10 +56,50 @@ func ParseStream(req *http.Request, callback func(rows []Row) error) error {
return fmt.Errorf("too big HTTP OpenTSDB request; mustn't exceed `-opentsdbhttp.maxInsertRequestSize=%d` bytes", maxInsertRequestSize.N) return fmt.Errorf("too big HTTP OpenTSDB request; mustn't exceed `-opentsdbhttp.maxInsertRequestSize=%d` bytes", maxInsertRequestSize.N)
} }
uw := getUnmarshalWork() // Process the request synchronously, since there is no sense in processing a single request asynchronously.
uw.callback = callback // Sync code is easier to read and understand.
uw.reqBuf, ctx.reqBuf.B = ctx.reqBuf.B, uw.reqBuf p := getJSONParser()
common.ScheduleUnmarshalWork(uw) defer putJSONParser(p)
v, err := p.ParseBytes(ctx.reqBuf.B)
if err != nil {
unmarshalErrors.Inc()
return fmt.Errorf("cannot parse HTTP OpenTSDB json: %w", err)
}
rs := getRows()
defer putRows(rs)
rs.Unmarshal(v)
rows := rs.Rows
rowsRead.Add(len(rows))
// Fill in missing timestamps
currentTimestamp := int64(fasttime.UnixTimestamp())
for i := range rows {
r := &rows[i]
if r.Timestamp == 0 {
r.Timestamp = currentTimestamp
}
}
// Convert timestamps in seconds to milliseconds if needed.
// See http://opentsdb.net/docs/javadoc/net/opentsdb/core/Const.html#SECOND_MASK
for i := range rows {
r := &rows[i]
if r.Timestamp&secondMask == 0 {
r.Timestamp *= 1e3
}
}
// Trim timestamps if required.
if tsTrim := trimTimestamp.Milliseconds(); tsTrim > 1 {
for i := range rows {
row := &rows[i]
row.Timestamp -= row.Timestamp % tsTrim
}
}
if err := callback(rows); err != nil {
return fmt.Errorf("error when processing imported data: %w", err)
}
return nil return nil
} }
@ -113,77 +151,17 @@ func putStreamContext(ctx *streamContext) {
var streamContextPool sync.Pool var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1)) var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
type unmarshalWork struct { func getRows() *Rows {
rows Rows v := rowsPool.Get()
callback func(rows []Row) error
reqBuf []byte
}
func (uw *unmarshalWork) reset() {
uw.rows.Reset()
uw.callback = nil
uw.reqBuf = uw.reqBuf[:0]
}
// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
p := getJSONParser()
defer putJSONParser(p)
v, err := p.ParseBytes(uw.reqBuf)
if err != nil {
unmarshalErrors.Inc()
logger.Errorf("cannot parse HTTP OpenTSDB json: %s", err)
return
}
uw.rows.Unmarshal(v)
rows := uw.rows.Rows
rowsRead.Add(len(rows))
// Fill in missing timestamps
currentTimestamp := int64(fasttime.UnixTimestamp())
for i := range rows {
r := &rows[i]
if r.Timestamp == 0 {
r.Timestamp = currentTimestamp
}
}
// Convert timestamps in seconds to milliseconds if needed.
// See http://opentsdb.net/docs/javadoc/net/opentsdb/core/Const.html#SECOND_MASK
for i := range rows {
r := &rows[i]
if r.Timestamp&secondMask == 0 {
r.Timestamp *= 1e3
}
}
// Trim timestamps if required.
if tsTrim := trimTimestamp.Milliseconds(); tsTrim > 1 {
for i := range rows {
row := &rows[i]
row.Timestamp -= row.Timestamp % tsTrim
}
}
if err := uw.callback(rows); err != nil {
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw)
}
func getUnmarshalWork() *unmarshalWork {
v := unmarshalWorkPool.Get()
if v == nil { if v == nil {
return &unmarshalWork{} return &Rows{}
} }
return v.(*unmarshalWork) return v.(*Rows)
} }
func putUnmarshalWork(uw *unmarshalWork) { func putRows(rs *Rows) {
uw.reset() rs.Reset()
unmarshalWorkPool.Put(uw) rowsPool.Put(rs)
} }
var unmarshalWorkPool sync.Pool var rowsPool sync.Pool
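
Note: with the OpenTSDB HTTP path made synchronous, the asynchronous `unmarshalWork` pool is replaced by plain object pools used on the request goroutine. A standalone sketch of such a pool (the `rows` type here stands in for the parser's `Rows`):

```go
package main

import (
	"fmt"
	"sync"
)

type rows struct{ items []string }

func (r *rows) reset() { r.items = r.items[:0] }

// rowsPool reuses rows objects between requests, keeping the synchronous
// parsing path allocation-free in the steady state.
var rowsPool sync.Pool

func getRows() *rows {
	if v := rowsPool.Get(); v != nil {
		return v.(*rows)
	}
	return &rows{}
}

func putRows(r *rows) {
	r.reset()
	rowsPool.Put(r)
}

func main() {
	r := getRows()
	r.items = append(r.items, "metric 1 1700000000")
	fmt.Println(len(r.items))
	putRows(r) // returned to the pool for the next request
}
```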

View file

@ -9,7 +9,6 @@ import (
"time" "time"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common" "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
) )
@ -17,7 +16,6 @@ import (
// ParseStream parses lines with Prometheus exposition format from r and calls callback for the parsed rows. // ParseStream parses lines with Prometheus exposition format from r and calls callback for the parsed rows.
// //
// The callback can be called concurrently multiple times for streamed data from r. // The callback can be called concurrently multiple times for streamed data from r.
// It is guaranteed that the callback isn't called after ParseStream returns.
// //
// callback shouldn't hold rows after returning. // callback shouldn't hold rows after returning.
func ParseStream(r io.Reader, defaultTimestamp int64, isGzipped bool, callback func(rows []Row) error) error { func ParseStream(r io.Reader, defaultTimestamp int64, isGzipped bool, callback func(rows []Row) error) error {
@ -33,18 +31,26 @@ func ParseStream(r io.Reader, defaultTimestamp int64, isGzipped bool, callback f
defer putStreamContext(ctx) defer putStreamContext(ctx)
for ctx.Read() { for ctx.Read() {
uw := getUnmarshalWork() uw := getUnmarshalWork()
uw.callback = func(rows []Row) error { uw.callback = func(rows []Row) {
err := callback(rows) if err := callback(rows); err != nil {
ctx.callbackErrLock.Lock()
if ctx.callbackErr == nil {
ctx.callbackErr = fmt.Errorf("error when processing imported data: %w", err)
}
ctx.callbackErrLock.Unlock()
}
ctx.wg.Done() ctx.wg.Done()
return err
} }
uw.defaultTimestamp = defaultTimestamp uw.defaultTimestamp = defaultTimestamp
uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
ctx.wg.Add(1) ctx.wg.Add(1)
common.ScheduleUnmarshalWork(uw) common.ScheduleUnmarshalWork(uw)
} }
ctx.wg.Wait() // wait for all the outstanding callback calls before returning ctx.wg.Wait()
return ctx.Error() if err := ctx.Error(); err != nil {
return err
}
return ctx.callbackErr
} }
func (ctx *streamContext) Read() bool { func (ctx *streamContext) Read() bool {
@ -69,7 +75,9 @@ type streamContext struct {
tailBuf []byte tailBuf []byte
err error err error
wg sync.WaitGroup wg sync.WaitGroup
callbackErrLock sync.Mutex
callbackErr error
} }
func (ctx *streamContext) Error() error { func (ctx *streamContext) Error() error {
@ -84,6 +92,7 @@ func (ctx *streamContext) reset() {
ctx.reqBuf = ctx.reqBuf[:0] ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0] ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil ctx.err = nil
ctx.callbackErr = nil
} }
var ( var (
@ -123,7 +132,7 @@ var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
type unmarshalWork struct { type unmarshalWork struct {
rows Rows rows Rows
callback func(rows []Row) error callback func(rows []Row)
defaultTimestamp int64 defaultTimestamp int64
reqBuf []byte reqBuf []byte
} }
@ -153,11 +162,7 @@ func (uw *unmarshalWork) Unmarshal() {
} }
} }
if err := uw.callback(rows); err != nil { uw.callback(rows)
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw) putUnmarshalWork(uw)
} }

View file

@ -9,10 +9,9 @@ import (
"sync" "sync"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb" "github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy" "github.com/golang/snappy"
) )
@ -21,9 +20,6 @@ var maxInsertRequestSize = flagutil.NewBytes("maxInsertRequestSize", 32*1024*102
// ParseStream parses Prometheus remote_write message req and calls callback for the parsed timeseries. // ParseStream parses Prometheus remote_write message req and calls callback for the parsed timeseries.
// //
// The callback can be called concurrently multiple times for streamed data from req.
// The callback can be called after ParseStream returns.
//
// callback shouldn't hold tss after returning. // callback shouldn't hold tss after returning.
func ParseStream(req *http.Request, callback func(tss []prompb.TimeSeries) error) error { func ParseStream(req *http.Request, callback func(tss []prompb.TimeSeries) error) error {
ctx := getPushCtx(req.Body) ctx := getPushCtx(req.Body)
@ -31,13 +27,42 @@ func ParseStream(req *http.Request, callback func(tss []prompb.TimeSeries) error
if err := ctx.Read(); err != nil { if err := ctx.Read(); err != nil {
return err return err
} }
uw := getUnmarshalWork()
uw.callback = callback // Synchronously process the request in order to properly return errors to ParseStream caller,
uw.reqBuf, ctx.reqBuf.B = ctx.reqBuf.B, uw.reqBuf // so it could properly return HTTP 503 status code in response.
common.ScheduleUnmarshalWork(uw) // See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/896
bb := bodyBufferPool.Get()
defer bodyBufferPool.Put(bb)
var err error
bb.B, err = snappy.Decode(bb.B[:cap(bb.B)], ctx.reqBuf.B)
if err != nil {
return fmt.Errorf("cannot decompress request with length %d: %w", len(ctx.reqBuf.B), err)
}
if len(bb.B) > maxInsertRequestSize.N {
return fmt.Errorf("too big unpacked request; mustn't exceed `-maxInsertRequestSize=%d` bytes; got %d bytes", maxInsertRequestSize.N, len(bb.B))
}
wr := getWriteRequest()
defer putWriteRequest(wr)
if err := wr.Unmarshal(bb.B); err != nil {
unmarshalErrors.Inc()
return fmt.Errorf("cannot unmarshal prompb.WriteRequest with size %d bytes: %w", len(bb.B), err)
}
rows := 0
tss := wr.Timeseries
for i := range tss {
rows += len(tss[i].Samples)
}
rowsRead.Add(rows)
if err := callback(tss); err != nil {
return fmt.Errorf("error when processing imported data: %w", err)
}
return nil return nil
} }
var bodyBufferPool bytesutil.ByteBufferPool
type pushCtx struct { type pushCtx struct {
br *bufio.Reader br *bufio.Reader
reqBuf bytesutil.ByteBuffer reqBuf bytesutil.ByteBuffer
@ -51,10 +76,11 @@ func (ctx *pushCtx) reset() {
func (ctx *pushCtx) Read() error { func (ctx *pushCtx) Read() error {
readCalls.Inc() readCalls.Inc()
lr := io.LimitReader(ctx.br, int64(maxInsertRequestSize.N)+1) lr := io.LimitReader(ctx.br, int64(maxInsertRequestSize.N)+1)
startTime := fasttime.UnixTimestamp()
reqLen, err := ctx.reqBuf.ReadFrom(lr) reqLen, err := ctx.reqBuf.ReadFrom(lr)
if err != nil { if err != nil {
readErrors.Inc() readErrors.Inc()
return fmt.Errorf("cannot read compressed request: %w", err) return fmt.Errorf("cannot read compressed request in %d seconds: %w", fasttime.UnixTimestamp()-startTime, err)
} }
if reqLen > int64(maxInsertRequestSize.N) { if reqLen > int64(maxInsertRequestSize.N) {
readErrors.Inc() readErrors.Inc()
@ -99,66 +125,17 @@ func putPushCtx(ctx *pushCtx) {
var pushCtxPool sync.Pool var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1)) var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1))
type unmarshalWork struct { func getWriteRequest() *prompb.WriteRequest {
wr prompb.WriteRequest v := writeRequestPool.Get()
callback func(tss []prompb.TimeSeries) error
reqBuf []byte
}
func (uw *unmarshalWork) reset() {
uw.wr.Reset()
uw.callback = nil
uw.reqBuf = uw.reqBuf[:0]
}
// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
bb := bodyBufferPool.Get()
defer bodyBufferPool.Put(bb)
var err error
bb.B, err = snappy.Decode(bb.B[:cap(bb.B)], uw.reqBuf)
if err != nil {
logger.Errorf("cannot decompress request with length %d: %s", len(uw.reqBuf), err)
return
}
if len(bb.B) > maxInsertRequestSize.N {
logger.Errorf("too big unpacked request; mustn't exceed `-maxInsertRequestSize=%d` bytes; got %d bytes", maxInsertRequestSize.N, len(bb.B))
return
}
if err := uw.wr.Unmarshal(bb.B); err != nil {
unmarshalErrors.Inc()
logger.Errorf("cannot unmarshal prompb.WriteRequest with size %d bytes: %s", len(bb.B), err)
return
}
rows := 0
tss := uw.wr.Timeseries
for i := range tss {
rows += len(tss[i].Samples)
}
rowsRead.Add(rows)
if err := uw.callback(tss); err != nil {
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw)
}
var bodyBufferPool bytesutil.ByteBufferPool
func getUnmarshalWork() *unmarshalWork {
v := unmarshalWorkPool.Get()
if v == nil { if v == nil {
return &unmarshalWork{} return &prompb.WriteRequest{}
} }
return v.(*unmarshalWork) return v.(*prompb.WriteRequest)
} }
func putUnmarshalWork(uw *unmarshalWork) { func putWriteRequest(wr *prompb.WriteRequest) {
uw.reset() wr.Reset()
unmarshalWorkPool.Put(uw) writeRequestPool.Put(wr)
} }
var unmarshalWorkPool sync.Pool var writeRequestPool sync.Pool
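
Note: the remote_write path is parsed synchronously precisely so the callback's error can travel back to the HTTP handler (see the issue 896 reference in the comment above). A minimal sketch of what that enables on the caller side — the handler shape, path and port are illustrative, not this repo's actual code:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// parse is a stand-in for a synchronous ParseStream: because the callback
// runs before parse returns, its error reaches the HTTP handler.
func parse(req *http.Request, callback func() error) error {
	if err := callback(); err != nil {
		return fmt.Errorf("error when processing imported data: %w", err)
	}
	return nil
}

func writeHandler(w http.ResponseWriter, r *http.Request) {
	err := parse(r, func() error {
		// Insert the decoded timeseries into storage here.
		return nil
	})
	if err != nil {
		// A storage-side failure surfaces as 503, so well-behaved remote_write
		// clients retry the request instead of dropping the data.
		http.Error(w, err.Error(), http.StatusServiceUnavailable)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/api/v1/write", writeHandler)
	log.Fatal(http.ListenAndServe(":8428", nil))
}
```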

View file

@ -10,7 +10,6 @@ import (
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil" "github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common" "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics" "github.com/VictoriaMetrics/metrics"
) )
@ -21,7 +20,6 @@ var maxLineLen = flagutil.NewBytes("import.maxLineLen", 100*1024*1024, "The maxi
// ParseStream parses /api/v1/import lines from req and calls callback for the parsed rows. // ParseStream parses /api/v1/import lines from req and calls callback for the parsed rows.
// //
// The callback can be called concurrently multiple times for streamed data from req. // The callback can be called concurrently multiple times for streamed data from req.
// The callback can be called after ParseStream returns.
// //
// callback shouldn't hold rows after returning. // callback shouldn't hold rows after returning.
func ParseStream(req *http.Request, callback func(rows []Row) error) error { func ParseStream(req *http.Request, callback func(rows []Row) error) error {
@ -38,11 +36,25 @@ func ParseStream(req *http.Request, callback func(rows []Row) error) error {
defer putStreamContext(ctx) defer putStreamContext(ctx)
for ctx.Read() { for ctx.Read() {
uw := getUnmarshalWork() uw := getUnmarshalWork()
uw.callback = callback uw.callback = func(rows []Row) {
if err := callback(rows); err != nil {
ctx.callbackErrLock.Lock()
if ctx.callbackErr == nil {
ctx.callbackErr = fmt.Errorf("error when processing imported data: %w", err)
}
ctx.callbackErrLock.Unlock()
}
ctx.wg.Done()
}
uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
ctx.wg.Add(1)
common.ScheduleUnmarshalWork(uw) common.ScheduleUnmarshalWork(uw)
} }
return ctx.Error() ctx.wg.Wait()
if err := ctx.Error(); err != nil {
return err
}
return ctx.callbackErr
} }
func (ctx *streamContext) Read() bool { func (ctx *streamContext) Read() bool {
@ -72,6 +84,10 @@ type streamContext struct {
reqBuf []byte reqBuf []byte
tailBuf []byte tailBuf []byte
err error err error
wg sync.WaitGroup
callbackErrLock sync.Mutex
callbackErr error
} }
func (ctx *streamContext) Error() error { func (ctx *streamContext) Error() error {
@ -86,6 +102,7 @@ func (ctx *streamContext) reset() {
ctx.reqBuf = ctx.reqBuf[:0] ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0] ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil ctx.err = nil
ctx.callbackErr = nil
} }
func getStreamContext(r io.Reader) *streamContext { func getStreamContext(r io.Reader) *streamContext {
@ -119,7 +136,7 @@ var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
type unmarshalWork struct { type unmarshalWork struct {
rows Rows rows Rows
callback func(rows []Row) error callback func(rows []Row)
reqBuf []byte reqBuf []byte
} }
@ -137,11 +154,7 @@ func (uw *unmarshalWork) Unmarshal() {
row := &rows[i] row := &rows[i]
rowsRead.Add(len(row.Timestamps)) rowsRead.Add(len(row.Timestamps))
} }
if err := uw.callback(rows); err != nil { uw.callback(rows)
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw) putUnmarshalWork(uw)
} }
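With the WaitGroup and callbackErr fields added above, ParseStream now returns only after every scheduled unmarshal work item has run its callback, and the first callback error is returned to the caller instead of being logged inside the worker. A minimal caller-side sketch; handleImport and insertRows are hypothetical names:

func handleImport(w http.ResponseWriter, req *http.Request) {
	if err := ParseStream(req, func(rows []Row) error {
		// Any error returned here is surfaced by ParseStream itself.
		return insertRows(rows)
	}); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
	}
}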


@ -722,6 +722,10 @@ func (db *indexDB) SearchTagKeysOnTimeRange(tr TimeRange, maxTagKeys int, deadli
keys := make([]string, 0, len(tks)) keys := make([]string, 0, len(tks))
for key := range tks { for key := range tks {
if key == string(graphiteReverseTagKey) {
// Do not show artificially created graphiteReverseTagKey to the caller.
continue
}
// Do not skip empty keys, since they are converted to __name__ // Do not skip empty keys, since they are converted to __name__
keys = append(keys, key) keys = append(keys, key)
} }
@ -833,6 +837,10 @@ func (db *indexDB) SearchTagKeys(maxTagKeys int, deadline uint64) ([]string, err
keys := make([]string, 0, len(tks)) keys := make([]string, 0, len(tks))
for key := range tks { for key := range tks {
if key == string(graphiteReverseTagKey) {
// Do not show artificially created graphiteReverseTagKey to the caller.
continue
}
// Do not skip empty keys, since they are converted to __name__ // Do not skip empty keys, since they are converted to __name__
keys = append(keys, key) keys = append(keys, key)
} }


@ -220,6 +220,15 @@ type SearchQuery struct {
TagFilterss [][]TagFilter TagFilterss [][]TagFilter
} }
// NewSearchQuery creates new search query for the given args.
func NewSearchQuery(start, end int64, tagFilterss [][]TagFilter) *SearchQuery {
return &SearchQuery{
MinTimestamp: start,
MaxTimestamp: end,
TagFilterss: tagFilterss,
}
}
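A minimal sketch of building a query with the new constructor; the function name, the one-hour window and the label filter values are illustrative only:

func exampleNewSearchQuery() *SearchQuery {
	end := time.Now().UnixNano() / 1e6 // millisecond timestamps
	start := end - 3600*1000           // one hour back
	tfs := []TagFilter{{Key: []byte("job"), Value: []byte("webservice")}}
	return NewSearchQuery(start, end, [][]TagFilter{tfs})
}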
// TagFilter represents a single tag filter from SearchQuery. // TagFilter represents a single tag filter from SearchQuery.
type TagFilter struct { type TagFilter struct {
Key []byte Key []byte


@ -191,8 +191,8 @@ func OpenStorage(path string, retentionMsecs int64) (*Storage, error) {
return s, nil return s, nil
} }
// debugFlush flushes recently added storage data, so it becomes visible to search. // DebugFlush flushes recently added storage data, so it becomes visible to search.
func (s *Storage) debugFlush() { func (s *Storage) DebugFlush() {
s.tb.flushRawRows() s.tb.flushRawRows()
s.idb().tb.DebugFlush() s.idb().tb.DebugFlush()
} }
@ -796,6 +796,41 @@ func nextRetentionDuration(retentionMonths int) time.Duration {
return deadline.Sub(t) return deadline.Sub(t)
} }
// SearchMetricNames returns metric names matching the given tfss on the given tr.
func (s *Storage) SearchMetricNames(tfss []*TagFilters, tr TimeRange, maxMetrics int, deadline uint64) ([]MetricName, error) {
tsids, err := s.searchTSIDs(tfss, tr, maxMetrics, deadline)
if err != nil {
return nil, err
}
if err = s.prefetchMetricNames(tsids, deadline); err != nil {
return nil, err
}
idb := s.idb()
is := idb.getIndexSearch(deadline)
defer idb.putIndexSearch(is)
mns := make([]MetricName, 0, len(tsids))
var metricName []byte
for i := range tsids {
metricID := tsids[i].MetricID
var err error
metricName, err = is.searchMetricName(metricName[:0], metricID)
if err != nil {
if err == io.EOF {
// Skip missing metricName for metricID.
// It should be automatically fixed. See indexDB.searchMetricName for details.
continue
}
return nil, fmt.Errorf("error when searching metricName for metricID=%d: %w", metricID, err)
}
mns = mns[:len(mns)+1]
mn := &mns[len(mns)-1]
if err = mn.Unmarshal(metricName); err != nil {
return nil, fmt.Errorf("cannot unmarshal metricName=%q: %w", metricName, err)
}
}
return mns, nil
}
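A minimal sketch of calling the new SearchMetricNames API with a single tag filter, reusing the TagFilters helpers and the package-internal noDeadline constant that the tests further below rely on; the function name, filter value and 1e5 limit are illustrative:

func exampleSearchMetricNames(s *Storage, tr TimeRange) error {
	tfs := NewTagFilters()
	if err := tfs.Add([]byte("job"), []byte("webservice"), false, false); err != nil {
		return err
	}
	mns, err := s.SearchMetricNames([]*TagFilters{tfs}, tr, 1e5, noDeadline)
	if err != nil {
		return err
	}
	for i := range mns {
		fmt.Printf("found metric with job=%s\n", mns[i].GetTagValue("job"))
	}
	return nil
}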
// searchTSIDs returns sorted TSIDs for the given tfss and the given tr. // searchTSIDs returns sorted TSIDs for the given tfss and the given tr.
func (s *Storage) searchTSIDs(tfss []*TagFilters, tr TimeRange, maxMetrics int, deadline uint64) ([]TSID, error) { func (s *Storage) searchTSIDs(tfss []*TagFilters, tr TimeRange, maxMetrics int, deadline uint64) ([]TSID, error) {
// Do not cache tfss -> tsids here, since the caching is performed // Do not cache tfss -> tsids here, since the caching is performed
@ -1070,7 +1105,6 @@ func (s *Storage) AddRows(mrs []MetricRow, precisionBits uint8) error {
if len(mrs) == 0 { if len(mrs) == 0 {
return nil return nil
} }
atomic.AddUint64(&rowsAddedTotal, uint64(len(mrs)))
// Limit the number of concurrent goroutines that may add rows to the storage. // Limit the number of concurrent goroutines that may add rows to the storage.
// This should prevent from out of memory errors and CPU trashing when too many // This should prevent from out of memory errors and CPU trashing when too many
@ -1107,6 +1141,7 @@ func (s *Storage) AddRows(mrs []MetricRow, precisionBits uint8) error {
<-addRowsConcurrencyCh <-addRowsConcurrencyCh
atomic.AddUint64(&rowsAddedTotal, uint64(len(mrs)))
return err return err
} }
@ -1118,6 +1153,64 @@ var (
addRowsTimeout = 30 * time.Second addRowsTimeout = 30 * time.Second
) )
// RegisterMetricNames registers all the metric names from mns in the indexdb, so they can be queried later.
//
// The MetricRow.Timestamp is used for registering the metric name starting from the given timestamp.
// The MetricRow.Value field is ignored.
func (s *Storage) RegisterMetricNames(mrs []MetricRow) error {
var (
tsid TSID
mn MetricName
metricName []byte
)
idb := s.idb()
is := idb.getIndexSearch(noDeadline)
defer idb.putIndexSearch(is)
for i := range mrs {
mr := &mrs[i]
if s.getTSIDFromCache(&tsid, mr.MetricNameRaw) {
// Fast path - mr.MetricNameRaw has been already registered.
continue
}
// Slow path - register mr.MetricNameRaw.
if err := mn.unmarshalRaw(mr.MetricNameRaw); err != nil {
return fmt.Errorf("cannot register the metric because cannot unmarshal MetricNameRaw %q: %w", mr.MetricNameRaw, err)
}
mn.sortTags()
metricName = mn.Marshal(metricName[:0])
if err := is.GetOrCreateTSIDByName(&tsid, metricName); err != nil {
return fmt.Errorf("cannot register the metric because cannot create TSID for metricName %q: %w", metricName, err)
}
s.putTSIDToCache(&tsid, mr.MetricNameRaw)
// Register the metric in per-day inverted index.
date := uint64(mr.Timestamp) / msecPerDay
metricID := tsid.MetricID
if s.dateMetricIDCache.Has(date, metricID) {
// Fast path: the metric has been already registered in per-day inverted index
continue
}
// Slow path: actually register the metric in per-day inverted index.
ok, err := is.hasDateMetricID(date, metricID)
if err != nil {
return fmt.Errorf("cannot register the metric in per-date inverted index because of error when locating (date=%d, metricID=%d) in database: %w",
date, metricID, err)
}
if !ok {
// The (date, metricID) entry is missing in the indexDB. Add it there.
if err := is.storeDateMetricID(date, metricID); err != nil {
return fmt.Errorf("cannot register the metric in per-date inverted index because of error when storing (date=%d, metricID=%d) in database: %w",
date, metricID, err)
}
}
// The metric must be added to cache only after it has been successfully added to indexDB.
s.dateMetricIDCache.Set(date, metricID)
}
return nil
}
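A minimal in-package sketch of registering a single metric name at the current timestamp, mirroring the test added further below; the function name, metric group and tag values are illustrative:

func exampleRegisterMetricName(s *Storage) error {
	var mn MetricName
	mn.MetricGroup = []byte("up")
	mn.Tags = []Tag{
		{[]byte("job"), []byte("webservice")},
	}
	mr := MetricRow{
		MetricNameRaw: mn.marshalRaw(nil),
		Timestamp:     timestampFromTime(time.Now()),
	}
	return s.RegisterMetricNames([]MetricRow{mr})
}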
func (s *Storage) add(rows []rawRow, mrs []MetricRow, precisionBits uint8) ([]rawRow, error) { func (s *Storage) add(rows []rawRow, mrs []MetricRow, precisionBits uint8) ([]rawRow, error) {
idb := s.idb() idb := s.idb()
rowsLen := len(rows) rowsLen := len(rows)


@ -5,6 +5,7 @@ import (
"math/rand" "math/rand"
"os" "os"
"reflect" "reflect"
"sort"
"strings" "strings"
"testing" "testing"
"testing/quick" "testing/quick"
@ -103,7 +104,7 @@ func TestUpdateCurrHourMetricIDs(t *testing.T) {
s.pendingHourEntries = &uint64set.Set{} s.pendingHourEntries = &uint64set.Set{}
return &s return &s
} }
t.Run("empty_pedning_metric_ids_stale_curr_hour", func(t *testing.T) { t.Run("empty_pending_metric_ids_stale_curr_hour", func(t *testing.T) {
s := newStorage() s := newStorage()
hour := uint64(timestampFromTime(time.Now())) / msecPerHour hour := uint64(timestampFromTime(time.Now())) / msecPerHour
hmOrig := &hourMetricIDs{ hmOrig := &hourMetricIDs{
@ -138,7 +139,7 @@ func TestUpdateCurrHourMetricIDs(t *testing.T) {
t.Fatalf("unexpected s.pendingHourEntries.Len(); got %d; want %d", s.pendingHourEntries.Len(), 0) t.Fatalf("unexpected s.pendingHourEntries.Len(); got %d; want %d", s.pendingHourEntries.Len(), 0)
} }
}) })
t.Run("empty_pedning_metric_ids_valid_curr_hour", func(t *testing.T) { t.Run("empty_pending_metric_ids_valid_curr_hour", func(t *testing.T) {
s := newStorage() s := newStorage()
hour := uint64(timestampFromTime(time.Now())) / msecPerHour hour := uint64(timestampFromTime(time.Now())) / msecPerHour
hmOrig := &hourMetricIDs{ hmOrig := &hourMetricIDs{
@ -557,7 +558,7 @@ func testStorageDeleteMetrics(s *Storage, workerNum int) error {
return fmt.Errorf("unexpected error when adding mrs: %w", err) return fmt.Errorf("unexpected error when adding mrs: %w", err)
} }
} }
s.debugFlush() s.DebugFlush()
// Verify tag values exist // Verify tag values exist
tvs, err := s.SearchTagValues(workerTag, 1e5, noDeadline) tvs, err := s.SearchTagValues(workerTag, 1e5, noDeadline)
@ -664,6 +665,167 @@ func checkTagKeys(tks []string, tksExpected map[string]bool) error {
return nil return nil
} }
func TestStorageRegisterMetricNamesSerial(t *testing.T) {
path := "TestStorageRegisterMetricNamesSerial"
s, err := OpenStorage(path, 0)
if err != nil {
t.Fatalf("cannot open storage: %s", err)
}
if err := testStorageRegisterMetricNames(s); err != nil {
t.Fatalf("unexpected error: %s", err)
}
s.MustClose()
if err := os.RemoveAll(path); err != nil {
t.Fatalf("cannot remove %q: %s", path, err)
}
}
func TestStorageRegisterMetricNamesConcurrent(t *testing.T) {
path := "TestStorageRegisterMetricNamesConcurrent"
s, err := OpenStorage(path, 0)
if err != nil {
t.Fatalf("cannot open storage: %s", err)
}
ch := make(chan error, 3)
for i := 0; i < cap(ch); i++ {
go func() {
ch <- testStorageRegisterMetricNames(s)
}()
}
for i := 0; i < cap(ch); i++ {
select {
case err := <-ch:
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
case <-time.After(10 * time.Second):
t.Fatalf("timeout")
}
}
s.MustClose()
if err := os.RemoveAll(path); err != nil {
t.Fatalf("cannot remove %q: %s", path, err)
}
}
func testStorageRegisterMetricNames(s *Storage) error {
const metricsPerAdd = 1e3
const addsCount = 10
addIDsMap := make(map[string]struct{})
for i := 0; i < addsCount; i++ {
var mrs []MetricRow
var mn MetricName
addID := fmt.Sprintf("%d", i)
addIDsMap[addID] = struct{}{}
mn.Tags = []Tag{
{[]byte("job"), []byte("webservice")},
{[]byte("instance"), []byte("1.2.3.4")},
{[]byte("add_id"), []byte(addID)},
}
now := timestampFromTime(time.Now())
for j := 0; j < metricsPerAdd; j++ {
mn.MetricGroup = []byte(fmt.Sprintf("metric_%d", j))
metricNameRaw := mn.marshalRaw(nil)
mr := MetricRow{
MetricNameRaw: metricNameRaw,
Timestamp: now,
}
mrs = append(mrs, mr)
}
if err := s.RegisterMetricNames(mrs); err != nil {
return fmt.Errorf("unexpected error in AddMetrics: %w", err)
}
}
var addIDsExpected []string
for k := range addIDsMap {
addIDsExpected = append(addIDsExpected, k)
}
sort.Strings(addIDsExpected)
// Verify the storage contains the added metric names.
s.DebugFlush()
// Verify that SearchTagKeys returns correct result.
tksExpected := []string{
"",
"add_id",
"instance",
"job",
}
tks, err := s.SearchTagKeys(100, noDeadline)
if err != nil {
return fmt.Errorf("error in SearchTagKeys: %w", err)
}
sort.Strings(tks)
if !reflect.DeepEqual(tks, tksExpected) {
return fmt.Errorf("unexpected tag keys returned from SearchTagKeys;\ngot\n%q\nwant\n%q", tks, tksExpected)
}
// Verify that SearchTagKeysOnTimeRange returns correct result.
now := timestampFromTime(time.Now())
start := now - msecPerDay
end := now + 60*1000
tr := TimeRange{
MinTimestamp: start,
MaxTimestamp: end,
}
tks, err = s.SearchTagKeysOnTimeRange(tr, 100, noDeadline)
if err != nil {
return fmt.Errorf("error in SearchTagKeysOnTimeRange: %w", err)
}
sort.Strings(tks)
if !reflect.DeepEqual(tks, tksExpected) {
return fmt.Errorf("unexpected tag keys returned from SearchTagKeysOnTimeRange;\ngot\n%q\nwant\n%q", tks, tksExpected)
}
// Verify that SearchTagValues returns correct result.
addIDs, err := s.SearchTagValues([]byte("add_id"), addsCount+100, noDeadline)
if err != nil {
return fmt.Errorf("error in SearchTagValues: %w", err)
}
sort.Strings(addIDs)
if !reflect.DeepEqual(addIDs, addIDsExpected) {
return fmt.Errorf("unexpected tag values returned from SearchTagValues;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
}
// Verify that SearchTagValuesOnTimeRange returns correct result.
addIDs, err = s.SearchTagValuesOnTimeRange([]byte("add_id"), tr, addsCount+100, noDeadline)
if err != nil {
return fmt.Errorf("error in SearchTagValuesOnTimeRange: %w", err)
}
sort.Strings(addIDs)
if !reflect.DeepEqual(addIDs, addIDsExpected) {
return fmt.Errorf("unexpected tag values returned from SearchTagValuesOnTimeRange;\ngot\n%q\nwant\n%q", addIDs, addIDsExpected)
}
// Verify that SearchMetricNames returns correct result.
tfs := NewTagFilters()
if err := tfs.Add([]byte("add_id"), []byte("0"), false, false); err != nil {
return fmt.Errorf("unexpected error in TagFilters.Add: %w", err)
}
mns, err := s.SearchMetricNames([]*TagFilters{tfs}, tr, metricsPerAdd*addsCount*100+100, noDeadline)
if err != nil {
return fmt.Errorf("error in SearchMetricNames: %w", err)
}
if len(mns) < metricsPerAdd {
return fmt.Errorf("unexpected number of metricNames returned from SearchMetricNames; got %d; want at least %d", len(mns), int(metricsPerAdd))
}
for i, mn := range mns {
addID := mn.GetTagValue("add_id")
if string(addID) != "0" {
return fmt.Errorf("unexpected addID for metricName #%d; got %q; want %q", i, addID, "0")
}
job := mn.GetTagValue("job")
if string(job) != "webservice" {
return fmt.Errorf("unexpected job for metricName #%d; got %q; want %q", i, job, "webservice")
}
}
return nil
}
func TestStorageAddRowsSerial(t *testing.T) { func TestStorageAddRowsSerial(t *testing.T) {
path := "TestStorageAddRowsSerial" path := "TestStorageAddRowsSerial"
s, err := OpenStorage(path, 0) s, err := OpenStorage(path, 0)


@ -2,6 +2,7 @@
.idea .idea
.vscode .vscode
*.swp *.swp
.history
# Test files # Test files
*.test *.test


@ -1,6 +1,20 @@
# Changes # Changes
## [0.72.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.71.0...v0.72.0) (2020-11-10)
### Features
* **all:** auto-regenerate gapics , refs [#3177](https://www.github.com/googleapis/google-cloud-go/issues/3177) [#3164](https://www.github.com/googleapis/google-cloud-go/issues/3164) [#3149](https://www.github.com/googleapis/google-cloud-go/issues/3149) [#3142](https://www.github.com/googleapis/google-cloud-go/issues/3142) [#3136](https://www.github.com/googleapis/google-cloud-go/issues/3136) [#3130](https://www.github.com/googleapis/google-cloud-go/issues/3130) [#3121](https://www.github.com/googleapis/google-cloud-go/issues/3121) [#3119](https://www.github.com/googleapis/google-cloud-go/issues/3119)
### Bug Fixes
* **all:** Update hand-written clients to not use WithEndpoint override ([#3111](https://www.github.com/googleapis/google-cloud-go/issues/3111)) ([f0cfd05](https://www.github.com/googleapis/google-cloud-go/commit/f0cfd0532f5204ff16f7bae406efa72603d16f44))
* **internal/godocfx:** rename README files to pkg-readme ([#3185](https://www.github.com/googleapis/google-cloud-go/issues/3185)) ([d3a8571](https://www.github.com/googleapis/google-cloud-go/commit/d3a85719be411b692aede3331abb29b5a7b3da9a))
## [0.71.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.70.0...v0.71.0) (2020-10-30) ## [0.71.0](https://www.github.com/googleapis/google-cloud-go/compare/v0.70.0...v0.71.0) (2020-10-30)


@ -131,8 +131,6 @@ To release a submodule:
1. On master, run `git log $CV.. -- datastore/` to list all the changes to the 1. On master, run `git log $CV.. -- datastore/` to list all the changes to the
submodule directory since the last release. submodule directory since the last release.
1. Edit `datastore/CHANGES.md` to include a summary of the changes. 1. Edit `datastore/CHANGES.md` to include a summary of the changes.
1. In `internal/version/version.go`, update `const Repo` to today's date with
the format `YYYYMMDD`.
1. In `internal/version` run `go generate`. 1. In `internal/version` run `go generate`.
1. Commit the changes, ignoring the generated `.go-r` file. Push to your fork, 1. Commit the changes, ignoring the generated `.go-r` file. Push to your fork,
and create a PR titled `chore(datastore): release $NV`. and create a PR titled `chore(datastore): release $NV`.

vendor/cloud.google.com/go/go.mod generated vendored

@ -13,11 +13,11 @@ require (
github.com/jstemmer/go-junit-report v0.9.1 github.com/jstemmer/go-junit-report v0.9.1
go.opencensus.io v0.22.5 go.opencensus.io v0.22.5
golang.org/x/lint v0.0.0-20200302205851-738671d3881b golang.org/x/lint v0.0.0-20200302205851-738671d3881b
golang.org/x/net v0.0.0-20201026091529-146b70c837a4 golang.org/x/net v0.0.0-20201031054903-ff519b6c9102
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43 golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43
golang.org/x/text v0.3.3 golang.org/x/text v0.3.4
golang.org/x/tools v0.0.0-20201030143252-cf7a54d06671 golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd
google.golang.org/api v0.34.0 google.golang.org/api v0.35.0
google.golang.org/genproto v0.0.0-20201030142918-24207fddd1c3 google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb
google.golang.org/grpc v1.33.1 google.golang.org/grpc v1.33.2
) )

vendor/cloud.google.com/go/go.sum generated vendored

@ -265,8 +265,8 @@ golang.org/x/net v0.0.0-20200822124328-c89045814202 h1:VvcQYSHwXgi7W+TpUR6A9g6Up
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974 h1:IX6qOQeG5uLjB/hjjwjedwfjND0hgjPMMyO1RoIXQNI= golang.org/x/net v0.0.0-20201021035429-f5854403a974 h1:IX6qOQeG5uLjB/hjjwjedwfjND0hgjPMMyO1RoIXQNI=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201026091529-146b70c837a4 h1:awiuzyrRjJDb+OXi9ceHO3SDxVoN3JER57mhtqkdQBs= golang.org/x/net v0.0.0-20201031054903-ff519b6c9102 h1:42cLlJJdEh+ySyeUUbEQ5bsTiq8voBeTuweGVkY6Puw=
golang.org/x/net v0.0.0-20201026091529-146b70c837a4/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421 h1:Wo7BWFiOk0QRFMLYMqJGFMd9CgUAcGx7V+qEg/h5IBI= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421 h1:Wo7BWFiOk0QRFMLYMqJGFMd9CgUAcGx7V+qEg/h5IBI=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@ -338,6 +338,8 @@ golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k= golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4 h1:0YWbFKbhXG/wIiuHDSKpS0Iy7FSA+u45VtBMfQcFTTc=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c h1:fqgJT0MGcGpPgpWU7VRdRjuArfcOvC4AoJmILihzhDg= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c h1:fqgJT0MGcGpPgpWU7VRdRjuArfcOvC4AoJmILihzhDg=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
@ -391,8 +393,8 @@ golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d h1:szSOL78iTCl0LF1AMjhSWJj
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE= golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20201030143252-cf7a54d06671 h1:8ylPbtgKXakJwDQKPjMJ6BSnlEIFViV0tYnu5/1Omk8= golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd h1:kJP9fbfkpUoA4y03Nxor8be+YbShcXP16fc7G4nlgpw=
golang.org/x/tools v0.0.0-20201030143252-cf7a54d06671/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
@ -423,8 +425,8 @@ google.golang.org/api v0.29.0 h1:BaiDisFir8O4IJxvAabCGGkQ6yCJegNQqSVoYUNAnbk=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM= google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0 h1:yfrXXP61wVuLb0vBcG6qaOoIoqYEzOQS8jum51jkv2w= google.golang.org/api v0.30.0 h1:yfrXXP61wVuLb0vBcG6qaOoIoqYEzOQS8jum51jkv2w=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc= google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/api v0.34.0 h1:k40adF3uR+6x/+hO5Dh4ZFUqFp67vxvbpafFiJxl10A= google.golang.org/api v0.35.0 h1:TBCmTTxUrRDA1iTctnK/fIeitxIZ+TQuaf0j29fmCGo=
google.golang.org/api v0.34.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg= google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508= google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@ -474,8 +476,8 @@ google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c h1:Lq4llNryJoaVFRm
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201030142918-24207fddd1c3 h1:sg8vLDNIxFPHTchfhH1E3AI32BL3f23oie38xUWnJM8= google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb h1:MoNcrN5yaH+35Ge8RUwFbL7ekwq9ED2fiDpgWKrR29w=
google.golang.org/genproto v0.0.0-20201030142918-24207fddd1c3/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/grpc v1.19.0 h1:cfg4PD8YEdSFnm7qLV4++93WcmhH2nIUhMjhdCvl3j8= google.golang.org/grpc v1.19.0 h1:cfg4PD8YEdSFnm7qLV4++93WcmhH2nIUhMjhdCvl3j8=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1 h1:Hz2g2wirWK7H0qIIhGIqRGTuMwTE8HEKFnDZZ7lm9NU= google.golang.org/grpc v1.20.1 h1:Hz2g2wirWK7H0qIIhGIqRGTuMwTE8HEKFnDZZ7lm9NU=
@ -497,8 +499,8 @@ google.golang.org/grpc v1.31.0 h1:T7P4R73V3SSDPhH7WW7ATbfViLtmamH0DKrP3f9AuDI=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.1 h1:SfXqXS5hkufcdZ/mHtYCh53P2b+92WQq/DZcKLgsFRs= google.golang.org/grpc v1.31.1 h1:SfXqXS5hkufcdZ/mHtYCh53P2b+92WQq/DZcKLgsFRs=
google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.1 h1:DGeFlSan2f+WEtCERJ4J9GJWk15TxUi8QGagfI87Xyc= google.golang.org/grpc v1.33.2 h1:EQyQC3sa8M+p6Ulc8yy9SWSS2GVwyRc83gAbG8lrl4o=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0= google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=


@ -26,7 +26,7 @@ import (
// Repo is the current version of the client libraries in this // Repo is the current version of the client libraries in this
// repo. It should be a date in YYYYMMDD format. // repo. It should be a date in YYYYMMDD format.
const Repo = "20201027" const Repo = "20201104"
// Go returns the Go runtime version. The returned string // Go returns the Go runtime version. The returned string
// has no whitespace. // has no whitespace.


@ -763,6 +763,7 @@ var awsPartition = partition{
"appsync": service{ "appsync": service{
Endpoints: endpoints{ Endpoints: endpoints{
"ap-east-1": endpoint{},
"ap-northeast-1": endpoint{}, "ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{}, "ap-northeast-2": endpoint{},
"ap-south-1": endpoint{}, "ap-south-1": endpoint{},
@ -775,6 +776,7 @@ var awsPartition = partition{
"eu-west-1": endpoint{}, "eu-west-1": endpoint{},
"eu-west-2": endpoint{}, "eu-west-2": endpoint{},
"eu-west-3": endpoint{}, "eu-west-3": endpoint{},
"me-south-1": endpoint{},
"sa-east-1": endpoint{}, "sa-east-1": endpoint{},
"us-east-1": endpoint{}, "us-east-1": endpoint{},
"us-east-2": endpoint{}, "us-east-2": endpoint{},
@ -2996,6 +2998,7 @@ var awsPartition = partition{
"glue": service{ "glue": service{
Endpoints: endpoints{ Endpoints: endpoints{
"af-south-1": endpoint{},
"ap-east-1": endpoint{}, "ap-east-1": endpoint{},
"ap-northeast-1": endpoint{}, "ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{}, "ap-northeast-2": endpoint{},
@ -6790,7 +6793,8 @@ var awscnPartition = partition{
"appsync": service{ "appsync": service{
Endpoints: endpoints{ Endpoints: endpoints{
"cn-north-1": endpoint{}, "cn-north-1": endpoint{},
"cn-northwest-1": endpoint{},
}, },
}, },
"athena": service{ "athena": service{
@ -8485,6 +8489,18 @@ var awsusgovPartition = partition{
"us-gov-west-1": endpoint{}, "us-gov-west-1": endpoint{},
}, },
}, },
"lakeformation": service{
Endpoints: endpoints{
"fips-us-gov-west-1": endpoint{
Hostname: "lakeformation-fips.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-west-1",
},
},
"us-gov-west-1": endpoint{},
},
},
"lambda": service{ "lambda": service{
Endpoints: endpoints{ Endpoints: endpoints{
@ -9593,6 +9609,14 @@ var awsisoPartition = partition{
"us-iso-east-1": endpoint{}, "us-iso-east-1": endpoint{},
}, },
}, },
"translate": service{
Defaults: endpoint{
Protocols: []string{"https"},
},
Endpoints: endpoints{
"us-iso-east-1": endpoint{},
},
},
"workspaces": service{ "workspaces": service{
Endpoints: endpoints{ Endpoints: endpoints{


@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go" const SDKName = "aws-sdk-go"
// SDKVersion is the version of this SDK // SDKVersion is the version of this SDK
const SDKVersion = "1.35.23" const SDKVersion = "1.35.28"

File diff suppressed because it is too large


@ -21,6 +21,12 @@ const (
// bucket access control lists (ACLs). // bucket access control lists (ACLs).
ErrCodeBucketAlreadyOwnedByYou = "BucketAlreadyOwnedByYou" ErrCodeBucketAlreadyOwnedByYou = "BucketAlreadyOwnedByYou"
// ErrCodeInvalidObjectState for service response error code
// "InvalidObjectState".
//
// Object is archived and inaccessible until restored.
ErrCodeInvalidObjectState = "InvalidObjectState"
// ErrCodeNoSuchBucket for service response error code // ErrCodeNoSuchBucket for service response error code
// "NoSuchBucket". // "NoSuchBucket".
// //


@ -96,6 +96,10 @@ type S3API interface {
DeleteBucketEncryptionWithContext(aws.Context, *s3.DeleteBucketEncryptionInput, ...request.Option) (*s3.DeleteBucketEncryptionOutput, error) DeleteBucketEncryptionWithContext(aws.Context, *s3.DeleteBucketEncryptionInput, ...request.Option) (*s3.DeleteBucketEncryptionOutput, error)
DeleteBucketEncryptionRequest(*s3.DeleteBucketEncryptionInput) (*request.Request, *s3.DeleteBucketEncryptionOutput) DeleteBucketEncryptionRequest(*s3.DeleteBucketEncryptionInput) (*request.Request, *s3.DeleteBucketEncryptionOutput)
DeleteBucketIntelligentTieringConfiguration(*s3.DeleteBucketIntelligentTieringConfigurationInput) (*s3.DeleteBucketIntelligentTieringConfigurationOutput, error)
DeleteBucketIntelligentTieringConfigurationWithContext(aws.Context, *s3.DeleteBucketIntelligentTieringConfigurationInput, ...request.Option) (*s3.DeleteBucketIntelligentTieringConfigurationOutput, error)
DeleteBucketIntelligentTieringConfigurationRequest(*s3.DeleteBucketIntelligentTieringConfigurationInput) (*request.Request, *s3.DeleteBucketIntelligentTieringConfigurationOutput)
DeleteBucketInventoryConfiguration(*s3.DeleteBucketInventoryConfigurationInput) (*s3.DeleteBucketInventoryConfigurationOutput, error) DeleteBucketInventoryConfiguration(*s3.DeleteBucketInventoryConfigurationInput) (*s3.DeleteBucketInventoryConfigurationOutput, error)
DeleteBucketInventoryConfigurationWithContext(aws.Context, *s3.DeleteBucketInventoryConfigurationInput, ...request.Option) (*s3.DeleteBucketInventoryConfigurationOutput, error) DeleteBucketInventoryConfigurationWithContext(aws.Context, *s3.DeleteBucketInventoryConfigurationInput, ...request.Option) (*s3.DeleteBucketInventoryConfigurationOutput, error)
DeleteBucketInventoryConfigurationRequest(*s3.DeleteBucketInventoryConfigurationInput) (*request.Request, *s3.DeleteBucketInventoryConfigurationOutput) DeleteBucketInventoryConfigurationRequest(*s3.DeleteBucketInventoryConfigurationInput) (*request.Request, *s3.DeleteBucketInventoryConfigurationOutput)
@ -164,6 +168,10 @@ type S3API interface {
GetBucketEncryptionWithContext(aws.Context, *s3.GetBucketEncryptionInput, ...request.Option) (*s3.GetBucketEncryptionOutput, error) GetBucketEncryptionWithContext(aws.Context, *s3.GetBucketEncryptionInput, ...request.Option) (*s3.GetBucketEncryptionOutput, error)
GetBucketEncryptionRequest(*s3.GetBucketEncryptionInput) (*request.Request, *s3.GetBucketEncryptionOutput) GetBucketEncryptionRequest(*s3.GetBucketEncryptionInput) (*request.Request, *s3.GetBucketEncryptionOutput)
GetBucketIntelligentTieringConfiguration(*s3.GetBucketIntelligentTieringConfigurationInput) (*s3.GetBucketIntelligentTieringConfigurationOutput, error)
GetBucketIntelligentTieringConfigurationWithContext(aws.Context, *s3.GetBucketIntelligentTieringConfigurationInput, ...request.Option) (*s3.GetBucketIntelligentTieringConfigurationOutput, error)
GetBucketIntelligentTieringConfigurationRequest(*s3.GetBucketIntelligentTieringConfigurationInput) (*request.Request, *s3.GetBucketIntelligentTieringConfigurationOutput)
GetBucketInventoryConfiguration(*s3.GetBucketInventoryConfigurationInput) (*s3.GetBucketInventoryConfigurationOutput, error) GetBucketInventoryConfiguration(*s3.GetBucketInventoryConfigurationInput) (*s3.GetBucketInventoryConfigurationOutput, error)
GetBucketInventoryConfigurationWithContext(aws.Context, *s3.GetBucketInventoryConfigurationInput, ...request.Option) (*s3.GetBucketInventoryConfigurationOutput, error) GetBucketInventoryConfigurationWithContext(aws.Context, *s3.GetBucketInventoryConfigurationInput, ...request.Option) (*s3.GetBucketInventoryConfigurationOutput, error)
GetBucketInventoryConfigurationRequest(*s3.GetBucketInventoryConfigurationInput) (*request.Request, *s3.GetBucketInventoryConfigurationOutput) GetBucketInventoryConfigurationRequest(*s3.GetBucketInventoryConfigurationInput) (*request.Request, *s3.GetBucketInventoryConfigurationOutput)
@ -272,6 +280,10 @@ type S3API interface {
ListBucketAnalyticsConfigurationsWithContext(aws.Context, *s3.ListBucketAnalyticsConfigurationsInput, ...request.Option) (*s3.ListBucketAnalyticsConfigurationsOutput, error) ListBucketAnalyticsConfigurationsWithContext(aws.Context, *s3.ListBucketAnalyticsConfigurationsInput, ...request.Option) (*s3.ListBucketAnalyticsConfigurationsOutput, error)
ListBucketAnalyticsConfigurationsRequest(*s3.ListBucketAnalyticsConfigurationsInput) (*request.Request, *s3.ListBucketAnalyticsConfigurationsOutput) ListBucketAnalyticsConfigurationsRequest(*s3.ListBucketAnalyticsConfigurationsInput) (*request.Request, *s3.ListBucketAnalyticsConfigurationsOutput)
ListBucketIntelligentTieringConfigurations(*s3.ListBucketIntelligentTieringConfigurationsInput) (*s3.ListBucketIntelligentTieringConfigurationsOutput, error)
ListBucketIntelligentTieringConfigurationsWithContext(aws.Context, *s3.ListBucketIntelligentTieringConfigurationsInput, ...request.Option) (*s3.ListBucketIntelligentTieringConfigurationsOutput, error)
ListBucketIntelligentTieringConfigurationsRequest(*s3.ListBucketIntelligentTieringConfigurationsInput) (*request.Request, *s3.ListBucketIntelligentTieringConfigurationsOutput)
ListBucketInventoryConfigurations(*s3.ListBucketInventoryConfigurationsInput) (*s3.ListBucketInventoryConfigurationsOutput, error) ListBucketInventoryConfigurations(*s3.ListBucketInventoryConfigurationsInput) (*s3.ListBucketInventoryConfigurationsOutput, error)
ListBucketInventoryConfigurationsWithContext(aws.Context, *s3.ListBucketInventoryConfigurationsInput, ...request.Option) (*s3.ListBucketInventoryConfigurationsOutput, error) ListBucketInventoryConfigurationsWithContext(aws.Context, *s3.ListBucketInventoryConfigurationsInput, ...request.Option) (*s3.ListBucketInventoryConfigurationsOutput, error)
ListBucketInventoryConfigurationsRequest(*s3.ListBucketInventoryConfigurationsInput) (*request.Request, *s3.ListBucketInventoryConfigurationsOutput) ListBucketInventoryConfigurationsRequest(*s3.ListBucketInventoryConfigurationsInput) (*request.Request, *s3.ListBucketInventoryConfigurationsOutput)
@ -339,6 +351,10 @@ type S3API interface {
PutBucketEncryptionWithContext(aws.Context, *s3.PutBucketEncryptionInput, ...request.Option) (*s3.PutBucketEncryptionOutput, error) PutBucketEncryptionWithContext(aws.Context, *s3.PutBucketEncryptionInput, ...request.Option) (*s3.PutBucketEncryptionOutput, error)
PutBucketEncryptionRequest(*s3.PutBucketEncryptionInput) (*request.Request, *s3.PutBucketEncryptionOutput) PutBucketEncryptionRequest(*s3.PutBucketEncryptionInput) (*request.Request, *s3.PutBucketEncryptionOutput)
PutBucketIntelligentTieringConfiguration(*s3.PutBucketIntelligentTieringConfigurationInput) (*s3.PutBucketIntelligentTieringConfigurationOutput, error)
PutBucketIntelligentTieringConfigurationWithContext(aws.Context, *s3.PutBucketIntelligentTieringConfigurationInput, ...request.Option) (*s3.PutBucketIntelligentTieringConfigurationOutput, error)
PutBucketIntelligentTieringConfigurationRequest(*s3.PutBucketIntelligentTieringConfigurationInput) (*request.Request, *s3.PutBucketIntelligentTieringConfigurationOutput)
PutBucketInventoryConfiguration(*s3.PutBucketInventoryConfigurationInput) (*s3.PutBucketInventoryConfigurationOutput, error) PutBucketInventoryConfiguration(*s3.PutBucketInventoryConfigurationInput) (*s3.PutBucketInventoryConfigurationOutput, error)
PutBucketInventoryConfigurationWithContext(aws.Context, *s3.PutBucketInventoryConfigurationInput, ...request.Option) (*s3.PutBucketInventoryConfigurationOutput, error) PutBucketInventoryConfigurationWithContext(aws.Context, *s3.PutBucketInventoryConfigurationInput, ...request.Option) (*s3.PutBucketInventoryConfigurationOutput, error)
PutBucketInventoryConfigurationRequest(*s3.PutBucketInventoryConfigurationInput) (*request.Request, *s3.PutBucketInventoryConfigurationOutput) PutBucketInventoryConfigurationRequest(*s3.PutBucketInventoryConfigurationInput) (*request.Request, *s3.PutBucketInventoryConfigurationOutput)


@ -42,16 +42,6 @@ func (f *decompressor) $FUNCNAME$() {
stateDict stateDict
) )
fr := f.r.($TYPE$) fr := f.r.($TYPE$)
moreBits := func() error {
c, err := fr.ReadByte()
if err != nil {
return noEOF(err)
}
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
return nil
}
switch f.stepState { switch f.stepState {
case stateInit: case stateInit:
@ -112,9 +102,7 @@ readLiteral:
} }
} }
var n uint // number of bits extra
var length int var length int
var err error
switch { switch {
case v < 256: case v < 256:
f.dict.writeByte(byte(v)) f.dict.writeByte(byte(v))
@ -131,25 +119,26 @@ readLiteral:
// otherwise, reference to older data // otherwise, reference to older data
case v < 265: case v < 265:
length = v - (257 - 3) length = v - (257 - 3)
n = 0
case v < 269:
length = v*2 - (265*2 - 11)
n = 1
case v < 273:
length = v*4 - (269*4 - 19)
n = 2
case v < 277:
length = v*8 - (273*8 - 35)
n = 3
case v < 281:
length = v*16 - (277*16 - 67)
n = 4
case v < 285:
length = v*32 - (281*32 - 131)
n = 5
case v < maxNumLit: case v < maxNumLit:
length = 258 val := decCodeToLen[(v - 257)]
n = 0 length = int(val.length) + 3
n := uint(val.extra)
for f.nb < n {
c, err := fr.ReadByte()
if err != nil {
if debugDecode {
fmt.Println("morebits n>0:", err)
}
f.err = err
return
}
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
}
length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
f.b >>= n & regSizeMaskUint32
f.nb -= n
default: default:
if debugDecode { if debugDecode {
fmt.Println(v, ">= maxNumLit") fmt.Println(v, ">= maxNumLit")
@ -157,45 +146,70 @@ readLiteral:
f.err = CorruptInputError(f.roffset) f.err = CorruptInputError(f.roffset)
return return
} }
if n > 0 {
for f.nb < n {
if err = moreBits(); err != nil {
if debugDecode {
fmt.Println("morebits n>0:", err)
}
f.err = err
return
}
}
length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
f.b >>= n & regSizeMaskUint32
f.nb -= n
}
var dist uint32 var dist uint32
if f.hd == nil { if f.hd == nil {
for f.nb < 5 { for f.nb < 5 {
if err = f.moreBits(); err != nil { c, err := fr.ReadByte()
if err != nil {
if debugDecode { if debugDecode {
fmt.Println("morebits f.nb<5:", err) fmt.Println("morebits f.nb<5:", err)
} }
f.err = err f.err = err
return return
} }
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
} }
dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3))) dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3)))
f.b >>= 5 f.b >>= 5
f.nb -= 5 f.nb -= 5
} else { } else {
sym, err := f.huffSym(f.hd) // Since a huffmanDecoder can be empty or be composed of a degenerate tree
if err != nil { // with single element, huffSym must error on these two edge cases. In both
if debugDecode { // cases, the chunks slice will be 0 for the invalid sequence, leading it
fmt.Println("huffsym:", err) // satisfy the n == 0 check below.
n := uint(f.hd.maxRead)
// Optimization. Compiler isn't smart enough to keep f.b,f.nb in registers,
// but is smart enough to keep local variables in registers, so use nb and b,
// inline call to moreBits and reassign b,nb back to f on return.
nb, b := f.nb, f.b
for {
for nb < n {
c, err := fr.ReadByte()
if err != nil {
f.b = b
f.nb = nb
f.err = noEOF(err)
return
}
f.roffset++
b |= uint32(c) << (nb & regSizeMaskUint32)
nb += 8
}
chunk := f.hd.chunks[b&(huffmanNumChunks-1)]
n = uint(chunk & huffmanCountMask)
if n > huffmanChunkBits {
chunk = f.hd.links[chunk>>huffmanValueShift][(b>>huffmanChunkBits)&f.hd.linkMask]
n = uint(chunk & huffmanCountMask)
}
if n <= nb {
if n == 0 {
f.b = b
f.nb = nb
if debugDecode {
fmt.Println("huffsym: n==0")
}
f.err = CorruptInputError(f.roffset)
return
}
f.b = b >> (n & regSizeMaskUint32)
f.nb = nb - n
dist = uint32(chunk >> huffmanValueShift)
break
} }
f.err = err
return
} }
dist = uint32(sym)
} }
switch { switch {
@ -206,13 +220,17 @@ readLiteral:
// have 1 bit in bottom of dist, need nb more. // have 1 bit in bottom of dist, need nb more.
extra := (dist & 1) << (nb & regSizeMaskUint32) extra := (dist & 1) << (nb & regSizeMaskUint32)
for f.nb < nb { for f.nb < nb {
if err = f.moreBits(); err != nil { c, err := fr.ReadByte()
if err != nil {
if debugDecode { if debugDecode {
fmt.Println("morebits f.nb<nb:", err) fmt.Println("morebits f.nb<nb:", err)
} }
f.err = err f.err = err
return return
} }
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
} }
extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1) extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1)
f.b >>= nb & regSizeMaskUint32 f.b >>= nb & regSizeMaskUint32


@ -29,6 +29,13 @@ const (
debugDecode = false debugDecode = false
) )
// Value of length - 3 and extra bits.
type lengthExtra struct {
length, extra uint8
}
var decCodeToLen = [32]lengthExtra{{length: 0x0, extra: 0x0}, {length: 0x1, extra: 0x0}, {length: 0x2, extra: 0x0}, {length: 0x3, extra: 0x0}, {length: 0x4, extra: 0x0}, {length: 0x5, extra: 0x0}, {length: 0x6, extra: 0x0}, {length: 0x7, extra: 0x0}, {length: 0x8, extra: 0x1}, {length: 0xa, extra: 0x1}, {length: 0xc, extra: 0x1}, {length: 0xe, extra: 0x1}, {length: 0x10, extra: 0x2}, {length: 0x14, extra: 0x2}, {length: 0x18, extra: 0x2}, {length: 0x1c, extra: 0x2}, {length: 0x20, extra: 0x3}, {length: 0x28, extra: 0x3}, {length: 0x30, extra: 0x3}, {length: 0x38, extra: 0x3}, {length: 0x40, extra: 0x4}, {length: 0x50, extra: 0x4}, {length: 0x60, extra: 0x4}, {length: 0x70, extra: 0x4}, {length: 0x80, extra: 0x5}, {length: 0xa0, extra: 0x5}, {length: 0xc0, extra: 0x5}, {length: 0xe0, extra: 0x5}, {length: 0xff, extra: 0x0}, {length: 0x0, extra: 0x0}, {length: 0x0, extra: 0x0}, {length: 0x0, extra: 0x0}}
// Initialize the fixedHuffmanDecoder only once upon first use. // Initialize the fixedHuffmanDecoder only once upon first use.
var fixedOnce sync.Once var fixedOnce sync.Once
var fixedHuffmanDecoder huffmanDecoder var fixedHuffmanDecoder huffmanDecoder
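The decCodeToLen table added above replaces the per-range switch over DEFLATE length codes in the decoders: for a length code v, decCodeToLen[v-257].length+3 is the base match length and decCodeToLen[v-257].extra is the number of extra bits to read from the stream. A standalone sketch for length code 266, with values taken from the table; the function name is hypothetical:

func exampleDecodeLengthCode() {
	val := decCodeToLen[266-257]      // {length: 0xa, extra: 0x1}
	baseLength := int(val.length) + 3 // 13
	extraBits := uint(val.extra)      // one extra bit, so the final length is 13 or 14
	_, _ = baseLength, extraBits
}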


@ -20,16 +20,6 @@ func (f *decompressor) huffmanBytesBuffer() {
stateDict stateDict
) )
fr := f.r.(*bytes.Buffer) fr := f.r.(*bytes.Buffer)
moreBits := func() error {
c, err := fr.ReadByte()
if err != nil {
return noEOF(err)
}
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
return nil
}
switch f.stepState { switch f.stepState {
case stateInit: case stateInit:
@ -90,9 +80,7 @@ readLiteral:
} }
} }
var n uint // number of bits extra
var length int var length int
var err error
switch { switch {
case v < 256: case v < 256:
f.dict.writeByte(byte(v)) f.dict.writeByte(byte(v))
@ -109,25 +97,26 @@ readLiteral:
// otherwise, reference to older data // otherwise, reference to older data
case v < 265: case v < 265:
length = v - (257 - 3) length = v - (257 - 3)
n = 0
case v < 269:
length = v*2 - (265*2 - 11)
n = 1
case v < 273:
length = v*4 - (269*4 - 19)
n = 2
case v < 277:
length = v*8 - (273*8 - 35)
n = 3
case v < 281:
length = v*16 - (277*16 - 67)
n = 4
case v < 285:
length = v*32 - (281*32 - 131)
n = 5
case v < maxNumLit: case v < maxNumLit:
length = 258 val := decCodeToLen[(v - 257)]
n = 0 length = int(val.length) + 3
n := uint(val.extra)
for f.nb < n {
c, err := fr.ReadByte()
if err != nil {
if debugDecode {
fmt.Println("morebits n>0:", err)
}
f.err = err
return
}
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
}
length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
f.b >>= n & regSizeMaskUint32
f.nb -= n
default: default:
if debugDecode { if debugDecode {
fmt.Println(v, ">= maxNumLit") fmt.Println(v, ">= maxNumLit")
@ -135,45 +124,70 @@ readLiteral:
f.err = CorruptInputError(f.roffset) f.err = CorruptInputError(f.roffset)
return return
} }
if n > 0 {
for f.nb < n {
if err = moreBits(); err != nil {
if debugDecode {
fmt.Println("morebits n>0:", err)
}
f.err = err
return
}
}
length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
f.b >>= n & regSizeMaskUint32
f.nb -= n
}
var dist uint32 var dist uint32
if f.hd == nil { if f.hd == nil {
for f.nb < 5 { for f.nb < 5 {
if err = f.moreBits(); err != nil { c, err := fr.ReadByte()
if err != nil {
if debugDecode { if debugDecode {
fmt.Println("morebits f.nb<5:", err) fmt.Println("morebits f.nb<5:", err)
} }
f.err = err f.err = err
return return
} }
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
} }
dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3))) dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3)))
f.b >>= 5 f.b >>= 5
f.nb -= 5 f.nb -= 5
} else { } else {
sym, err := f.huffSym(f.hd) // Since a huffmanDecoder can be empty or be composed of a degenerate tree
if err != nil { // with single element, huffSym must error on these two edge cases. In both
if debugDecode { // cases, the chunks slice will be 0 for the invalid sequence, leading it
fmt.Println("huffsym:", err) // satisfy the n == 0 check below.
n := uint(f.hd.maxRead)
// Optimization. Compiler isn't smart enough to keep f.b,f.nb in registers,
// but is smart enough to keep local variables in registers, so use nb and b,
// inline call to moreBits and reassign b,nb back to f on return.
nb, b := f.nb, f.b
for {
for nb < n {
c, err := fr.ReadByte()
if err != nil {
f.b = b
f.nb = nb
f.err = noEOF(err)
return
}
f.roffset++
b |= uint32(c) << (nb & regSizeMaskUint32)
nb += 8
}
chunk := f.hd.chunks[b&(huffmanNumChunks-1)]
n = uint(chunk & huffmanCountMask)
if n > huffmanChunkBits {
chunk = f.hd.links[chunk>>huffmanValueShift][(b>>huffmanChunkBits)&f.hd.linkMask]
n = uint(chunk & huffmanCountMask)
}
if n <= nb {
if n == 0 {
f.b = b
f.nb = nb
if debugDecode {
fmt.Println("huffsym: n==0")
}
f.err = CorruptInputError(f.roffset)
return
}
f.b = b >> (n & regSizeMaskUint32)
f.nb = nb - n
dist = uint32(chunk >> huffmanValueShift)
break
} }
f.err = err
return
} }
dist = uint32(sym)
} }
switch { switch {
@ -184,13 +198,17 @@ readLiteral:
// have 1 bit in bottom of dist, need nb more. // have 1 bit in bottom of dist, need nb more.
extra := (dist & 1) << (nb & regSizeMaskUint32) extra := (dist & 1) << (nb & regSizeMaskUint32)
for f.nb < nb { for f.nb < nb {
if err = f.moreBits(); err != nil { c, err := fr.ReadByte()
if err != nil {
if debugDecode { if debugDecode {
fmt.Println("morebits f.nb<nb:", err) fmt.Println("morebits f.nb<nb:", err)
} }
f.err = err f.err = err
return return
} }
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
} }
extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1) extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1)
f.b >>= nb & regSizeMaskUint32 f.b >>= nb & regSizeMaskUint32
@ -246,16 +264,6 @@ func (f *decompressor) huffmanBytesReader() {
stateDict stateDict
) )
fr := f.r.(*bytes.Reader) fr := f.r.(*bytes.Reader)
moreBits := func() error {
c, err := fr.ReadByte()
if err != nil {
return noEOF(err)
}
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
return nil
}
switch f.stepState { switch f.stepState {
case stateInit: case stateInit:
@ -316,9 +324,7 @@ readLiteral:
} }
} }
var n uint // number of bits extra
var length int var length int
var err error
switch { switch {
case v < 256: case v < 256:
f.dict.writeByte(byte(v)) f.dict.writeByte(byte(v))
@ -335,25 +341,26 @@ readLiteral:
// otherwise, reference to older data // otherwise, reference to older data
case v < 265: case v < 265:
length = v - (257 - 3) length = v - (257 - 3)
n = 0
case v < 269:
length = v*2 - (265*2 - 11)
n = 1
case v < 273:
length = v*4 - (269*4 - 19)
n = 2
case v < 277:
length = v*8 - (273*8 - 35)
n = 3
case v < 281:
length = v*16 - (277*16 - 67)
n = 4
case v < 285:
length = v*32 - (281*32 - 131)
n = 5
case v < maxNumLit: case v < maxNumLit:
length = 258 val := decCodeToLen[(v - 257)]
n = 0 length = int(val.length) + 3
n := uint(val.extra)
for f.nb < n {
c, err := fr.ReadByte()
if err != nil {
if debugDecode {
fmt.Println("morebits n>0:", err)
}
f.err = err
return
}
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
}
length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
f.b >>= n & regSizeMaskUint32
f.nb -= n
default: default:
if debugDecode { if debugDecode {
fmt.Println(v, ">= maxNumLit") fmt.Println(v, ">= maxNumLit")
@ -361,45 +368,70 @@ readLiteral:
f.err = CorruptInputError(f.roffset) f.err = CorruptInputError(f.roffset)
return return
} }
if n > 0 {
for f.nb < n {
if err = moreBits(); err != nil {
if debugDecode {
fmt.Println("morebits n>0:", err)
}
f.err = err
return
}
}
length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
f.b >>= n & regSizeMaskUint32
f.nb -= n
}
var dist uint32 var dist uint32
if f.hd == nil { if f.hd == nil {
for f.nb < 5 { for f.nb < 5 {
if err = f.moreBits(); err != nil { c, err := fr.ReadByte()
if err != nil {
if debugDecode { if debugDecode {
fmt.Println("morebits f.nb<5:", err) fmt.Println("morebits f.nb<5:", err)
} }
f.err = err f.err = err
return return
} }
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
} }
dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3))) dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3)))
f.b >>= 5 f.b >>= 5
f.nb -= 5 f.nb -= 5
} else { } else {
sym, err := f.huffSym(f.hd) // Since a huffmanDecoder can be empty or be composed of a degenerate tree
if err != nil { // with single element, huffSym must error on these two edge cases. In both
if debugDecode { // cases, the chunks slice will be 0 for the invalid sequence, leading it
fmt.Println("huffsym:", err) // satisfy the n == 0 check below.
n := uint(f.hd.maxRead)
// Optimization. Compiler isn't smart enough to keep f.b,f.nb in registers,
// but is smart enough to keep local variables in registers, so use nb and b,
// inline call to moreBits and reassign b,nb back to f on return.
nb, b := f.nb, f.b
for {
for nb < n {
c, err := fr.ReadByte()
if err != nil {
f.b = b
f.nb = nb
f.err = noEOF(err)
return
}
f.roffset++
b |= uint32(c) << (nb & regSizeMaskUint32)
nb += 8
}
chunk := f.hd.chunks[b&(huffmanNumChunks-1)]
n = uint(chunk & huffmanCountMask)
if n > huffmanChunkBits {
chunk = f.hd.links[chunk>>huffmanValueShift][(b>>huffmanChunkBits)&f.hd.linkMask]
n = uint(chunk & huffmanCountMask)
}
if n <= nb {
if n == 0 {
f.b = b
f.nb = nb
if debugDecode {
fmt.Println("huffsym: n==0")
}
f.err = CorruptInputError(f.roffset)
return
}
f.b = b >> (n & regSizeMaskUint32)
f.nb = nb - n
dist = uint32(chunk >> huffmanValueShift)
break
} }
f.err = err
return
} }
dist = uint32(sym)
} }
switch { switch {
@ -410,13 +442,17 @@ readLiteral:
// have 1 bit in bottom of dist, need nb more. // have 1 bit in bottom of dist, need nb more.
extra := (dist & 1) << (nb & regSizeMaskUint32) extra := (dist & 1) << (nb & regSizeMaskUint32)
for f.nb < nb { for f.nb < nb {
if err = f.moreBits(); err != nil { c, err := fr.ReadByte()
if err != nil {
if debugDecode { if debugDecode {
fmt.Println("morebits f.nb<nb:", err) fmt.Println("morebits f.nb<nb:", err)
} }
f.err = err f.err = err
return return
} }
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
} }
extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1) extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1)
f.b >>= nb & regSizeMaskUint32 f.b >>= nb & regSizeMaskUint32
@ -472,16 +508,6 @@ func (f *decompressor) huffmanBufioReader() {
stateDict stateDict
) )
fr := f.r.(*bufio.Reader) fr := f.r.(*bufio.Reader)
moreBits := func() error {
c, err := fr.ReadByte()
if err != nil {
return noEOF(err)
}
f.roffset++
f.b |= uint32(c) << f.nb
f.nb += 8
return nil
}
switch f.stepState { switch f.stepState {
case stateInit: case stateInit:
@ -542,9 +568,7 @@ readLiteral:
} }
} }
var n uint // number of bits extra
var length int var length int
var err error
switch { switch {
case v < 256: case v < 256:
f.dict.writeByte(byte(v)) f.dict.writeByte(byte(v))
@@ -561,25 +585,26 @@ readLiteral:
 		// otherwise, reference to older data
 		case v < 265:
 			length = v - (257 - 3)
-			n = 0
-		case v < 269:
-			length = v*2 - (265*2 - 11)
-			n = 1
-		case v < 273:
-			length = v*4 - (269*4 - 19)
-			n = 2
-		case v < 277:
-			length = v*8 - (273*8 - 35)
-			n = 3
-		case v < 281:
-			length = v*16 - (277*16 - 67)
-			n = 4
-		case v < 285:
-			length = v*32 - (281*32 - 131)
-			n = 5
 		case v < maxNumLit:
-			length = 258
-			n = 0
+			val := decCodeToLen[(v - 257)]
+			length = int(val.length) + 3
+			n := uint(val.extra)
+			for f.nb < n {
+				c, err := fr.ReadByte()
+				if err != nil {
+					if debugDecode {
+						fmt.Println("morebits n>0:", err)
+					}
+					f.err = err
+					return
+				}
+				f.roffset++
+				f.b |= uint32(c) << f.nb
+				f.nb += 8
+			}
+			length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
+			f.b >>= n & regSizeMaskUint32
+			f.nb -= n
 		default:
 			if debugDecode {
 				fmt.Println(v, ">= maxNumLit")
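For context: the hunk above replaces the arithmetic switch ladder for DEFLATE length codes with a lookup into `decCodeToLen`. The contents of that table are not part of this diff, but per RFC 1951 each length code 257..285 maps to a base match length plus a count of extra bits, which is presumably what the table encodes. The sketch below reconstructs that mapping; the `lenCode` type and its field names are assumptions made for illustration, not the vendored definitions.

```go
package main

import "fmt"

// lenCode mirrors what a table entry like decCodeToLen is assumed to hold:
// the base match length minus 3 and the number of extra bits to read.
type lenCode struct {
	length uint8 // base length - 3
	extra  uint8 // extra bits following the length code
}

// buildLenTable derives the RFC 1951 length-code table for codes 257..285,
// i.e. the values the replaced switch ladder computed arithmetically.
func buildLenTable() [29]lenCode {
	var t [29]lenCode
	base, extra := 3, 0
	for code := 257; code <= 284; code++ {
		t[code-257] = lenCode{length: uint8(base - 3), extra: uint8(extra)}
		base += 1 << extra
		// The extra-bit count grows by one every four codes starting at 265.
		if code >= 264 && (code-264)%4 == 0 {
			extra++
		}
	}
	t[285-257] = lenCode{length: 258 - 3, extra: 0} // code 285 is the fixed length 258
	return t
}

func main() {
	t := buildLenTable()
	for _, code := range []int{257, 265, 284, 285} {
		e := t[code-257]
		fmt.Printf("code %d: base length %d, %d extra bits\n", code, int(e.length)+3, e.extra)
	}
}
```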
@@ -587,45 +612,70 @@ readLiteral:
 			f.err = CorruptInputError(f.roffset)
 			return
 		}
-		if n > 0 {
-			for f.nb < n {
-				if err = moreBits(); err != nil {
-					if debugDecode {
-						fmt.Println("morebits n>0:", err)
-					}
-					f.err = err
-					return
-				}
-			}
-			length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
-			f.b >>= n & regSizeMaskUint32
-			f.nb -= n
-		}
 		var dist uint32
 		if f.hd == nil {
 			for f.nb < 5 {
-				if err = f.moreBits(); err != nil {
+				c, err := fr.ReadByte()
+				if err != nil {
 					if debugDecode {
 						fmt.Println("morebits f.nb<5:", err)
 					}
 					f.err = err
 					return
 				}
+				f.roffset++
+				f.b |= uint32(c) << f.nb
+				f.nb += 8
 			}
 			dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3)))
 			f.b >>= 5
 			f.nb -= 5
 		} else {
-			sym, err := f.huffSym(f.hd)
-			if err != nil {
-				if debugDecode {
-					fmt.Println("huffsym:", err)
-				}
-				f.err = err
-				return
-			}
-			dist = uint32(sym)
+			// Since a huffmanDecoder can be empty or be composed of a degenerate tree
+			// with single element, huffSym must error on these two edge cases. In both
+			// cases, the chunks slice will be 0 for the invalid sequence, leading it
+			// satisfy the n == 0 check below.
+			n := uint(f.hd.maxRead)
+			// Optimization. Compiler isn't smart enough to keep f.b,f.nb in registers,
+			// but is smart enough to keep local variables in registers, so use nb and b,
+			// inline call to moreBits and reassign b,nb back to f on return.
+			nb, b := f.nb, f.b
+			for {
+				for nb < n {
+					c, err := fr.ReadByte()
+					if err != nil {
+						f.b = b
+						f.nb = nb
+						f.err = noEOF(err)
+						return
+					}
+					f.roffset++
+					b |= uint32(c) << (nb & regSizeMaskUint32)
+					nb += 8
+				}
+				chunk := f.hd.chunks[b&(huffmanNumChunks-1)]
+				n = uint(chunk & huffmanCountMask)
+				if n > huffmanChunkBits {
+					chunk = f.hd.links[chunk>>huffmanValueShift][(b>>huffmanChunkBits)&f.hd.linkMask]
+					n = uint(chunk & huffmanCountMask)
+				}
+				if n <= nb {
+					if n == 0 {
+						f.b = b
+						f.nb = nb
+						if debugDecode {
+							fmt.Println("huffsym: n==0")
+						}
+						f.err = CorruptInputError(f.roffset)
+						return
+					}
+					f.b = b >> (n & regSizeMaskUint32)
+					f.nb = nb - n
+					dist = uint32(chunk >> huffmanValueShift)
+					break
+				}
+			}
 		}
 		switch {
@@ -636,13 +686,17 @@ readLiteral:
 			// have 1 bit in bottom of dist, need nb more.
 			extra := (dist & 1) << (nb & regSizeMaskUint32)
 			for f.nb < nb {
-				if err = f.moreBits(); err != nil {
+				c, err := fr.ReadByte()
+				if err != nil {
 					if debugDecode {
 						fmt.Println("morebits f.nb<nb:", err)
 					}
 					f.err = err
 					return
 				}
+				f.roffset++
+				f.b |= uint32(c) << f.nb
+				f.nb += 8
 			}
 			extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1)
 			f.b >>= nb & regSizeMaskUint32
@@ -698,16 +752,6 @@ func (f *decompressor) huffmanStringsReader() {
 		stateDict
 	)
 	fr := f.r.(*strings.Reader)
-	moreBits := func() error {
-		c, err := fr.ReadByte()
-		if err != nil {
-			return noEOF(err)
-		}
-		f.roffset++
-		f.b |= uint32(c) << f.nb
-		f.nb += 8
-		return nil
-	}
 	switch f.stepState {
 	case stateInit:
@@ -768,9 +812,7 @@ readLiteral:
 			}
 		}
-		var n uint // number of bits extra
 		var length int
-		var err error
 		switch {
 		case v < 256:
 			f.dict.writeByte(byte(v))
@@ -787,25 +829,26 @@ readLiteral:
 		// otherwise, reference to older data
 		case v < 265:
 			length = v - (257 - 3)
-			n = 0
-		case v < 269:
-			length = v*2 - (265*2 - 11)
-			n = 1
-		case v < 273:
-			length = v*4 - (269*4 - 19)
-			n = 2
-		case v < 277:
-			length = v*8 - (273*8 - 35)
-			n = 3
-		case v < 281:
-			length = v*16 - (277*16 - 67)
-			n = 4
-		case v < 285:
-			length = v*32 - (281*32 - 131)
-			n = 5
 		case v < maxNumLit:
-			length = 258
-			n = 0
+			val := decCodeToLen[(v - 257)]
+			length = int(val.length) + 3
+			n := uint(val.extra)
+			for f.nb < n {
+				c, err := fr.ReadByte()
+				if err != nil {
+					if debugDecode {
+						fmt.Println("morebits n>0:", err)
+					}
+					f.err = err
+					return
+				}
+				f.roffset++
+				f.b |= uint32(c) << f.nb
+				f.nb += 8
+			}
+			length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
+			f.b >>= n & regSizeMaskUint32
+			f.nb -= n
 		default:
 			if debugDecode {
 				fmt.Println(v, ">= maxNumLit")
@@ -813,45 +856,70 @@ readLiteral:
 			f.err = CorruptInputError(f.roffset)
 			return
 		}
-		if n > 0 {
-			for f.nb < n {
-				if err = moreBits(); err != nil {
-					if debugDecode {
-						fmt.Println("morebits n>0:", err)
-					}
-					f.err = err
-					return
-				}
-			}
-			length += int(f.b & uint32(1<<(n&regSizeMaskUint32)-1))
-			f.b >>= n & regSizeMaskUint32
-			f.nb -= n
-		}
 		var dist uint32
 		if f.hd == nil {
 			for f.nb < 5 {
-				if err = f.moreBits(); err != nil {
+				c, err := fr.ReadByte()
+				if err != nil {
 					if debugDecode {
 						fmt.Println("morebits f.nb<5:", err)
 					}
 					f.err = err
 					return
 				}
+				f.roffset++
+				f.b |= uint32(c) << f.nb
+				f.nb += 8
 			}
 			dist = uint32(bits.Reverse8(uint8(f.b & 0x1F << 3)))
 			f.b >>= 5
 			f.nb -= 5
 		} else {
-			sym, err := f.huffSym(f.hd)
-			if err != nil {
-				if debugDecode {
-					fmt.Println("huffsym:", err)
-				}
-				f.err = err
-				return
-			}
-			dist = uint32(sym)
+			// Since a huffmanDecoder can be empty or be composed of a degenerate tree
+			// with single element, huffSym must error on these two edge cases. In both
+			// cases, the chunks slice will be 0 for the invalid sequence, leading it
+			// satisfy the n == 0 check below.
+			n := uint(f.hd.maxRead)
+			// Optimization. Compiler isn't smart enough to keep f.b,f.nb in registers,
+			// but is smart enough to keep local variables in registers, so use nb and b,
+			// inline call to moreBits and reassign b,nb back to f on return.
+			nb, b := f.nb, f.b
+			for {
+				for nb < n {
+					c, err := fr.ReadByte()
+					if err != nil {
+						f.b = b
+						f.nb = nb
+						f.err = noEOF(err)
+						return
+					}
+					f.roffset++
+					b |= uint32(c) << (nb & regSizeMaskUint32)
+					nb += 8
+				}
+				chunk := f.hd.chunks[b&(huffmanNumChunks-1)]
+				n = uint(chunk & huffmanCountMask)
+				if n > huffmanChunkBits {
+					chunk = f.hd.links[chunk>>huffmanValueShift][(b>>huffmanChunkBits)&f.hd.linkMask]
+					n = uint(chunk & huffmanCountMask)
+				}
+				if n <= nb {
+					if n == 0 {
+						f.b = b
+						f.nb = nb
+						if debugDecode {
+							fmt.Println("huffsym: n==0")
+						}
+						f.err = CorruptInputError(f.roffset)
+						return
+					}
+					f.b = b >> (n & regSizeMaskUint32)
+					f.nb = nb - n
+					dist = uint32(chunk >> huffmanValueShift)
+					break
+				}
+			}
 		}
 		switch {
@@ -862,13 +930,17 @@ readLiteral:
 			// have 1 bit in bottom of dist, need nb more.
 			extra := (dist & 1) << (nb & regSizeMaskUint32)
 			for f.nb < nb {
-				if err = f.moreBits(); err != nil {
+				c, err := fr.ReadByte()
+				if err != nil {
 					if debugDecode {
 						fmt.Println("morebits f.nb<nb:", err)
 					}
 					f.err = err
 					return
 				}
+				f.roffset++
+				f.b |= uint32(c) << f.nb
+				f.nb += 8
 			}
 			extra |= f.b & uint32(1<<(nb&regSizeMaskUint32)-1)
 			f.b >>= nb & regSizeMaskUint32


@@ -54,11 +54,11 @@ To create a writer with default options, do like this:
 ```Go
 // Compress input to output.
 func Compress(in io.Reader, out io.Writer) error {
-    w, err := NewWriter(output)
+    enc, err := zstd.NewWriter(out)
     if err != nil {
         return err
     }
-    _, err := io.Copy(w, input)
+    _, err = io.Copy(enc, in)
     if err != nil {
         enc.Close()
         return err
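The hunk above fixes the README example so the encoder is created from the function's actual arguments and closed on the error path. Since the snippet shown in the diff is truncated, here is a hedged, self-contained version of such a compress helper using `github.com/klauspost/compress/zstd`; it is not a verbatim copy of the README, and the `main` driver is only for demonstration.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"

	"github.com/klauspost/compress/zstd"
)

// Compress copies in to out through a zstd encoder, closing the encoder
// to flush buffered data even when io.Copy fails.
func Compress(in io.Reader, out io.Writer) error {
	enc, err := zstd.NewWriter(out)
	if err != nil {
		return err
	}
	_, err = io.Copy(enc, in)
	if err != nil {
		enc.Close()
		return err
	}
	return enc.Close()
}

func main() {
	var buf bytes.Buffer
	if err := Compress(strings.NewReader("hello zstd"), &buf); err != nil {
		panic(err)
	}
	fmt.Println("compressed bytes:", buf.Len())
}
```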


@@ -323,19 +323,23 @@ func (d *Decoder) DecodeAll(input, dst []byte) ([]byte, error) {
 	}
 	if frame.FrameContentSize > 0 && frame.FrameContentSize < 1<<30 {
 		// Never preallocate moe than 1 GB up front.
-		if uint64(cap(dst)) < frame.FrameContentSize {
+		if cap(dst)-len(dst) < int(frame.FrameContentSize) {
 			dst2 := make([]byte, len(dst), len(dst)+int(frame.FrameContentSize))
 			copy(dst2, dst)
 			dst = dst2
 		}
 	}
 	if cap(dst) == 0 {
-		// Allocate window size * 2 by default if nothing is provided and we didn't get frame content size.
-		size := frame.WindowSize * 2
+		// Allocate len(input) * 2 by default if nothing is provided
+		// and we didn't get frame content size.
+		size := len(input) * 2
 		// Cap to 1 MB.
 		if size > 1<<20 {
 			size = 1 << 20
 		}
+		if uint64(size) > d.o.maxDecodedSize {
+			size = int(d.o.maxDecodedSize)
+		}
 		dst = make([]byte, 0, size)
 	}
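The change above makes `DecodeAll` grow the destination relative to `len(dst)` when the frame advertises its content size, and otherwise fall back to `len(input) * 2`, capped at 1 MB and at the decoder's configured maximum decoded size. A hedged usage sketch of the calling pattern this optimizes, reusing the destination buffer across calls (the payload and variable names are illustrative only):

```go
package main

import (
	"fmt"

	"github.com/klauspost/compress/zstd"
)

func main() {
	// Produce some compressed input; a nil writer is fine when only EncodeAll is used.
	enc, _ := zstd.NewWriter(nil)
	compressed := enc.EncodeAll([]byte("example payload"), nil)
	enc.Close()

	// A nil reader is fine when only DecodeAll is used.
	dec, err := zstd.NewReader(nil)
	if err != nil {
		panic(err)
	}
	defer dec.Close()

	// Reuse buf across calls; DecodeAll appends to the slice it is given,
	// so callers typically pass buf[:0] and let the decoder grow capacity.
	var buf []byte
	out, err := dec.DecodeAll(compressed, buf[:0])
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```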


@@ -299,6 +299,17 @@ func (p *TextParser) startLabelName() stateFn {
 		p.parseError(fmt.Sprintf("expected '=' after label name, found %q", p.currentByte))
 		return nil
 	}
+	// Check for duplicate label names.
+	labels := make(map[string]struct{})
+	for _, l := range p.currentMetric.Label {
+		lName := l.GetName()
+		if _, exists := labels[lName]; !exists {
+			labels[lName] = struct{}{}
+		} else {
+			p.parseError(fmt.Sprintf("duplicate label names for metric %q", p.currentMF.GetName()))
+			return nil
+		}
+	}
 	return p.startLabelValue
 }
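With the added check, an exposition line that repeats a label name is rejected as a parse error instead of being accepted silently. A hedged illustration using the text parser from `github.com/prometheus/common/expfmt` (the metric name and label values are made up; the exact error message comes from the code above):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/prometheus/common/expfmt"
)

func main() {
	var p expfmt.TextParser
	// The label name "code" appears twice on one sample, which the
	// duplicate-label check now reports as a parse error.
	input := `http_requests_total{code="200",code="500"} 3` + "\n"
	_, err := p.TextToMetricFamilies(strings.NewReader(input))
	fmt.Println(err)
}
```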


@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 // +build arm,darwin
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 // +build arm64,darwin
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -2,7 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -4,7 +4,7 @@
 // +build linux
 // +build arm64
-// +build !gccgo
+// +build gc
 #include "textflag.h"

@@ -4,7 +4,7 @@
 // +build linux
 // +build mips64 mips64le
-// +build !gccgo
+// +build gc
 #include "textflag.h"
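The repeated one-line change in these assembly files swaps `// +build !gccgo` for `// +build gc`: the `gc` tag is set only when building with the standard Go toolchain, so the constraint now states positively which toolchain the assembly targets instead of merely excluding gccgo. A minimal hedged sketch of the usual pairing convention, with a gc-only file and a fallback for other toolchains; the package name, file names, and constant are invented for illustration and use plain Go instead of assembly.

```go
// impl_gc.go: built only with the standard (gc) toolchain,
// which is where an assembly-backed implementation would normally live.
// +build gc

package toolchaininfo

const implementation = "gc"
```

```go
// impl_other.go: pure-Go fallback selected for any other toolchain.
// +build !gc

package toolchaininfo

const implementation = "non-gc toolchain"
```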

Some files were not shown because too many files have changed in this diff.