Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files
This commit is contained in: commit 8df5a3c5f6
94 changed files with 3415 additions and 1516 deletions
13 .github/workflows/main.yml (vendored)

@@ -43,7 +43,20 @@ jobs:
make victoria-metrics-arm64
make vmutils
GOOS=freebsd go build -mod=vendor ./app/victoria-metrics
GOOS=freebsd go build -mod=vendor ./app/vmagent
GOOS=freebsd go build -mod=vendor ./app/vmalert
GOOS=freebsd go build -mod=vendor ./app/vmbackup
GOOS=freebsd go build -mod=vendor ./app/vmrestore
GOOS=openbsd go build -mod=vendor ./app/victoria-metrics
GOOS=openbsd go build -mod=vendor ./app/vmagent
GOOS=openbsd go build -mod=vendor ./app/vmalert
GOOS=openbsd go build -mod=vendor ./app/vmbackup
GOOS=openbsd go build -mod=vendor ./app/vmrestore
GOOS=darwin go build -mod=vendor ./app/victoria-metrics
GOOS=darwin go build -mod=vendor ./app/vmagent
GOOS=darwin go build -mod=vendor ./app/vmalert
GOOS=darwin go build -mod=vendor ./app/vmbackup
GOOS=darwin go build -mod=vendor ./app/vmrestore
- name: Publish coverage
  uses: codecov/codecov-action@v1.0.6
  with:
36 CHANGELOG.md (new file)

@@ -0,0 +1,36 @@
# tip


# [v1.42.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.42.0)

* FEATURE: use all the available CPU cores when accepting data via a single TCP connection
  for [all the supported protocols](https://victoriametrics.github.io/#how-to-import-time-series-data).
  Previously data ingested via a single TCP connection could use only a single CPU core. This could limit data ingestion performance.
  The main benefit of this feature is that data can be imported at max speed via a single connection - there is no need to open multiple concurrent
  connections to VictoriaMetrics or [vmagent](https://victoriametrics.github.io/vmagent.html) in order to achieve the maximum data ingestion speed.
* FEATURE: cluster: improve performance for the data ingestion path from `vminsert` to `vmstorage` nodes. The maximum data ingestion performance
  for a single connection between `vminsert` and `vmstorage` node scales with the number of available CPU cores on the `vmstorage` side.
  This should help with https://github.com/VictoriaMetrics/VictoriaMetrics/issues/791 .
* FEATURE: add the ability to export / import data in native format via `/api/v1/export/native` and `/api/v1/import/native`.
  This is the most optimized approach for data migration between VictoriaMetrics instances. Both single-node and cluster instances are supported.
  See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/787#issuecomment-700632551 .
* FEATURE: add `reduce_mem_usage` query option to `/api/v1/export` in order to reduce memory usage during data export / import.
  See [these docs](https://victoriametrics.github.io/#how-to-export-data-in-json-line-format) for details.
* FEATURE: improve performance for the `/api/v1/series` handler when it returns a big number of time series.
* FEATURE: add `vm_merge_need_free_disk_space` metric, which can be used for estimating the number of deferred background data merges due to the lack of free disk space.
  See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/686 .
* FEATURE: add OpenBSD support. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/785 .

* BUGFIX: properly apply the `-search.maxStalenessInterval` command-line flag value. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/784 .
* BUGFIX: fix displaying data in Grafana tables. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/720 .
* BUGFIX: do not adjust the number of detected CPU cores found at `/sys/devices/system/cpu/online`.
  The adjustment was increasing the resulting GOMAXPROCS by 1, which looked confusing to users.
  See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/685#issuecomment-698595309 .
* BUGFIX: vmagent: do not show `-remoteWrite.url` in initial logs if `-remoteWrite.showURL` isn't set. See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/773 .
* BUGFIX: properly handle the case when [/metrics/find](https://victoriametrics.github.io/#graphite-metrics-api-usage) finds both a leaf and a node for the given `query=prefix.*`.
  In this case only the node must be returned, with the trailing dot stripped from its id, as carbonapi does.


# Previous releases

See [releases page](https://github.com/VictoriaMetrics/VictoriaMetrics/releases).
302 README.md

@@ -77,7 +77,8 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
  if `-graphiteListenAddr` is set.
* [OpenTSDB put message](#sending-data-via-telnet-put-protocol) if `-opentsdbListenAddr` is set.
* [HTTP OpenTSDB /api/put requests](#sending-opentsdb-data-via-http-apiput-requests) if `-opentsdbHTTPListenAddr` is set.
* [/api/v1/import](#how-to-import-time-series-data).
  * [JSON line format](#how-to-import-data-in-json-line-format).
  * [Native binary format](#how-to-import-data-in-native-format).
  * [Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format).
  * [Arbitrary CSV data](#how-to-import-csv-data).
* Supports metrics' relabeling. See [these docs](#relabeling) for details.
@@ -100,8 +101,6 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
* [How to send data from Graphite-compatible agents such as StatsD](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd)
* [Querying Graphite data](#querying-graphite-data)
* [How to send data from OpenTSDB-compatible agents](#how-to-send-data-from-opentsdb-compatible-agents)
* [How to import data in Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format)
* [How to import CSV data](#how-to-import-csv-data)
* [Prometheus querying API usage](#prometheus-querying-api-usage)
* [Prometheus querying API enhancements](#prometheus-querying-api-enhancements)
* [Graphite Metrics API usage](#graphite-metrics-api-usage)
@@ -117,7 +116,13 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
* [How to delete time series](#how-to-delete-time-series)
* [Forced merge](#forced-merge)
* [How to export time series](#how-to-export-time-series)
  * [How to export data in native format](#how-to-export-data-in-native-format)
  * [How to export data in JSON line format](#how-to-export-data-in-json-line-format)
* [How to import time series data](#how-to-import-time-series-data)
  * [How to import data in native format](#how-to-import-data-in-native-format)
  * [How to import data in json line format](#how-to-import-data-in-json-line-format)
  * [How to import CSV data](#how-to-import-csv-data)
  * [How to import data in Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format)
* [Relabeling](#relabeling)
* [Federation](#federation)
* [Capacity planning](#capacity-planning)
@@ -345,7 +350,7 @@ curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'ht
```

An arbitrary number of lines delimited by '\n' (aka newline char) may be sent in a single request.
-After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
+After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
@@ -381,7 +386,7 @@ echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003

VictoriaMetrics sets the current time if the timestamp is omitted.
An arbitrary number of lines delimited by `\n` (aka newline char) may be sent in one go.
-After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
+After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
@@ -425,7 +430,7 @@ echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost
```

An arbitrary number of lines delimited by `\n` (aka newline char) may be sent in one go.
-After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
+After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
@@ -460,7 +465,7 @@ Example for writing multiple data points in a single request:
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```

-After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
+After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
@@ -475,91 +480,6 @@ The `/api/v1/export` endpoint should return the following response:
```


-### How to import CSV data
-
-Arbitrary CSV data can be imported via `/api/v1/import/csv`. The CSV data is imported according to the provided `format` query arg.
-The `format` query arg must contain comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon:
-
-```
-<column_pos>:<type>:<context>
-```
-
-* `<column_pos>` is the position of the CSV column (field). Column numbering starts from 1. The order of parsing rules may be arbitrary.
-* `<type>` describes the column type. Supported types are:
-  * `metric` - the corresponding CSV column at `<column_pos>` contains metric value, which must be integer or floating-point number.
-    The metric name is read from the `<context>`. CSV line must have at least a single metric field. Multiple metric fields per CSV line is OK.
-  * `label` - the corresponding CSV column at `<column_pos>` contains label value. The label name is read from the `<context>`.
-    CSV line may have arbitrary number of label fields. All these labels are attached to all the configured metrics.
-  * `time` - the corresponding CSV column at `<column_pos>` contains metric time. CSV line may contain either one or zero columns with time.
-    If CSV line has no time, then the current time is used. The time is applied to all the configured metrics.
-    The format of the time is configured via `<context>`. Supported time formats are:
-    * `unix_s` - unix timestamp in seconds.
-    * `unix_ms` - unix timestamp in milliseconds.
-    * `unix_ns` - unix timestamp in nanoseconds. Note that VictoriaMetrics rounds the timestamp to milliseconds.
-    * `rfc3339` - timestamp in [RFC3339](https://tools.ietf.org/html/rfc3339) format, i.e. `2006-01-02T15:04:05Z`.
-    * `custom:<layout>` - custom layout for the timestamp. The `<layout>` may contain arbitrary time layout according to [time.Parse rules in Go](https://golang.org/pkg/time/#Parse).
-
-Each request to `/api/v1/import/csv` may contain arbitrary number of CSV lines.
-
-Example for importing CSV data via `/api/v1/import/csv`:
-
-```bash
-curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
-curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
-```
-
-After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:
-
-```bash
-curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
-```
-
-The following response should be returned:
-```bash
-{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
-{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
-{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
-{"metric":{"__name__":"ask","market":"NYSE","ticker":"GOOG"},"values":[1.23],"timestamps":[1583865146495]}
-```
-
-Extra labels may be added to all the imported lines by passing `extra_label=name=value` query args.
-For example, `/api/v1/import/csv?extra_label=foo=bar` would add `"foo":"bar"` label to all the imported lines.
-
-Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for detail.
-
-
-### How to import data in Prometheus exposition format
-
-VictoriaMetrics accepts data in [Prometheus exposition format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)
-via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:
-
-```bash
-curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
-```
-
-The following command may be used for verifying the imported data:
-
-```bash
-curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
-```
-
-It should return something like the following:
-
-```
-{"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
-```
-
-Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
-For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics.
-
-If timestamp is missing in `<metric> <value> <timestamp>` Prometheus exposition format line, then the current timestamp is used during data ingestion.
-It can be overriden by passing unix timestamp in *milliseconds* via `timestamp` query arg. For example, `/api/v1/import/prometheus?timestamp=1594370496905`.
-
-VictoriaMetrics accepts arbitrary number of lines in a single request to `/api/v1/import/prometheus`, i.e. it supports data streaming.
-
-VictoriaMetrics also may scrape Prometheus targets - see [these docs](#how-to-scrape-prometheus-exporters-such-as-node-exporter).
-

### Prometheus querying API usage

VictoriaMetrics supports the following handlers from [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/):
@@ -660,8 +580,8 @@ Run `make package-victoria-metrics`. It builds `victoriametrics/victoria-metrics
`<PKG_TAG>` is auto-generated image tag, which depends on source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-victoria-metrics`.

-By default the image is built on top of [alpine](https://hub.docker.com/_/alpine) image for improved debuggability.
-It is possible to build the package on top of any other base image by setting it via `<ROOT_IMAGE>` environment variable.
+The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
+by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```bash
@@ -741,13 +661,13 @@ It is better using `-retentionPeriod` command-line flag for efficient pruning of

### Forced merge

-VictoriaMetrics performs [data compations in background](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)
+VictoriaMetrics performs [data compactions in background](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)
in order to keep good performance characteristics when accepting new data. These compactions (merges) are performed independently on per-month partitions.
This means that compactions are stopped for per-month partitions if no new data is ingested into these partitions.
Sometimes it is necessary to trigger compactions for old partitions. For instance, in order to free up disk space occupied by [deleted time series](#how-to-delete-time-series).
In this case forced compaction may be initiated on the specified per-month partition by sending a request to `/internal/force_merge?partition_prefix=YYYY_MM`,
where `YYYY_MM` is per-month partition name. For example, `http://victoriametrics:8428/internal/force_merge?partition_prefix=2020_08` would initiate forced
-merge for August 2020 partition. The call to `/internal/force_merge` returns immediately, while the corresponding forced merges continues running in background.
+merge for August 2020 partition. The call to `/internal/force_merge` returns immediately, while the corresponding forced merge continues running in background.
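
For example, a forced merge of the August 2020 partition may be triggered like this (a minimal sketch, assuming single-node VictoriaMetrics listening on localhost:8428):

```bash
# The call returns immediately; the forced merge continues in background.
curl 'http://localhost:8428/internal/force_merge?partition_prefix=2020_08'
```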

Forced merges may require additional CPU, disk IO and storage space resources. It is unnecessary to run forced merge under normal conditions,
since VictoriaMetrics automatically performs [optimal merges in background](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)

@@ -756,11 +676,35 @@ when new data is ingested into it.

### How to export time series

-Send a request to `http://<victoriametrics-addr>:8428/api/v1/export?match[]=<timeseries_selector_for_export>`,
+VictoriaMetrics provides the following handlers for exporting data:
+
+* `/api/v1/export/native` for exporting data in native binary format. This is the most efficient format for data export.
+  See [these docs](#how-to-export-data-in-native-format) for details.
+* `/api/v1/export` for exporting data in JSON line format. See [these docs](#how-to-export-data-in-json-line-format) for details.
+
+
+#### How to export data in native format
+
+Send a request to `http://<victoriametrics-addr>:8428/api/v1/export/native?match[]=<timeseries_selector_for_export>`,
where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to export. Use `{__name__!=""}` selector for fetching all the time series.

Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.

+The exported data can be imported to VictoriaMetrics via [/api/v1/import/native](#how-to-import-data-in-native-format).
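
For instance, assuming single-node VictoriaMetrics at localhost:8428, all the data may be exported in native format like this:

```bash
# Export all the time series in native binary format into a local file.
curl http://localhost:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin
```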


#### How to export data in JSON line format

Consider [exporting data in native format](#how-to-export-data-in-native-format) if big amounts of data must be migrated between VictoriaMetrics instances,
since exporting in native format usually consumes lower amounts of CPU and memory resources, while the resulting exported data occupies lower amounts of disk space.

In order to export data in JSON line format, send a request to `http://<victoriametrics-addr>:8428/api/v1/export?match[]=<timeseries_selector_for_export>`,
where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to export. Use `{__name__!=""}` selector for fetching all the time series.
The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON).
-Each JSON line would contain data for a single time series. An example output:
+Each JSON line contains samples for a single time series. An example output:

```jsonl
{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}

@@ -770,8 +714,9 @@ Each JSON line would contain data for a single time series. An example output:
Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.

-Optional `max_rows_per_line` arg may be added to the request in order to limit the maximum number of rows exported per each JSON line.
-By default each JSON line contains all the rows for a single time series.
+Optional `max_rows_per_line` arg may be added to the request for limiting the maximum number of rows exported per each JSON line.
+Optional `reduce_mem_usage=1` arg may be added to the request for reducing memory usage when exporting big number of time series.
+In this case the output may contain multiple lines with distinct samples for the same time series.
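
A sketch combining these options (the `max_rows_per_line` value here is purely illustrative):

```bash
# Export in JSON line format with at most 1000 rows per line and reduced memory usage.
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={__name__!=""}' -d 'max_rows_per_line=1000' -d 'reduce_mem_usage=1'
```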

Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporting big amounts
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:

@@ -782,22 +727,58 @@ curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]=

The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag.

-Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-time-series-data).
+Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-data-in-json-line-format).


### How to import time series data

Time series data can be imported via any supported ingestion protocol:

-* [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write)
-* [Influx line protocol](#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf)
-* [Graphite plaintext protocol](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd)
-* [OpenTSDB telnet put protocol](#sending-data-via-telnet-put-protocol)
-* [OpenTSDB http /api/put](#sending-opentsdb-data-via-http-apiput-requests)
-* `/api/v1/import` http POST handler, which accepts data from [/api/v1/export](#how-to-export-time-series).
-* `/api/v1/import/csv` http POST handler, which accepts CSV data. See [these docs](#how-to-import-csv-data) for details.
-* `/api/v1/import/prometheus` http POST handler, which accepts data in Prometheus exposition format. See [these docs](#how-to-import-data-in-prometheus-exposition-format) for details.
+* [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write).
+* Influx line protocol. See [these docs](#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for details.
+* Graphite plaintext protocol. See [these docs](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) for details.
+* OpenTSDB telnet put protocol. See [these docs](#sending-data-via-telnet-put-protocol) for details.
+* OpenTSDB http `/api/put` protocol. See [these docs](#sending-opentsdb-data-via-http-apiput-requests) for details.
+* `/api/v1/import` for importing data obtained from [/api/v1/export](#how-to-export-data-in-json-line-format).
+  See [these docs](#how-to-import-data-in-json-line-format) for details.
+* `/api/v1/import/native` for importing data obtained from [/api/v1/export/native](#how-to-export-data-in-native-format).
+  See [these docs](#how-to-import-data-in-native-format) for details.
+* `/api/v1/import/csv` for importing arbitrary CSV data. See [these docs](#how-to-import-csv-data) for details.
+* `/api/v1/import/prometheus` for importing data in Prometheus exposition format. See [these docs](#how-to-import-data-in-prometheus-exposition-format) for details.

-The most efficient protocol for importing data into VictoriaMetrics is `/api/v1/import`. Example for importing data obtained via `/api/v1/export`:

+#### How to import data in native format
+
+The most efficient protocol for importing data into VictoriaMetrics is `/api/v1/import/native`.
+Example for importing data obtained via [/api/v1/export/native](#how-to-export-data-in-native-format):

```bash
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin

# Import the data to <destination-victoriametrics>:
curl -X POST http://destination-victoriametrics:8428/api/v1/import/native -T exported_data.bin
```

Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/native` for importing gzipped data:

```bash
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin.gz

# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/native -T exported_data.bin.gz
```

Extra labels may be added to all the imported time series by passing `extra_label=name=value` query args.
For example, `/api/v1/import/native?extra_label=foo=bar` would add `"foo":"bar"` label to all the imported time series.
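
For instance, previously exported native data may be imported with an extra label like this (the label name and value are illustrative):

```bash
# Attach cluster="dev" to every imported time series.
curl -X POST 'http://destination-victoriametrics:8428/api/v1/import/native?extra_label=cluster=dev' -T exported_data.bin
```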

Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for details.


#### How to import data in JSON line format

Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):

```bash
# Export the data from <source-victoriametrics>:

@@ -822,14 +803,108 @@ For example, `/api/v1/import?extra_label=foo=bar` would add `"foo":"bar"` label

Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for details.

Each request to `/api/v1/import` can load up to a single vCPU core on VictoriaMetrics. Import speed can be improved by splitting the original file into smaller parts
and importing them concurrently. Note that the original file must be split on newlines.
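
A possible sketch of such a concurrent import (assuming GNU coreutils and a JSON line export saved as `exported_data.jsonl`; the file names and chunk size are illustrative):

```bash
# Split the exported file into 100k-line chunks; split -l always cuts on newlines.
split -l 100000 exported_data.jsonl chunk_

# Import all the chunks concurrently and wait for completion.
for f in chunk_*; do
  curl -s -X POST http://localhost:8428/api/v1/import -T "$f" &
done
wait
```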

+#### How to import CSV data
+
+Arbitrary CSV data can be imported via `/api/v1/import/csv`. The CSV data is imported according to the provided `format` query arg.
+The `format` query arg must contain comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon:
+
+```
+<column_pos>:<type>:<context>
+```
+
+* `<column_pos>` is the position of the CSV column (field). Column numbering starts from 1. The order of parsing rules may be arbitrary.
+* `<type>` describes the column type. Supported types are:
+  * `metric` - the corresponding CSV column at `<column_pos>` contains metric value, which must be integer or floating-point number.
+    The metric name is read from the `<context>`. CSV line must have at least a single metric field. Multiple metric fields per CSV line is OK.
+  * `label` - the corresponding CSV column at `<column_pos>` contains label value. The label name is read from the `<context>`.
+    CSV line may have arbitrary number of label fields. All these labels are attached to all the configured metrics.
+  * `time` - the corresponding CSV column at `<column_pos>` contains metric time. CSV line may contain either one or zero columns with time.
+    If CSV line has no time, then the current time is used. The time is applied to all the configured metrics.
+    The format of the time is configured via `<context>`. Supported time formats are:
+    * `unix_s` - unix timestamp in seconds.
+    * `unix_ms` - unix timestamp in milliseconds.
+    * `unix_ns` - unix timestamp in nanoseconds. Note that VictoriaMetrics rounds the timestamp to milliseconds.
+    * `rfc3339` - timestamp in [RFC3339](https://tools.ietf.org/html/rfc3339) format, i.e. `2006-01-02T15:04:05Z`.
+    * `custom:<layout>` - custom layout for the timestamp. The `<layout>` may contain arbitrary time layout according to [time.Parse rules in Go](https://golang.org/pkg/time/#Parse).
+
+Each request to `/api/v1/import/csv` may contain arbitrary number of CSV lines.
+
+Example for importing CSV data via `/api/v1/import/csv`:
+
+```bash
+curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
+curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
+```
+
+After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:
+
+```bash
+curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
+```
+
+The following response should be returned:
+```bash
+{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
+{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
+{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
+{"metric":{"__name__":"ask","market":"NYSE","ticker":"GOOG"},"values":[1.23],"timestamps":[1583865146495]}
+```
+
+Extra labels may be added to all the imported lines by passing `extra_label=name=value` query args.
+For example, `/api/v1/import/csv?extra_label=foo=bar` would add `"foo":"bar"` label to all the imported lines.
+
+Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for details.
+
+
+#### How to import data in Prometheus exposition format
+
+VictoriaMetrics accepts data in [Prometheus exposition format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)
+via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:
+
+```bash
+curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
+```
+
+The following command may be used for verifying the imported data:
+
+```bash
+curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
+```
+
+It should return something like the following:
+
+```
+{"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
+```
+
+Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
+For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics.
+
+If timestamp is missing in `<metric> <value> <timestamp>` Prometheus exposition format line, then the current timestamp is used during data ingestion.
+It can be overridden by passing unix timestamp in *milliseconds* via `timestamp` query arg. For example, `/api/v1/import/prometheus?timestamp=1594370496905`.
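
For example, the following sketch imports a line with an explicit timestamp override (the timestamp value is taken from the example above):

```bash
# Ingest the sample with the given timestamp in milliseconds instead of the current time.
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus?timestamp=1594370496905'
```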
+
+VictoriaMetrics accepts arbitrary number of lines in a single request to `/api/v1/import/prometheus`, i.e. it supports data streaming.
+
+Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for details.
+
+VictoriaMetrics also may scrape Prometheus targets - see [these docs](#how-to-scrape-prometheus-exporters-such-as-node-exporter).


### Relabeling

VictoriaMetrics supports Prometheus-compatible relabeling for all the ingested metrics if `-relabelConfig` command-line flag points
to a file containing a list of [relabel_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) entries.
Example contents for `-relabelConfig` file:
```yml
# relabel_config.yml
- target_label: cluster
  replacement: dev
- action: drop
  source_labels: [__meta_kubernetes_pod_container_init]
  regex: true
```
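
Assuming the example above is saved as `relabel_config.yml` (the file name is illustrative), single-node VictoriaMetrics may then be started with it like this:

```bash
# Apply the relabeling rules to all the ingested metrics.
# The binary name may differ depending on how VictoriaMetrics was built.
/path/to/victoria-metrics-prod -relabelConfig=relabel_config.yml
```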

VictoriaMetrics provides the following extra actions for relabeling rules:


@@ -1100,6 +1175,9 @@ VictoriaMetrics also exposes currently running queries with their execution time
has at least 20% of free space compared to the disk size. The remaining amount of free space
can be [monitored](#monitoring) via `vm_free_disk_space_bytes` metric. The total size of data
stored on the disk can be monitored via sum of `vm_data_size_bytes` metrics.
+See also `vm_merge_need_free_disk_space` metrics, which are set to values higher than 0
+if background merge cannot be initiated due to free disk space shortage. The value shows the number of per-month partitions,
+which would start background merge if they had more free disk space.
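
As an illustration, these metrics may be checked via instant queries against the [Prometheus querying API](#prometheus-querying-api-usage) (assuming VictoriaMetrics at localhost:8428):

```bash
# Total size of the data stored on disk.
curl -G 'http://localhost:8428/api/v1/query' -d 'query=sum(vm_data_size_bytes)'

# Per-month partitions with background merges deferred due to the lack of free disk space.
curl -G 'http://localhost:8428/api/v1/query' -d 'query=vm_merge_need_free_disk_space > 0'
```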

* If VictoriaMetrics doesn't work because certain parts are corrupted due to disk errors,
  then just remove directories with broken parts. This will recover VictoriaMetrics at the cost
@@ -25,7 +25,8 @@ to `vmagent` (like the ability to push metrics instead of pulling them). We did
* Graphite plaintext protocol if `-graphiteListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-graphite-compatible-agents-such-as-statsd).
* OpenTSDB telnet and http protocols if `-opentsdbListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-opentsdb-compatible-agents).
* Prometheus remote write protocol via `http://<vmagent>:8429/api/v1/write`.
-* JSON lines import protocol via `http://<vmagent>:8429/api/v1/import`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-time-series-data).
+* JSON lines import protocol via `http://<vmagent>:8429/api/v1/import`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-data-in-json-line-format).
+* Native data import protocol via `http://<vmagent>:8429/api/v1/import/native`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-data-in-native-format).
* Data in Prometheus exposition format. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-data-in-prometheus-exposition-format) for details.
* Arbitrary CSV data via `http://<vmagent>:8429/api/v1/import/csv`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-csv-data).
* Can replicate collected metrics simultaneously to multiple remote storage systems.
@@ -278,7 +279,7 @@ Run `make package-vmagent`. It builds `victoriametrics/vmagent:<PKG_TAG>` docker
`<PKG_TAG>` is auto-generated image tag, which depends on source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmagent`.

-By default the image is built on top of [alpine](https://hub.docker.com/_/alpine) image. It is possible to build the package on top of any other base image
+The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```bash
@@ -11,6 +11,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/csvimport"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/graphite"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/influx"
+	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/native"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/opentsdb"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/opentsdbhttp"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/prometheusimport"

@@ -28,6 +29,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
	"github.com/VictoriaMetrics/metrics"
)

@@ -59,6 +61,7 @@ func main() {
	flag.CommandLine.SetOutput(os.Stdout)
	flag.Usage = usage
	envflag.Parse()
+	remotewrite.InitSecretFlags()
	buildinfo.Init()
	logger.Init()
	cgroup.UpdateGOMAXPROCSToCPUQuota()

@@ -80,6 +83,7 @@ func main() {
	logger.Infof("starting vmagent at %q...", *httpListenAddr)
	startTime := time.Now()
	remotewrite.Init()
+	common.StartUnmarshalWorkers()
	writeconcurrencylimiter.Init()
	if len(*influxListenAddr) > 0 {
		influxServer = influxserver.MustStart(*influxListenAddr, influx.InsertHandlerForReader)

@@ -127,6 +131,7 @@ func main() {
	if len(*opentsdbHTTPListenAddr) > 0 {
		opentsdbhttpServer.MustStop()
	}
+	common.StopUnmarshalWorkers()
	remotewrite.Stop()

	logger.Infof("successfully stopped vmagent in %.3f seconds", time.Since(startTime).Seconds())

@@ -171,6 +176,15 @@ func requestHandler(w http.ResponseWriter, r *http.Request) bool {
	}
	w.WriteHeader(http.StatusNoContent)
	return true
+	case "/api/v1/import/native":
+		nativeimportRequests.Inc()
+		if err := native.InsertHandler(r); err != nil {
+			nativeimportErrors.Inc()
+			httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
+			return true
+		}
+		w.WriteHeader(http.StatusNoContent)
+		return true
	case "/write", "/api/v2/write":
		influxWriteRequests.Inc()
		if err := influx.InsertHandlerForHTTP(r); err != nil {

@@ -213,6 +227,9 @@ var (
	prometheusimportRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/import/prometheus", protocol="prometheusimport"}`)
	prometheusimportErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/import/prometheus", protocol="prometheusimport"}`)

+	nativeimportRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/api/v1/import/native", protocol="nativeimport"}`)
+	nativeimportErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/api/v1/import/native", protocol="nativeimport"}`)
+
	influxWriteRequests = metrics.NewCounter(`vmagent_http_requests_total{path="/write", protocol="influx"}`)
	influxWriteErrors = metrics.NewCounter(`vmagent_http_request_errors_total{path="/write", protocol="influx"}`)
82 app/vmagent/native/request_handler.go (new file)

@@ -0,0 +1,82 @@
package native

import (
	"net/http"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/native"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
	"github.com/VictoriaMetrics/metrics"
)

var (
	rowsInserted = metrics.NewCounter(`vmagent_rows_inserted_total{type="native"}`)
	rowsPerInsert = metrics.NewHistogram(`vmagent_rows_per_insert{type="native"}`)
)

// InsertHandler processes `/api/v1/import/native` request.
//
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6
func InsertHandler(req *http.Request) error {
	extraLabels, err := parserCommon.GetExtraLabels(req)
	if err != nil {
		return err
	}
	return writeconcurrencylimiter.Do(func() error {
		return parser.ParseStream(req, func(block *parser.Block) error {
			return insertRows(block, extraLabels)
		})
	})
}

func insertRows(block *parser.Block, extraLabels []prompbmarshal.Label) error {
	ctx := common.GetPushCtx()
	defer common.PutPushCtx(ctx)

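	// Reuse the pooled per-ctx buffers below in order to reduce memory allocations.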
	tssDst := ctx.WriteRequest.Timeseries[:0]
	labels := ctx.Labels[:0]
	samples := ctx.Samples[:0]
	mn := &block.MetricName
	labelsLen := len(labels)
	labels = append(labels, prompbmarshal.Label{
		Name: "__name__",
		Value: bytesutil.ToUnsafeString(mn.MetricGroup),
	})
	for j := range mn.Tags {
		tag := &mn.Tags[j]
		labels = append(labels, prompbmarshal.Label{
			Name: bytesutil.ToUnsafeString(tag.Key),
			Value: bytesutil.ToUnsafeString(tag.Value),
		})
	}
	labels = append(labels, extraLabels...)
	values := block.Values
	timestamps := block.Timestamps
	if len(timestamps) != len(values) {
		logger.Panicf("BUG: len(timestamps)=%d must match len(values)=%d", len(timestamps), len(values))
	}
	samplesLen := len(samples)
	for j, value := range values {
		samples = append(samples, prompbmarshal.Sample{
			Value: value,
			Timestamp: timestamps[j],
		})
	}
	tssDst = append(tssDst, prompbmarshal.TimeSeries{
		Labels: labels[labelsLen:],
		Samples: samples[samplesLen:],
	})
	rowsTotal := len(values)
	ctx.WriteRequest.Timeseries = tssDst
	ctx.Labels = labels
	ctx.Samples = samples
	remotewrite.Push(&ctx.WriteRequest)
	rowsInserted.Add(rowsTotal)
	rowsPerInsert.Update(float64(rowsTotal))
	return nil
}
@@ -19,7 +19,7 @@ var (
	flushInterval = flag.Duration("remoteWrite.flushInterval", time.Second, "Interval for flushing the data to remote storage. "+
		"Higher value reduces network bandwidth usage at the cost of delayed push of scraped data to remote storage. "+
		"Minimum supported interval is 1 second")
-	maxUnpackedBlockSize = flagutil.NewBytes("remoteWrite.maxBlockSize", 32*1024*1024, "The maximum size in bytes of unpacked request to send to remote storage. "+
+	maxUnpackedBlockSize = flagutil.NewBytes("remoteWrite.maxBlockSize", 8*1024*1024, "The maximum size in bytes of unpacked request to send to remote storage. "+
		"It shouldn't exceed -maxInsertRequestSize from VictoriaMetrics")
)
@@ -127,7 +127,7 @@ func (wr *writeRequest) push(src []prompbmarshal.TimeSeries) {
	for i := range src {
		tssDst = append(tssDst, prompbmarshal.TimeSeries{})
		wr.copyTimeSeries(&tssDst[len(tssDst)-1], &src[i])
-		if len(tssDst) >= maxRowsPerBlock {
+		if len(wr.samples) >= maxRowsPerBlock {
			wr.tss = tssDst
			wr.flush()
			tssDst = wr.tss
@@ -45,6 +45,14 @@ var allRelabelConfigs atomic.Value
// since it may lead to high memory usage due to big number of buffers.
var maxQueues = runtime.GOMAXPROCS(-1) * 4

+// InitSecretFlags must be called after flag.Parse and before any logging.
+func InitSecretFlags() {
+	if !*showRemoteWriteURL {
+		// remoteWrite.url can contain authentication codes, so hide it at `/metrics` output.
+		flagutil.RegisterSecretFlag("remoteWrite.url")
+	}
+}
+
// Init initializes remotewrite.
//
// It must be called after flag.Parse().

@@ -60,10 +68,6 @@ func Init() {
	if *queues <= 0 {
		*queues = 1
	}
-	if !*showRemoteWriteURL {
-		// remoteWrite.url can contain authentication codes, so hide it at `/metrics` output.
-		flagutil.RegisterSecretFlag("remoteWrite.url")
-	}
	initLabelsGlobal()
	rcs, err := loadRelabelConfigs()
	if err != nil {
@@ -153,10 +157,19 @@ func Push(wr *prompbmarshal.WriteRequest) {
	tss := wr.Timeseries
	for len(tss) > 0 {
		// Process big tss in smaller blocks in order to reduce the maximum memory usage
+		samplesCount := 0
+		i := 0
+		for i < len(tss) {
+			samplesCount += len(tss[i].Samples)
+			i++
+			if samplesCount > maxRowsPerBlock {
+				break
+			}
+		}
		tssBlock := tss
-		if len(tssBlock) > maxRowsPerBlock {
-			tssBlock = tss[:maxRowsPerBlock]
-			tss = tss[maxRowsPerBlock:]
+		if i < len(tss) {
+			tssBlock = tss[:i]
+			tss = tss[i:]
+		} else {
+			tss = nil
+		}
@@ -6,6 +6,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"

@@ -54,7 +55,9 @@ func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
	labels = append(labels, extraLabels...)
	values := r.Values
	timestamps := r.Timestamps
-	_ = timestamps[len(values)-1]
+	if len(timestamps) != len(values) {
+		logger.Panicf("BUG: len(timestamps)=%d must match len(values)=%d", len(timestamps), len(values))
+	}
	samplesLen := len(samples)
	for j, value := range values {
		samples = append(samples, prompbmarshal.Sample{
@@ -113,7 +113,7 @@ Run `make package-vmauth`. It builds `victoriametrics/vmauth:<PKG_TAG>` docker i
`<PKG_TAG>` is auto-generated image tag, which depends on source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmauth`.

-By default the image is built on top of [alpine](https://hub.docker.com/_/alpine) image. It is possible to build the package on top of any other base image
+The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```bash
@@ -6,12 +6,12 @@ Supported storage systems for backups:

* [GCS](https://cloud.google.com/storage/). Example: `gcs://<bucket>/<path/to/backup>`
* [S3](https://aws.amazon.com/s3/). Example: `s3://<bucket>/<path/to/backup>`
-* Any S3-compatible storage such as [MinIO](https://github.com/minio/minio), [Ceph](https://docs.ceph.com/docs/mimic/radosgw/s3/) or [Swift](https://www.swiftstack.com/docs/admin/middleware/s3_middleware.html). See `-customS3Endpoint` command-line flag.
+* Any S3-compatible storage such as [MinIO](https://github.com/minio/minio), [Ceph](https://docs.ceph.com/docs/mimic/radosgw/s3/) or [Swift](https://www.swiftstack.com/docs/admin/middleware/s3_middleware.html). See [these docs](#advanced-usage) for details.
* Local filesystem. Example: `fs://</absolute/path/to/backup>`

-Incremental backups and full backups are supported. Incremental backups are created automatically if the destination path already contains data from the previous backup.
+`vmbackup` supports incremental and full backups. Incremental backups are created automatically if the destination path already contains data from the previous backup.
Full backups can be sped up with `-origin` pointing to already existing backup on the same remote storage. In this case `vmbackup` makes server-side copy for the shared
-data between the existing backup and new backup. This saves time and costs on data transfer.
+data between the existing backup and new backup. It saves time and costs on data transfer.

Backup process can be interrupted at any time. It is automatically resumed from the interruption point when restarting `vmbackup` with the same args.
@@ -35,8 +35,8 @@ vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-

* `</path/to/victoria-metrics-data>` - path to VictoriaMetrics data pointed by `-storageDataPath` command-line flag in single-node VictoriaMetrics or in cluster `vmstorage`.
  There is no need to stop VictoriaMetrics for creating backups, since they are performed from immutable [instant snapshots](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots).
-* `<local-snapshot>` is the snapshot to backup. See [how to create instant snapshots](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots).
-* `<bucket>` is already existing name for [GCS bucket](https://cloud.google.com/storage/docs/creating-buckets).
+* `<local-snapshot>` is the snapshot to back up. See [how to create instant snapshots](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots).
+* `<bucket>` is an already existing name for [GCS bucket](https://cloud.google.com/storage/docs/creating-buckets).
* `<path/to/new/backup>` is the destination path where new backup will be placed.
@@ -49,13 +49,13 @@ with the following command:
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/new/backup> -origin=gcs://<bucket>/<path/to/existing/backup>
```

-This saves time and network bandwidth costs by performing server-side copy for the shared data from the `-origin` to `-dst`.
+It saves time and network bandwidth costs by performing server-side copy for the shared data from the `-origin` to `-dst`.


#### Incremental backups

-Incremental backups are performed if `-dst` points to already existing backup. In this case only new data is uploaded to remote storage.
-This saves time and network bandwidth costs when working with big backups:
+Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to remote storage.
+It saves time and network bandwidth costs when working with big backups:

```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/existing/backup>
@@ -100,16 +100,16 @@ The backup algorithm is the following:
2. Determine files in `-dst`, which are missing in `-snapshotName`, and delete them. These are usually small files, which are already merged into bigger files in the snapshot.
3. Determine files from `-snapshotName`, which are missing in `-dst`. These are usually small new files and bigger merged files.
4. Determine files from step 3, which exist in the `-origin`, and perform server-side copy of these files from `-origin` to `-dst`.
-   This are usually the biggest and the oldest files, which are shared between backups.
-5. Upload the remaining files from setp 3 from `-snapshotName` to `-dst`.
+   These are usually the biggest and the oldest files, which are shared between backups.
+5. Upload the remaining files from step 3 from `-snapshotName` to `-dst`.

-The algorithm splits source files into 100MB chunks in the backup. Each chunk is stored as a separate file in the backup.
+The algorithm splits source files into 100 MB chunks in the backup. Each chunk is stored as a separate file in the backup.
Such splitting minimizes the amounts of data to re-transfer after temporary errors.

`vmbackup` relies on [instant snapshot](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282) properties:

- All the files in the snapshot are immutable.
- Old files are periodically merged into new files.
- Smaller files have higher probability to be merged.
- Consecutive snapshots share many identical files.
@@ -129,7 +129,45 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-

### Advanced usage

Run `vmbackup -help` in order to see all the available options:

* Obtaining credentials from a file.

  Add flag `-credsFilePath=/etc/credentials` with the following content:

  for s3 (aws, minio or other s3 compatible storages):

  ```bash
  [default]
  aws_access_key_id=theaccesskey
  aws_secret_access_key=thesecretaccesskeyvalue
  ```

  for gce cloud storage:

  ```json
  {
    "type": "service_account",
    "project_id": "project-id",
    "private_key_id": "key-id",
    "private_key": "-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n",
    "client_email": "service-account-email",
    "client_id": "client-id",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service-account-email"
  }
  ```

* Usage with s3 custom url endpoint. It is possible to use `vmbackup` with s3 compatible storages like minio, cloudian, etc.
  You have to add a custom url endpoint via flag:

  ```
  # for minio
  -customS3Endpoint=http://localhost:9000

  # for aws gov region
  -customS3Endpoint=https://s3-fips.us-gov-west-1.amazonaws.com
  ```

* Run `vmbackup -help` in order to see all the available options:

  ```
  -concurrency int
@@ -207,7 +245,7 @@ Run `make package-vmbackup`. It builds `victoriametrics/vmbackup:<PKG_TAG>` dock
`<PKG_TAG>` is auto-generated image tag, which depends on source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmbackup`.

-By default the image is built on top of [alpine](https://hub.docker.com/_/alpine) image. It is possible to build the package on top of any other base image
+The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```bash
@@ -10,6 +10,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/csvimport"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/graphite"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/influx"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/native"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/opentsdb"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/opentsdbhttp"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/prometheusimport"

@@ -24,6 +25,7 @@ import (
	opentsdbhttpserver "github.com/VictoriaMetrics/VictoriaMetrics/lib/ingestserver/opentsdbhttp"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/procutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promscrape"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
	"github.com/VictoriaMetrics/metrics"

@@ -51,7 +53,7 @@ var (
func Init() {
	relabel.Init()
	storage.SetMaxLabelsPerTimeseries(*maxLabelsPerTimeseries)
	common.StartUnmarshalWorkers()
	writeconcurrencylimiter.Init()
	if len(*influxListenAddr) > 0 {
		influxServer = influxserver.MustStart(*influxListenAddr, influx.InsertHandlerForReader)

@@ -83,6 +85,7 @@ func Stop() {
	if len(*opentsdbHTTPListenAddr) > 0 {
		opentsdbhttpServer.MustStop()
	}
	common.StopUnmarshalWorkers()
}

// RequestHandler is a handler for Prometheus remote storage write API

@@ -125,6 +128,15 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
		}
		w.WriteHeader(http.StatusNoContent)
		return true
	case "/api/v1/import/native":
		nativeimportRequests.Inc()
		if err := native.InsertHandler(r); err != nil {
			nativeimportErrors.Inc()
			httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
			return true
		}
		w.WriteHeader(http.StatusNoContent)
		return true
	case "/write", "/api/v2/write":
		influxWriteRequests.Inc()
		if err := influx.InsertHandlerForHTTP(r); err != nil {

@@ -169,6 +181,9 @@ var (
	prometheusimportRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/import/prometheus", protocol="prometheusimport"}`)
	prometheusimportErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/import/prometheus", protocol="prometheusimport"}`)

	nativeimportRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/import/native", protocol="nativeimport"}`)
	nativeimportErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/import/native", protocol="nativeimport"}`)

	influxWriteRequests = metrics.NewCounter(`vm_http_requests_total{path="/write", protocol="influx"}`)
	influxWriteErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/write", protocol="influx"}`)
app/vminsert/native/request_handler.go (new file, 113 lines)

@@ -0,0 +1,113 @@
package native

import (
	"net/http"
	"runtime"
	"sync"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/native"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/writeconcurrencylimiter"
	"github.com/VictoriaMetrics/metrics"
)

var (
	rowsInserted  = metrics.NewCounter(`vm_rows_inserted_total{type="native"}`)
	rowsPerInsert = metrics.NewHistogram(`vm_rows_per_insert{type="native"}`)
)

// InsertHandler processes `/api/v1/import/native` request.
func InsertHandler(req *http.Request) error {
	extraLabels, err := parserCommon.GetExtraLabels(req)
	if err != nil {
		return err
	}
	return writeconcurrencylimiter.Do(func() error {
		return parser.ParseStream(req, func(block *parser.Block) error {
			return insertRows(block, extraLabels)
		})
	})
}

func insertRows(block *parser.Block, extraLabels []prompbmarshal.Label) error {
	ctx := getPushCtx()
	defer putPushCtx(ctx)

	rowsLen := len(block.Values)
	ic := &ctx.Common
	ic.Reset(rowsLen)
	hasRelabeling := relabel.HasRelabeling()
	mn := &block.MetricName
	ic.Labels = ic.Labels[:0]
	ic.AddLabelBytes(nil, mn.MetricGroup)
	for j := range mn.Tags {
		tag := &mn.Tags[j]
		ic.AddLabelBytes(tag.Key, tag.Value)
	}
	for j := range extraLabels {
		label := &extraLabels[j]
		ic.AddLabel(label.Name, label.Value)
	}
	if hasRelabeling {
		ic.ApplyRelabeling()
	}
	if len(ic.Labels) == 0 {
		// Skip metric without labels.
		return nil
	}
	ctx.metricNameBuf = storage.MarshalMetricNameRaw(ctx.metricNameBuf[:0], ic.Labels)
	values := block.Values
	timestamps := block.Timestamps
	if len(timestamps) != len(values) {
		logger.Panicf("BUG: len(timestamps)=%d must match len(values)=%d", len(timestamps), len(values))
	}
	for j, value := range values {
		timestamp := timestamps[j]
		if err := ic.WriteDataPoint(ctx.metricNameBuf, nil, timestamp, value); err != nil {
			return err
		}
	}
	rowsTotal := len(values)
	rowsInserted.Add(rowsTotal)
	rowsPerInsert.Update(float64(rowsTotal))
	return ic.FlushBufs()
}

type pushCtx struct {
	Common        common.InsertCtx
	metricNameBuf []byte
}

func (ctx *pushCtx) reset() {
	ctx.Common.Reset(0)
	ctx.metricNameBuf = ctx.metricNameBuf[:0]
}

func getPushCtx() *pushCtx {
	select {
	case ctx := <-pushCtxPoolCh:
		return ctx
	default:
		if v := pushCtxPool.Get(); v != nil {
			return v.(*pushCtx)
		}
		return &pushCtx{}
	}
}

func putPushCtx(ctx *pushCtx) {
	ctx.reset()
	select {
	case pushCtxPoolCh <- ctx:
	default:
		pushCtxPool.Put(ctx)
	}
}

var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1))
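A note on the pooling pattern above: `getPushCtx` first tries `pushCtxPoolCh`, a channel whose capacity equals `GOMAXPROCS`, and only then falls back to `sync.Pool`. The channel keeps roughly one hot `pushCtx` per CPU core that is never reclaimed by the garbage collector, while `sync.Pool` absorbs bursts beyond that and lets idle contexts be collected; the same two-tier scheme appears in the other vminsert parsers.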
@@ -22,10 +22,19 @@ func Push(wr *prompbmarshal.WriteRequest) {
	tss := wr.Timeseries
	for len(tss) > 0 {
		// Process big tss in smaller blocks in order to reduce the maximum memory usage
		samplesCount := 0
		i := 0
		for i < len(tss) {
			samplesCount += len(tss[i].Samples)
			i++
			if samplesCount > maxRowsPerBlock {
				break
			}
		}
		tssBlock := tss
		if i < len(tss) {
			tssBlock = tss[:i]
			tss = tss[i:]
		} else {
			tss = nil
		}
@@ -50,14 +50,15 @@ func insertRows(timeseries []prompb.TimeSeries) error {
		}
		var metricNameRaw []byte
		var err error
		samples := ts.Samples
		for i := range samples {
			r := &samples[i]
			metricNameRaw, err = ctx.WriteDataPointExt(metricNameRaw, ctx.Labels, r.Timestamp, r.Value)
			if err != nil {
				return err
			}
		}
		rowsTotal += len(samples)
	}
	rowsInserted.Add(rowsTotal)
	rowsPerInsert.Update(float64(rowsTotal))
@@ -7,6 +7,7 @@ import (

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/common"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vminsert/relabel"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	parserCommon "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	parser "github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/vmimport"

@@ -68,7 +69,9 @@ func insertRows(rows []parser.Row, extraLabels []prompbmarshal.Label) error {
	ctx.metricNameBuf = storage.MarshalMetricNameRaw(ctx.metricNameBuf[:0], ic.Labels)
	values := r.Values
	timestamps := r.Timestamps
	if len(timestamps) != len(values) {
		logger.Panicf("BUG: len(timestamps)=%d must match len(values)=%d", len(timestamps), len(values))
	}
	for j, value := range values {
		timestamp := timestamps[j]
		if err := ic.WriteDataPoint(ctx.metricNameBuf, nil, timestamp, value); err != nil {
@@ -21,7 +21,7 @@ vmrestore -src=gcs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/restore>

* `<local/path/to/restore>` is the path to the folder where data will be restored. This folder must be passed
  to VictoriaMetrics in the `-storageDataPath` command-line flag after the restore process is complete.

The original `-storageDataPath` directory may contain old files. They will be substituted by the files from the backup,
i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/questions/476041/how-do-i-make-rsync-delete-files-that-have-been-deleted-from-the-source-folder).

@@ -33,7 +33,44 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q

### Advanced usage

* Obtaining credentials from a file.

  Add flag `-credsFilePath=/etc/credentials` with the following content:

  for s3 (aws, minio or other s3 compatible storages):
  ```bash
  [default]
  aws_access_key_id=theaccesskey
  aws_secret_access_key=thesecretaccesskeyvalue
  ```

  for gce cloud storage:
  ```json
  {
    "type": "service_account",
    "project_id": "project-id",
    "private_key_id": "key-id",
    "private_key": "-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n",
    "client_email": "service-account-email",
    "client_id": "client-id",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service-account-email"
  }
  ```

* Usage with s3 custom url endpoint. It is possible to use `vmrestore` with s3 compatible storages like minio, cloudian, etc.
  You have to add a custom url endpoint via flag:
  ```
  # for minio:
  -customS3Endpoint=http://localhost:9000

  # for aws gov region
  -customS3Endpoint=https://s3-fips.us-gov-west-1.amazonaws.com
  ```

* Run `vmrestore -help` in order to see all the available options:

  ```
  -concurrency int
  ```

@@ -104,7 +141,7 @@ Run `make package-vmrestore`. It builds `victoriametrics/vmrestore:<PKG_TAG>` docker image.

`<PKG_TAG>` is an auto-generated image tag, which depends on the source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmrestore`.

By default the image is built on top of the [alpine](https://hub.docker.com/_/alpine) image. It is possible to build the package on top of any other base image
by setting it via the `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of the [scratch](https://hub.docker.com/_/scratch) image:

```bash
```
app/vmselect/bufferedwriter/bufferedwriter.go (new file, 86 lines)

@@ -0,0 +1,86 @@
package bufferedwriter

import (
	"bufio"
	"fmt"
	"io"
	"sync"
)

// Get returns buffered writer for the given w.
//
// The writer must be returned to the pool after use by calling Put().
func Get(w io.Writer) *Writer {
	v := writerPool.Get()
	if v == nil {
		v = &Writer{
			// By default net/http.Server uses 4KB buffers, which are flushed to client with chunked responses.
			// These buffers may result in visible overhead for responses exceeding a few megabytes.
			// So allocate 64Kb buffers.
			bw: bufio.NewWriterSize(w, 64*1024),
		}
	}
	bw := v.(*Writer)
	bw.bw.Reset(w)
	return bw
}

// Put returns back bw to the pool.
//
// bw cannot be used after returning to the pool.
func Put(bw *Writer) {
	bw.reset()
	writerPool.Put(bw)
}

var writerPool sync.Pool

// Writer is buffered writer, which may be used in order to reduce overhead
// when sending moderately big responses to http server.
//
// Writer methods can be called from concurrently running goroutines.
// The writer remembers the first occurred error, which can be inspected with Error method.
type Writer struct {
	lock sync.Mutex
	bw   *bufio.Writer
	err  error
}

func (bw *Writer) reset() {
	bw.bw.Reset(nil)
	bw.err = nil
}

// Write writes p to bw.
func (bw *Writer) Write(p []byte) (int, error) {
	bw.lock.Lock()
	defer bw.lock.Unlock()
	if bw.err != nil {
		return 0, bw.err
	}
	n, err := bw.bw.Write(p)
	if err != nil {
		bw.err = fmt.Errorf("cannot send %d bytes to client: %w", len(p), err)
	}
	return n, bw.err
}

// Flush flushes bw to the underlying writer.
func (bw *Writer) Flush() error {
	bw.lock.Lock()
	defer bw.lock.Unlock()
	if bw.err != nil {
		return bw.err
	}
	if err := bw.bw.Flush(); err != nil {
		bw.err = fmt.Errorf("cannot flush data to client: %w", err)
	}
	return bw.err
}

// Error returns the first occurred error in bw.
func (bw *Writer) Error() error {
	bw.lock.Lock()
	defer bw.lock.Unlock()
	return bw.err
}
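For illustration, here is a minimal sketch of how an HTTP handler can use this pool-backed writer. The handler and its payload are hypothetical; only the Get/Write/Flush/Put lifecycle mirrors the vmselect handlers shown below.

```go
package main

import (
	"net/http"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/bufferedwriter"
)

// exampleHandler demonstrates the intended lifecycle: Get a pooled writer,
// stream the response through it, Flush before returning, and always Put
// the writer back via defer.
func exampleHandler(w http.ResponseWriter, r *http.Request) {
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	for i := 0; i < 1000; i++ {
		if _, err := bw.Write([]byte("data line\n")); err != nil {
			// The first error is remembered by bw; later writes become no-ops.
			return
		}
	}
	if err := bw.Flush(); err != nil {
		// The client went away or the connection broke.
		return
	}
}

func main() {
	http.HandleFunc("/example", exampleHandler)
	_ = http.ListenAndServe(":8428", nil)
}
```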
@@ -9,6 +9,7 @@ import (
	"sync"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/bufferedwriter"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"

@@ -78,28 +79,38 @@ func MetricsFindHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
	if err != nil {
		return err
	}
	if leavesOnly {
		paths = filterLeaves(paths, delimiter)
	}
	paths = deduplicatePaths(paths, delimiter)
	sortPaths(paths, delimiter)
	contentType := "application/json"
	if jsonp != "" {
		contentType = "text/javascript"
	}
	w.Header().Set("Content-Type", contentType)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteMetricsFindResponse(bw, paths, delimiter, format, wildcards, jsonp)
	if err := bw.Flush(); err != nil {
		return err
	}
	metricsFindDuration.UpdateDuration(startTime)
	return nil
}

func deduplicatePaths(paths []string, delimiter string) []string {
	if len(paths) == 0 {
		return nil
	}
	sort.Strings(paths)
	dst := paths[:1]
	for _, path := range paths[1:] {
		prevPath := dst[len(dst)-1]
		if path == prevPath {
			// Skip duplicate path.
			continue
		}
		dst = append(dst, path)
	}
	return dst

@@ -181,7 +192,12 @@ func MetricsExpandHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
		}
	}
	sortPaths(paths, delimiter)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteMetricsExpandResponseFlat(bw, paths, jsonp)
	if err := bw.Flush(); err != nil {
		return err
	}
	metricsExpandDuration.UpdateDuration(startTime)
	return nil
}

@@ -204,7 +220,12 @@ func MetricsIndexHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
		contentType = "text/javascript"
	}
	w.Header().Set("Content-Type", contentType)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteMetricsIndexResponse(bw, metricNames, jsonp)
	if err := bw.Flush(); err != nil {
		return err
	}
	metricsIndexDuration.UpdateDuration(startTime)
	return nil
}
@@ -1,4 +1,5 @@
{% import (
	"sort"
	"strings"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"

@@ -43,17 +44,44 @@ See https://graphite-api.readthedocs.io/en/latest/api.html#metrics-find

{% func metricsFindResponseTreeJSON(paths []string, delimiter string, addWildcards bool) %}
[
{% code
	if len(paths) > 1 {
		sort.Strings(paths)
		// Substitute `path` and `path<delimiter>` with `path<delimiter><delimiter>`.
		// Such path is treated specially during rendering - see the code below for details.
		dst := paths[:1]
		for _, path := range paths[1:] {
			prevPath := dst[len(dst)-1]
			if len(path) == len(prevPath)+1 && strings.HasSuffix(path, delimiter) && strings.HasPrefix(path, prevPath) {
				// The path is equivalent to <prevPath> + <delimiter>.
				// Overwrite prevPath with <path> + <delimiter> as carbonapi does,
				// i.e. the resulting path ends with a double delimiter.
				dst[len(dst)-1] = path + delimiter
				continue
			}
			dst = append(dst, path)
		}
		paths = dst
	}
%}
{% for i, path := range paths %}
	{
		{% code
			id := path
			allowChildren := "0"
			isLeaf := "1"
			if strings.HasSuffix(id, delimiter) {
				if strings.HasSuffix(id[:len(id)-1], delimiter) {
					// Special case when id ends with a double delimiter.
					// See the code above for details.
					id = id[:len(id)-2]
				}
				allowChildren = "1"
				isLeaf = "0"
			}
		%}
		"id": {%q= id %},
		"text": {%= metricPathName(path, delimiter) %},
		"allowChildren": {%s= allowChildren %},
		"expandable": {%s= allowChildren %},
@@ -6,6 +6,7 @@ package graphite

//line app/vmselect/graphite/metrics_find_response.qtpl:1
import (
	"sort"
	"strings"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"

@@ -13,205 +14,232 @@ import (

// MetricsFindResponse generates response for /metrics/find. See https://graphite-api.readthedocs.io/en/latest/api.html#metrics-find

//line app/vmselect/graphite/metrics_find_response.qtpl:12
import (
	qtio422016 "io"

	qt422016 "github.com/valyala/quicktemplate"
)

//line app/vmselect/graphite/metrics_find_response.qtpl:12
var (
	_ = qtio422016.Copy
	_ = qt422016.AcquireByteBuffer
)

//line app/vmselect/graphite/metrics_find_response.qtpl:12
func StreamMetricsFindResponse(qw422016 *qt422016.Writer, paths []string, delimiter, format string, addWildcards bool, jsonp string) {
//line app/vmselect/graphite/metrics_find_response.qtpl:13
	if jsonp != "" {
//line app/vmselect/graphite/metrics_find_response.qtpl:13
		qw422016.N().S(jsonp)
//line app/vmselect/graphite/metrics_find_response.qtpl:13
		qw422016.N().S(`(`)
//line app/vmselect/graphite/metrics_find_response.qtpl:13
	}
//line app/vmselect/graphite/metrics_find_response.qtpl:14
	switch format {
//line app/vmselect/graphite/metrics_find_response.qtpl:15
	case "completer":
//line app/vmselect/graphite/metrics_find_response.qtpl:16
		streammetricsFindResponseCompleter(qw422016, paths, delimiter, addWildcards)
//line app/vmselect/graphite/metrics_find_response.qtpl:17
	case "treejson":
//line app/vmselect/graphite/metrics_find_response.qtpl:18
		streammetricsFindResponseTreeJSON(qw422016, paths, delimiter, addWildcards)
//line app/vmselect/graphite/metrics_find_response.qtpl:19
	default:
//line app/vmselect/graphite/metrics_find_response.qtpl:20
		logger.Panicf("BUG: unexpected format=%q", format)
//line app/vmselect/graphite/metrics_find_response.qtpl:21
	}
//line app/vmselect/graphite/metrics_find_response.qtpl:22
	if jsonp != "" {
//line app/vmselect/graphite/metrics_find_response.qtpl:22
		qw422016.N().S(`)`)
//line app/vmselect/graphite/metrics_find_response.qtpl:22
	}
//line app/vmselect/graphite/metrics_find_response.qtpl:23
}

//line app/vmselect/graphite/metrics_find_response.qtpl:23
func WriteMetricsFindResponse(qq422016 qtio422016.Writer, paths []string, delimiter, format string, addWildcards bool, jsonp string) {
//line app/vmselect/graphite/metrics_find_response.qtpl:23
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:23
	StreamMetricsFindResponse(qw422016, paths, delimiter, format, addWildcards, jsonp)
//line app/vmselect/graphite/metrics_find_response.qtpl:23
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:23
}

//line app/vmselect/graphite/metrics_find_response.qtpl:23
func MetricsFindResponse(paths []string, delimiter, format string, addWildcards bool, jsonp string) string {
//line app/vmselect/graphite/metrics_find_response.qtpl:23
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/metrics_find_response.qtpl:23
	WriteMetricsFindResponse(qb422016, paths, delimiter, format, addWildcards, jsonp)
//line app/vmselect/graphite/metrics_find_response.qtpl:23
	qs422016 := string(qb422016.B)
//line app/vmselect/graphite/metrics_find_response.qtpl:23
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:23
	return qs422016
//line app/vmselect/graphite/metrics_find_response.qtpl:23
}

//line app/vmselect/graphite/metrics_find_response.qtpl:25
func streammetricsFindResponseCompleter(qw422016 *qt422016.Writer, paths []string, delimiter string, addWildcards bool) {
//line app/vmselect/graphite/metrics_find_response.qtpl:25
	qw422016.N().S(`{"metrics":[`)
//line app/vmselect/graphite/metrics_find_response.qtpl:28
	for i, path := range paths {
//line app/vmselect/graphite/metrics_find_response.qtpl:28
		qw422016.N().S(`{"path":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:30
		qw422016.N().Q(path)
//line app/vmselect/graphite/metrics_find_response.qtpl:30
		qw422016.N().S(`,"name":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:31
		streammetricPathName(qw422016, path, delimiter)
//line app/vmselect/graphite/metrics_find_response.qtpl:31
		qw422016.N().S(`,"is_leaf":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:32
		if strings.HasSuffix(path, delimiter) {
//line app/vmselect/graphite/metrics_find_response.qtpl:32
			qw422016.N().S(`0`)
//line app/vmselect/graphite/metrics_find_response.qtpl:32
		} else {
//line app/vmselect/graphite/metrics_find_response.qtpl:32
			qw422016.N().S(`1`)
//line app/vmselect/graphite/metrics_find_response.qtpl:32
		}
//line app/vmselect/graphite/metrics_find_response.qtpl:32
		qw422016.N().S(`}`)
//line app/vmselect/graphite/metrics_find_response.qtpl:34
		if i+1 < len(paths) {
//line app/vmselect/graphite/metrics_find_response.qtpl:34
			qw422016.N().S(`,`)
//line app/vmselect/graphite/metrics_find_response.qtpl:34
		}
//line app/vmselect/graphite/metrics_find_response.qtpl:35
	}
//line app/vmselect/graphite/metrics_find_response.qtpl:36
	if addWildcards && len(paths) > 1 {
//line app/vmselect/graphite/metrics_find_response.qtpl:36
		qw422016.N().S(`,{"name": "*"}`)
//line app/vmselect/graphite/metrics_find_response.qtpl:40
	}
//line app/vmselect/graphite/metrics_find_response.qtpl:40
	qw422016.N().S(`]}`)
//line app/vmselect/graphite/metrics_find_response.qtpl:43
}

//line app/vmselect/graphite/metrics_find_response.qtpl:43
func writemetricsFindResponseCompleter(qq422016 qtio422016.Writer, paths []string, delimiter string, addWildcards bool) {
//line app/vmselect/graphite/metrics_find_response.qtpl:43
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:43
	streammetricsFindResponseCompleter(qw422016, paths, delimiter, addWildcards)
//line app/vmselect/graphite/metrics_find_response.qtpl:43
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:43
}

//line app/vmselect/graphite/metrics_find_response.qtpl:43
func metricsFindResponseCompleter(paths []string, delimiter string, addWildcards bool) string {
//line app/vmselect/graphite/metrics_find_response.qtpl:43
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/metrics_find_response.qtpl:43
	writemetricsFindResponseCompleter(qb422016, paths, delimiter, addWildcards)
//line app/vmselect/graphite/metrics_find_response.qtpl:43
	qs422016 := string(qb422016.B)
//line app/vmselect/graphite/metrics_find_response.qtpl:43
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:43
	return qs422016
//line app/vmselect/graphite/metrics_find_response.qtpl:43
}

//line app/vmselect/graphite/metrics_find_response.qtpl:45
func streammetricsFindResponseTreeJSON(qw422016 *qt422016.Writer, paths []string, delimiter string, addWildcards bool) {
//line app/vmselect/graphite/metrics_find_response.qtpl:45
	qw422016.N().S(`[`)
//line app/vmselect/graphite/metrics_find_response.qtpl:48
	if len(paths) > 1 {
		sort.Strings(paths)
		// Substitute `path` and `path<delimiter>` with `path<delimiter><delimiter>`.
		// Such path is treated specially during rendering - see the code below for details.
		dst := paths[:1]
		for _, path := range paths[1:] {
			prevPath := dst[len(dst)-1]
			if len(path) == len(prevPath)+1 && strings.HasSuffix(path, delimiter) && strings.HasPrefix(path, prevPath) {
				// The path is equivalent to <prevPath> + <delimiter>.
				// Overwrite prevPath with <path> + <delimiter> as carbonapi does,
				// i.e. the resulting path ends with a double delimiter.
				dst[len(dst)-1] = path + delimiter
				continue
			}
			dst = append(dst, path)
		}
		paths = dst
	}

//line app/vmselect/graphite/metrics_find_response.qtpl:68
	for i, path := range paths {
//line app/vmselect/graphite/metrics_find_response.qtpl:68
		qw422016.N().S(`{`)
//line app/vmselect/graphite/metrics_find_response.qtpl:71
		id := path
		allowChildren := "0"
		isLeaf := "1"
		if strings.HasSuffix(id, delimiter) {
			if strings.HasSuffix(id[:len(id)-1], delimiter) {
				// Special case when id ends with a double delimiter.
				// See the code above for details.
				id = id[:len(id)-2]
			}
			allowChildren = "1"
			isLeaf = "0"
		}

//line app/vmselect/graphite/metrics_find_response.qtpl:83
		qw422016.N().S(`"id":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:84
		qw422016.N().Q(id)
//line app/vmselect/graphite/metrics_find_response.qtpl:84
		qw422016.N().S(`,"text":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:85
		streammetricPathName(qw422016, path, delimiter)
//line app/vmselect/graphite/metrics_find_response.qtpl:85
		qw422016.N().S(`,"allowChildren":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:86
		qw422016.N().S(allowChildren)
//line app/vmselect/graphite/metrics_find_response.qtpl:86
		qw422016.N().S(`,"expandable":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:87
		qw422016.N().S(allowChildren)
//line app/vmselect/graphite/metrics_find_response.qtpl:87
		qw422016.N().S(`,"leaf":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:88
		qw422016.N().S(isLeaf)
//line app/vmselect/graphite/metrics_find_response.qtpl:88
		qw422016.N().S(`}`)
//line app/vmselect/graphite/metrics_find_response.qtpl:90
		if i+1 < len(paths) {
//line app/vmselect/graphite/metrics_find_response.qtpl:90
			qw422016.N().S(`,`)
//line app/vmselect/graphite/metrics_find_response.qtpl:90
		}
//line app/vmselect/graphite/metrics_find_response.qtpl:91
	}
//line app/vmselect/graphite/metrics_find_response.qtpl:92
	if addWildcards && len(paths) > 1 {
//line app/vmselect/graphite/metrics_find_response.qtpl:92
		qw422016.N().S(`,{`)
//line app/vmselect/graphite/metrics_find_response.qtpl:95
		path := paths[0]
		for strings.HasSuffix(path, delimiter) {
			path = path[:len(path)-1]

@@ -232,60 +260,60 @@ func streammetricsFindResponseTreeJSON(qw422016 *qt422016.Writer, paths []string, delimiter string, addWildcards bool) {
		}
	}

//line app/vmselect/graphite/metrics_find_response.qtpl:114
		qw422016.N().S(`"id":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:115
		qw422016.N().Q(id)
//line app/vmselect/graphite/metrics_find_response.qtpl:115
		qw422016.N().S(`,"text": "*","allowChildren":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:117
		qw422016.N().S(allowChildren)
//line app/vmselect/graphite/metrics_find_response.qtpl:117
		qw422016.N().S(`,"expandable":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:118
		qw422016.N().S(allowChildren)
//line app/vmselect/graphite/metrics_find_response.qtpl:118
		qw422016.N().S(`,"leaf":`)
//line app/vmselect/graphite/metrics_find_response.qtpl:119
		qw422016.N().S(isLeaf)
//line app/vmselect/graphite/metrics_find_response.qtpl:119
		qw422016.N().S(`}`)
//line app/vmselect/graphite/metrics_find_response.qtpl:121
	}
//line app/vmselect/graphite/metrics_find_response.qtpl:121
	qw422016.N().S(`]`)
//line app/vmselect/graphite/metrics_find_response.qtpl:123
}

//line app/vmselect/graphite/metrics_find_response.qtpl:123
func writemetricsFindResponseTreeJSON(qq422016 qtio422016.Writer, paths []string, delimiter string, addWildcards bool) {
//line app/vmselect/graphite/metrics_find_response.qtpl:123
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:123
	streammetricsFindResponseTreeJSON(qw422016, paths, delimiter, addWildcards)
//line app/vmselect/graphite/metrics_find_response.qtpl:123
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:123
}

//line app/vmselect/graphite/metrics_find_response.qtpl:123
func metricsFindResponseTreeJSON(paths []string, delimiter string, addWildcards bool) string {
//line app/vmselect/graphite/metrics_find_response.qtpl:123
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/metrics_find_response.qtpl:123
	writemetricsFindResponseTreeJSON(qb422016, paths, delimiter, addWildcards)
//line app/vmselect/graphite/metrics_find_response.qtpl:123
	qs422016 := string(qb422016.B)
//line app/vmselect/graphite/metrics_find_response.qtpl:123
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:123
	return qs422016
//line app/vmselect/graphite/metrics_find_response.qtpl:123
}

//line app/vmselect/graphite/metrics_find_response.qtpl:125
func streammetricPathName(qw422016 *qt422016.Writer, path, delimiter string) {
//line app/vmselect/graphite/metrics_find_response.qtpl:127
	name := path
	for strings.HasSuffix(name, delimiter) {
		name = name[:len(name)-1]

@@ -294,33 +322,33 @@ func streammetricPathName(qw422016 *qt422016.Writer, path, delimiter string) {
		name = name[n+1:]
	}

//line app/vmselect/graphite/metrics_find_response.qtpl:135
	qw422016.N().Q(name)
//line app/vmselect/graphite/metrics_find_response.qtpl:136
}

//line app/vmselect/graphite/metrics_find_response.qtpl:136
func writemetricPathName(qq422016 qtio422016.Writer, path, delimiter string) {
//line app/vmselect/graphite/metrics_find_response.qtpl:136
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:136
	streammetricPathName(qw422016, path, delimiter)
//line app/vmselect/graphite/metrics_find_response.qtpl:136
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:136
}

//line app/vmselect/graphite/metrics_find_response.qtpl:136
func metricPathName(path, delimiter string) string {
//line app/vmselect/graphite/metrics_find_response.qtpl:136
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/graphite/metrics_find_response.qtpl:136
	writemetricPathName(qb422016, path, delimiter)
//line app/vmselect/graphite/metrics_find_response.qtpl:136
	qs422016 := string(qb422016.B)
//line app/vmselect/graphite/metrics_find_response.qtpl:136
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/graphite/metrics_find_response.qtpl:136
	return qs422016
//line app/vmselect/graphite/metrics_find_response.qtpl:136
}
@@ -203,6 +203,14 @@ func RequestHandler(w http.ResponseWriter, r *http.Request) bool {
			return true
		}
		return true
	case "/api/v1/export/native":
		exportNativeRequests.Inc()
		if err := prometheus.ExportNativeHandler(startTime, w, r); err != nil {
			exportNativeErrors.Inc()
			httpserver.Errorf(w, r, "error in %q: %s", r.URL.Path, err)
			return true
		}
		return true
	case "/federate":
		federateRequests.Inc()
		if err := prometheus.FederateHandler(startTime, w, r); err != nil {

@@ -321,6 +329,9 @@ var (
	exportRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/export"}`)
	exportErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/export"}`)

	exportNativeRequests = metrics.NewCounter(`vm_http_requests_total{path="/api/v1/export/native"}`)
	exportNativeErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/api/v1/export/native"}`)

	federateRequests = metrics.NewCounter(`vm_http_requests_total{path="/federate"}`)
	federateErrors   = metrics.NewCounter(`vm_http_request_errors_total{path="/federate"}`)
@@ -8,12 +8,12 @@ import (
	"runtime"
	"sort"
	"sync"
	"sync/atomic"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmstorage/promdb"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"

@@ -78,10 +78,11 @@ func (rss *Results) mustClose() {
var timeseriesWorkCh = make(chan *timeseriesWork, gomaxprocs*16)

type timeseriesWork struct {
	mustStop uint64
	rss      *Results
	pts      *packedTimeseries
	f        func(rs *Result, workerID uint) error
	doneCh   chan error

	rowsProcessed int
}

@@ -101,12 +102,19 @@ func timeseriesWorker(workerID uint) {
			tsw.doneCh <- fmt.Errorf("timeout exceeded during query execution: %s", rss.deadline.String())
			continue
		}
		if atomic.LoadUint64(&tsw.mustStop) != 0 {
			tsw.doneCh <- nil
			continue
		}
		if err := tsw.pts.Unpack(&rs, rss.tr, rss.fetchData); err != nil {
			tsw.doneCh <- fmt.Errorf("error during time series unpacking: %w", err)
			continue
		}
		if len(rs.Timestamps) > 0 || !rss.fetchData {
			if err := tsw.f(&rs, workerID); err != nil {
				tsw.doneCh <- err
				continue
			}
		}
		tsw.rowsProcessed = len(rs.Values)
		tsw.doneCh <- nil

@@ -123,9 +131,10 @@ func timeseriesWorker(workerID uint) {
//
// f shouldn't hold references to rs after returning.
// workerID is the id of the worker goroutine that calls f.
// Data processing is immediately stopped if f returns a non-nil error.
//
// rss becomes unusable after the call to RunParallel.
func (rss *Results) RunParallel(f func(rs *Result, workerID uint) error) error {
	defer rss.mustClose()

	// Feed workers with work.

@@ -151,6 +160,10 @@ func (rss *Results) RunParallel(f func(rs *Result, workerID uint) error) error {
			// Return just the first error, since other errors
			// likely duplicate the first one.
			firstErr = err
			// Notify all the tsws that they shouldn't be executed.
			for _, tsw := range tsws {
				atomic.StoreUint64(&tsw.mustStop, 1)
			}
		}
		rowsProcessedTotal += tsw.rowsProcessed
	}
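With the new signature, a callback error cancels the remaining work instead of being silently dropped. Here is a minimal sketch of a caller, assuming `rss` comes from ProcessSearchQuery; the `sendRow` serializer is a hypothetical stand-in for whatever writes each series to the client.

```go
package example

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
)

// consumeResults sketches a caller of the error-aware RunParallel.
// Any error returned from the callback sets mustStop on the remaining
// timeseriesWork items and aborts the whole run with that error.
func consumeResults(rss *netstorage.Results, sendRow func(rs *netstorage.Result) error) error {
	return rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
		if err := sendRow(rs); err != nil {
			return fmt.Errorf("cannot send series to client: %w", err)
		}
		return nil
	})
}
```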
@@ -428,33 +441,13 @@ func (sb *sortBlock) reset() {

func (sb *sortBlock) unpackFrom(tmpBlock *storage.Block, br storage.BlockRef, tr storage.TimeRange) error {
	tmpBlock.Reset()
	br.MustReadBlock(tmpBlock, true)
	if err := tmpBlock.UnmarshalData(); err != nil {
		return fmt.Errorf("cannot unmarshal block: %w", err)
	}
	sb.Timestamps, sb.Values = tmpBlock.AppendRowsWithTimeRangeFilter(sb.Timestamps[:0], sb.Values[:0], tr)
	skippedRows := tmpBlock.RowsCount() - len(sb.Timestamps)
	metricRowsSkipped.Add(skippedRows)
	return nil
}

@@ -658,7 +651,116 @@ func putStorageSearch(sr *storage.Search) {

var ssPool sync.Pool

// ExportBlocks searches for time series matching sq and calls f for each found block.
//
// f is called in parallel from multiple goroutines.
// Data processing is immediately stopped if f returns a non-nil error.
// It is the responsibility of f to call b.UnmarshalData before reading timestamps and values from the block.
// It is the responsibility of f to filter blocks according to the given tr.
func ExportBlocks(sq *storage.SearchQuery, deadline searchutils.Deadline, f func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error) error {
	if deadline.Exceeded() {
		return fmt.Errorf("timeout exceeded before starting data export: %s", deadline.String())
	}
	tfss, err := setupTfss(sq.TagFilterss)
	if err != nil {
		return err
	}
	tr := storage.TimeRange{
		MinTimestamp: sq.MinTimestamp,
		MaxTimestamp: sq.MaxTimestamp,
	}
	if err := vmstorage.CheckTimeRange(tr); err != nil {
		return err
	}

	vmstorage.WG.Add(1)
	defer vmstorage.WG.Done()

	sr := getStorageSearch()
	defer putStorageSearch(sr)
	sr.Init(vmstorage.Storage, tfss, tr, *maxMetricsPerSearch, deadline.Deadline())

	// Start workers that call f in parallel on available CPU cores.
	gomaxprocs := runtime.GOMAXPROCS(-1)
	workCh := make(chan *exportWork, gomaxprocs*8)
	var (
		errGlobal     error
		errGlobalLock sync.Mutex
		mustStop      uint32
	)
	var wg sync.WaitGroup
	wg.Add(gomaxprocs)
	for i := 0; i < gomaxprocs; i++ {
		go func() {
			defer wg.Done()
			for xw := range workCh {
				if err := f(&xw.mn, &xw.b, tr); err != nil {
					errGlobalLock.Lock()
					if errGlobal == nil {
						// Remember only the first error; it stops the feeding loop below.
						errGlobal = err
						atomic.StoreUint32(&mustStop, 1)
					}
					errGlobalLock.Unlock()
				}
				xw.reset()
				exportWorkPool.Put(xw)
			}
		}()
	}

	// Feed workers with work
	blocksRead := 0
	for sr.NextMetricBlock() {
		blocksRead++
		if deadline.Exceeded() {
			return fmt.Errorf("timeout exceeded while fetching data block #%d from storage: %s", blocksRead, deadline.String())
		}
		if atomic.LoadUint32(&mustStop) != 0 {
			break
		}
		xw := exportWorkPool.Get().(*exportWork)
		if err := xw.mn.Unmarshal(sr.MetricBlockRef.MetricName); err != nil {
			return fmt.Errorf("cannot unmarshal metricName for block #%d: %w", blocksRead, err)
		}
		sr.MetricBlockRef.BlockRef.MustReadBlock(&xw.b, true)
		workCh <- xw
	}
	close(workCh)

	// Wait for workers to finish.
	wg.Wait()

	// Check errors.
	err = sr.Error()
	if err == nil {
		err = errGlobal
	}
	if err != nil {
		if errors.Is(err, storage.ErrDeadlineExceeded) {
			return fmt.Errorf("timeout exceeded during the query: %s", deadline.String())
		}
		return fmt.Errorf("search error after reading %d data blocks: %w", blocksRead, err)
	}
	return nil
}

type exportWork struct {
	mn storage.MetricName
	b  storage.Block
}

func (xw *exportWork) reset() {
	xw.mn.Reset()
	xw.b.Reset()
}

var exportWorkPool = &sync.Pool{
	New: func() interface{} {
		return &exportWork{}
	},
}

// ProcessSearchQuery performs sq until the given deadline.
//
// Results.RunParallel or Results.Cancel must be called on the returned Results.
func ProcessSearchQuery(sq *storage.SearchQuery, fetchData bool, deadline searchutils.Deadline) (*Results, error) {
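The doc comment on ExportBlocks puts two duties on the callback: unmarshal the block itself and filter it by the given tr. A hypothetical caller honoring both could look like the sketch below; the row counting is illustrative only and is not part of the actual export handlers.

```go
package example

import (
	"fmt"
	"sync/atomic"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)

// countExportedRows fulfills both callback responsibilities: it calls
// b.UnmarshalData itself and drops timestamps outside tr. Since f runs
// concurrently on all CPU cores, the counter is updated atomically.
func countExportedRows(sq *storage.SearchQuery, deadline searchutils.Deadline) (uint64, error) {
	var rows uint64
	err := netstorage.ExportBlocks(sq, deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
		if err := b.UnmarshalData(); err != nil {
			return fmt.Errorf("cannot unmarshal block: %w", err)
		}
		for _, ts := range b.Timestamps() {
			if ts >= tr.MinTimestamp && ts <= tr.MaxTimestamp {
				atomic.AddUint64(&rows, 1)
			}
		}
		return nil
	})
	return rows, err
}
```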
@ -1,30 +1,29 @@
{% import (
	"github.com/valyala/quicktemplate"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
) %}

{% stripspace %}

{% func ExportPrometheusLine(rs *netstorage.Result) %}
{% if len(rs.Timestamps) == 0 %}{% return %}{% endif %}
{% func ExportPrometheusLine(xb *exportBlock) %}
{% if len(xb.timestamps) == 0 %}{% return %}{% endif %}
{% code bb := quicktemplate.AcquireByteBuffer() %}
{% code writeprometheusMetricName(bb, &rs.MetricName) %}
{% for i, ts := range rs.Timestamps %}
{% code writeprometheusMetricName(bb, xb.mn) %}
{% for i, ts := range xb.timestamps %}
{%z= bb.B %}{% space %}
{%f= rs.Values[i] %}{% space %}
{%f= xb.values[i] %}{% space %}
{%dl= ts %}{% newline %}
{% endfor %}
{% code quicktemplate.ReleaseByteBuffer(bb) %}
{% endfunc %}

{% func ExportJSONLine(rs *netstorage.Result) %}
{% if len(rs.Timestamps) == 0 %}{% return %}{% endif %}
{% func ExportJSONLine(xb *exportBlock) %}
{% if len(xb.timestamps) == 0 %}{% return %}{% endif %}
{
	"metric":{%= metricNameObject(&rs.MetricName) %},
	"metric":{%= metricNameObject(xb.mn) %},
	"values":[
		{% if len(rs.Values) > 0 %}
			{% code values := rs.Values %}
		{% if len(xb.values) > 0 %}
			{% code values := xb.values %}
			{%f= values[0] %}
			{% code values = values[1:] %}
			{% for _, v := range values %}

@ -33,8 +32,8 @@
		{% endif %}
	],
	"timestamps":[
		{% if len(rs.Timestamps) > 0 %}
			{% code timestamps := rs.Timestamps %}
		{% if len(xb.timestamps) > 0 %}
			{% code timestamps := xb.timestamps %}
			{%dl= timestamps[0] %}
			{% code timestamps = timestamps[1:] %}
			{% for _, ts := range timestamps %}

@ -45,10 +44,10 @@
}{% newline %}
{% endfunc %}

{% func ExportPromAPILine(rs *netstorage.Result) %}
{% func ExportPromAPILine(xb *exportBlock) %}
{
	"metric": {%= metricNameObject(&rs.MetricName) %},
	"values": {%= valuesWithTimestamps(rs.Values, rs.Timestamps) %}
	"metric": {%= metricNameObject(xb.mn) %},
	"values": {%= valuesWithTimestamps(xb.values, xb.timestamps) %}
}
{% endfunc %}

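For reference, each line emitted by the Prometheus-format export template above has the shape `metric_name{tags} value timestamp`, with the timestamp in milliseconds; a sketch with made-up values:

```
foo_bar{instance="host-123",job="node_exporter"} 12.34 1562529018000
```
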
@ -6,380 +6,379 @@ package prometheus
//line app/vmselect/prometheus/export.qtpl:1
import (
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
	"github.com/valyala/quicktemplate"
)

//line app/vmselect/prometheus/export.qtpl:9
//line app/vmselect/prometheus/export.qtpl:8
import (
	qtio422016 "io"

	qt422016 "github.com/valyala/quicktemplate"
)

//line app/vmselect/prometheus/export.qtpl:9
//line app/vmselect/prometheus/export.qtpl:8
var (
	_ = qtio422016.Copy
	_ = qt422016.AcquireByteBuffer
)

//line app/vmselect/prometheus/export.qtpl:8
func StreamExportPrometheusLine(qw422016 *qt422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:9
	if len(xb.timestamps) == 0 {
//line app/vmselect/prometheus/export.qtpl:9
func StreamExportPrometheusLine(qw422016 *qt422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:10
	if len(rs.Timestamps) == 0 {
//line app/vmselect/prometheus/export.qtpl:10
		return
//line app/vmselect/prometheus/export.qtpl:10
//line app/vmselect/prometheus/export.qtpl:9
	}
//line app/vmselect/prometheus/export.qtpl:11
//line app/vmselect/prometheus/export.qtpl:10
	bb := quicktemplate.AcquireByteBuffer()

//line app/vmselect/prometheus/export.qtpl:12
	writeprometheusMetricName(bb, &rs.MetricName)
//line app/vmselect/prometheus/export.qtpl:11
	writeprometheusMetricName(bb, xb.mn)

//line app/vmselect/prometheus/export.qtpl:12
	for i, ts := range xb.timestamps {
//line app/vmselect/prometheus/export.qtpl:13
	for i, ts := range rs.Timestamps {
//line app/vmselect/prometheus/export.qtpl:14
		qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:13
		qw422016.N().S(` `)
//line app/vmselect/prometheus/export.qtpl:14
		qw422016.N().F(xb.values[i])
//line app/vmselect/prometheus/export.qtpl:14
		qw422016.N().S(` `)
//line app/vmselect/prometheus/export.qtpl:15
		qw422016.N().F(rs.Values[i])
//line app/vmselect/prometheus/export.qtpl:15
		qw422016.N().S(` `)
//line app/vmselect/prometheus/export.qtpl:16
		qw422016.N().DL(ts)
//line app/vmselect/prometheus/export.qtpl:16
//line app/vmselect/prometheus/export.qtpl:15
		qw422016.N().S(`
`)
//line app/vmselect/prometheus/export.qtpl:17
//line app/vmselect/prometheus/export.qtpl:16
	}
//line app/vmselect/prometheus/export.qtpl:18
//line app/vmselect/prometheus/export.qtpl:17
	quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:19
//line app/vmselect/prometheus/export.qtpl:18
}

//line app/vmselect/prometheus/export.qtpl:19
func WriteExportPrometheusLine(qq422016 qtio422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:19
//line app/vmselect/prometheus/export.qtpl:18
func WriteExportPrometheusLine(qq422016 qtio422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:18
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:19
	StreamExportPrometheusLine(qw422016, rs)
//line app/vmselect/prometheus/export.qtpl:19
//line app/vmselect/prometheus/export.qtpl:18
	StreamExportPrometheusLine(qw422016, xb)
//line app/vmselect/prometheus/export.qtpl:18
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:19
//line app/vmselect/prometheus/export.qtpl:18
}

//line app/vmselect/prometheus/export.qtpl:19
func ExportPrometheusLine(rs *netstorage.Result) string {
//line app/vmselect/prometheus/export.qtpl:19
//line app/vmselect/prometheus/export.qtpl:18
func ExportPrometheusLine(xb *exportBlock) string {
//line app/vmselect/prometheus/export.qtpl:18
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:19
	WriteExportPrometheusLine(qb422016, rs)
//line app/vmselect/prometheus/export.qtpl:19
//line app/vmselect/prometheus/export.qtpl:18
	WriteExportPrometheusLine(qb422016, xb)
//line app/vmselect/prometheus/export.qtpl:18
	qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:19
//line app/vmselect/prometheus/export.qtpl:18
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:19
//line app/vmselect/prometheus/export.qtpl:18
	return qs422016
//line app/vmselect/prometheus/export.qtpl:19
//line app/vmselect/prometheus/export.qtpl:18
}

//line app/vmselect/prometheus/export.qtpl:20
func StreamExportJSONLine(qw422016 *qt422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:21
	if len(xb.timestamps) == 0 {
//line app/vmselect/prometheus/export.qtpl:21
func StreamExportJSONLine(qw422016 *qt422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:22
	if len(rs.Timestamps) == 0 {
//line app/vmselect/prometheus/export.qtpl:22
		return
//line app/vmselect/prometheus/export.qtpl:22
//line app/vmselect/prometheus/export.qtpl:21
	}
//line app/vmselect/prometheus/export.qtpl:22
//line app/vmselect/prometheus/export.qtpl:21
	qw422016.N().S(`{"metric":`)
//line app/vmselect/prometheus/export.qtpl:24
	streammetricNameObject(qw422016, &rs.MetricName)
//line app/vmselect/prometheus/export.qtpl:24
//line app/vmselect/prometheus/export.qtpl:23
	streammetricNameObject(qw422016, xb.mn)
//line app/vmselect/prometheus/export.qtpl:23
	qw422016.N().S(`,"values":[`)
//line app/vmselect/prometheus/export.qtpl:25
	if len(xb.values) > 0 {
//line app/vmselect/prometheus/export.qtpl:26
	if len(rs.Values) > 0 {
//line app/vmselect/prometheus/export.qtpl:27
		values := rs.Values
		values := xb.values

//line app/vmselect/prometheus/export.qtpl:28
//line app/vmselect/prometheus/export.qtpl:27
		qw422016.N().F(values[0])
//line app/vmselect/prometheus/export.qtpl:29
//line app/vmselect/prometheus/export.qtpl:28
		values = values[1:]

//line app/vmselect/prometheus/export.qtpl:30
//line app/vmselect/prometheus/export.qtpl:29
		for _, v := range values {
//line app/vmselect/prometheus/export.qtpl:30
//line app/vmselect/prometheus/export.qtpl:29
			qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:31
//line app/vmselect/prometheus/export.qtpl:30
			qw422016.N().F(v)
//line app/vmselect/prometheus/export.qtpl:32
//line app/vmselect/prometheus/export.qtpl:31
		}
//line app/vmselect/prometheus/export.qtpl:33
//line app/vmselect/prometheus/export.qtpl:32
	}
//line app/vmselect/prometheus/export.qtpl:33
//line app/vmselect/prometheus/export.qtpl:32
	qw422016.N().S(`],"timestamps":[`)
//line app/vmselect/prometheus/export.qtpl:35
	if len(xb.timestamps) > 0 {
//line app/vmselect/prometheus/export.qtpl:36
	if len(rs.Timestamps) > 0 {
//line app/vmselect/prometheus/export.qtpl:37
		timestamps := rs.Timestamps
		timestamps := xb.timestamps

//line app/vmselect/prometheus/export.qtpl:38
//line app/vmselect/prometheus/export.qtpl:37
		qw422016.N().DL(timestamps[0])
//line app/vmselect/prometheus/export.qtpl:39
//line app/vmselect/prometheus/export.qtpl:38
		timestamps = timestamps[1:]

//line app/vmselect/prometheus/export.qtpl:40
//line app/vmselect/prometheus/export.qtpl:39
		for _, ts := range timestamps {
//line app/vmselect/prometheus/export.qtpl:40
//line app/vmselect/prometheus/export.qtpl:39
			qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:41
//line app/vmselect/prometheus/export.qtpl:40
			qw422016.N().DL(ts)
//line app/vmselect/prometheus/export.qtpl:42
//line app/vmselect/prometheus/export.qtpl:41
		}
//line app/vmselect/prometheus/export.qtpl:43
//line app/vmselect/prometheus/export.qtpl:42
	}
//line app/vmselect/prometheus/export.qtpl:43
//line app/vmselect/prometheus/export.qtpl:42
	qw422016.N().S(`]}`)
//line app/vmselect/prometheus/export.qtpl:45
//line app/vmselect/prometheus/export.qtpl:44
	qw422016.N().S(`
`)
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:45
}

//line app/vmselect/prometheus/export.qtpl:46
func WriteExportJSONLine(qq422016 qtio422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:45
func WriteExportJSONLine(qq422016 qtio422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:45
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:46
	StreamExportJSONLine(qw422016, rs)
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:45
	StreamExportJSONLine(qw422016, xb)
//line app/vmselect/prometheus/export.qtpl:45
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:45
}

//line app/vmselect/prometheus/export.qtpl:46
func ExportJSONLine(rs *netstorage.Result) string {
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:45
func ExportJSONLine(xb *exportBlock) string {
//line app/vmselect/prometheus/export.qtpl:45
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:46
	WriteExportJSONLine(qb422016, rs)
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:45
	WriteExportJSONLine(qb422016, xb)
//line app/vmselect/prometheus/export.qtpl:45
	qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:45
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:45
	return qs422016
//line app/vmselect/prometheus/export.qtpl:46
//line app/vmselect/prometheus/export.qtpl:45
}

//line app/vmselect/prometheus/export.qtpl:48
func StreamExportPromAPILine(qw422016 *qt422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:48
//line app/vmselect/prometheus/export.qtpl:47
func StreamExportPromAPILine(qw422016 *qt422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:47
	qw422016.N().S(`{"metric":`)
//line app/vmselect/prometheus/export.qtpl:50
	streammetricNameObject(qw422016, &rs.MetricName)
//line app/vmselect/prometheus/export.qtpl:50
//line app/vmselect/prometheus/export.qtpl:49
	streammetricNameObject(qw422016, xb.mn)
//line app/vmselect/prometheus/export.qtpl:49
	qw422016.N().S(`,"values":`)
//line app/vmselect/prometheus/export.qtpl:51
	streamvaluesWithTimestamps(qw422016, rs.Values, rs.Timestamps)
//line app/vmselect/prometheus/export.qtpl:51
//line app/vmselect/prometheus/export.qtpl:50
	streamvaluesWithTimestamps(qw422016, xb.values, xb.timestamps)
//line app/vmselect/prometheus/export.qtpl:50
	qw422016.N().S(`}`)
//line app/vmselect/prometheus/export.qtpl:53
//line app/vmselect/prometheus/export.qtpl:52
}

//line app/vmselect/prometheus/export.qtpl:53
func WriteExportPromAPILine(qq422016 qtio422016.Writer, rs *netstorage.Result) {
//line app/vmselect/prometheus/export.qtpl:53
//line app/vmselect/prometheus/export.qtpl:52
func WriteExportPromAPILine(qq422016 qtio422016.Writer, xb *exportBlock) {
//line app/vmselect/prometheus/export.qtpl:52
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:53
	StreamExportPromAPILine(qw422016, rs)
//line app/vmselect/prometheus/export.qtpl:53
//line app/vmselect/prometheus/export.qtpl:52
	StreamExportPromAPILine(qw422016, xb)
//line app/vmselect/prometheus/export.qtpl:52
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:53
//line app/vmselect/prometheus/export.qtpl:52
}

//line app/vmselect/prometheus/export.qtpl:53
func ExportPromAPILine(rs *netstorage.Result) string {
//line app/vmselect/prometheus/export.qtpl:53
//line app/vmselect/prometheus/export.qtpl:52
func ExportPromAPILine(xb *exportBlock) string {
//line app/vmselect/prometheus/export.qtpl:52
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:53
	WriteExportPromAPILine(qb422016, rs)
//line app/vmselect/prometheus/export.qtpl:53
//line app/vmselect/prometheus/export.qtpl:52
	WriteExportPromAPILine(qb422016, xb)
//line app/vmselect/prometheus/export.qtpl:52
	qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:53
//line app/vmselect/prometheus/export.qtpl:52
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:53
//line app/vmselect/prometheus/export.qtpl:52
	return qs422016
//line app/vmselect/prometheus/export.qtpl:53
//line app/vmselect/prometheus/export.qtpl:52
}

//line app/vmselect/prometheus/export.qtpl:55
//line app/vmselect/prometheus/export.qtpl:54
func StreamExportPromAPIResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:55
//line app/vmselect/prometheus/export.qtpl:54
	qw422016.N().S(`{"status":"success","data":{"resultType":"matrix","result":[`)
//line app/vmselect/prometheus/export.qtpl:61
//line app/vmselect/prometheus/export.qtpl:60
	bb, ok := <-resultsCh

//line app/vmselect/prometheus/export.qtpl:62
//line app/vmselect/prometheus/export.qtpl:61
	if ok {
//line app/vmselect/prometheus/export.qtpl:63
//line app/vmselect/prometheus/export.qtpl:62
		qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:64
//line app/vmselect/prometheus/export.qtpl:63
		quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:65
//line app/vmselect/prometheus/export.qtpl:64
		for bb := range resultsCh {
//line app/vmselect/prometheus/export.qtpl:65
//line app/vmselect/prometheus/export.qtpl:64
			qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:66
//line app/vmselect/prometheus/export.qtpl:65
			qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:67
//line app/vmselect/prometheus/export.qtpl:66
			quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:68
//line app/vmselect/prometheus/export.qtpl:67
		}
//line app/vmselect/prometheus/export.qtpl:69
//line app/vmselect/prometheus/export.qtpl:68
	}
//line app/vmselect/prometheus/export.qtpl:69
//line app/vmselect/prometheus/export.qtpl:68
	qw422016.N().S(`]}}`)
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
}

//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
func WriteExportPromAPIResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
	StreamExportPromAPIResponse(qw422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
}

//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
func ExportPromAPIResponse(resultsCh <-chan *quicktemplate.ByteBuffer) string {
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
	WriteExportPromAPIResponse(qb422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
	qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
	return qs422016
//line app/vmselect/prometheus/export.qtpl:73
//line app/vmselect/prometheus/export.qtpl:72
}

//line app/vmselect/prometheus/export.qtpl:75
//line app/vmselect/prometheus/export.qtpl:74
func StreamExportStdResponse(qw422016 *qt422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:76
//line app/vmselect/prometheus/export.qtpl:75
	for bb := range resultsCh {
//line app/vmselect/prometheus/export.qtpl:77
//line app/vmselect/prometheus/export.qtpl:76
		qw422016.N().Z(bb.B)
//line app/vmselect/prometheus/export.qtpl:78
//line app/vmselect/prometheus/export.qtpl:77
		quicktemplate.ReleaseByteBuffer(bb)

//line app/vmselect/prometheus/export.qtpl:79
//line app/vmselect/prometheus/export.qtpl:78
	}
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
}

//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
func WriteExportStdResponse(qq422016 qtio422016.Writer, resultsCh <-chan *quicktemplate.ByteBuffer) {
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
	StreamExportStdResponse(qw422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
}

//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
func ExportStdResponse(resultsCh <-chan *quicktemplate.ByteBuffer) string {
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
	WriteExportStdResponse(qb422016, resultsCh)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
	qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
	return qs422016
//line app/vmselect/prometheus/export.qtpl:80
//line app/vmselect/prometheus/export.qtpl:79
}

//line app/vmselect/prometheus/export.qtpl:82
//line app/vmselect/prometheus/export.qtpl:81
func streamprometheusMetricName(qw422016 *qt422016.Writer, mn *storage.MetricName) {
//line app/vmselect/prometheus/export.qtpl:83
//line app/vmselect/prometheus/export.qtpl:82
	qw422016.N().Z(mn.MetricGroup)
//line app/vmselect/prometheus/export.qtpl:84
//line app/vmselect/prometheus/export.qtpl:83
	if len(mn.Tags) > 0 {
//line app/vmselect/prometheus/export.qtpl:84
//line app/vmselect/prometheus/export.qtpl:83
		qw422016.N().S(`{`)
//line app/vmselect/prometheus/export.qtpl:86
//line app/vmselect/prometheus/export.qtpl:85
		tags := mn.Tags

//line app/vmselect/prometheus/export.qtpl:87
//line app/vmselect/prometheus/export.qtpl:86
		qw422016.N().Z(tags[0].Key)
//line app/vmselect/prometheus/export.qtpl:87
//line app/vmselect/prometheus/export.qtpl:86
		qw422016.N().S(`=`)
//line app/vmselect/prometheus/export.qtpl:87
//line app/vmselect/prometheus/export.qtpl:86
		qw422016.N().QZ(tags[0].Value)
//line app/vmselect/prometheus/export.qtpl:88
//line app/vmselect/prometheus/export.qtpl:87
		tags = tags[1:]

//line app/vmselect/prometheus/export.qtpl:89
//line app/vmselect/prometheus/export.qtpl:88
		for i := range tags {
//line app/vmselect/prometheus/export.qtpl:90
//line app/vmselect/prometheus/export.qtpl:89
			tag := &tags[i]

//line app/vmselect/prometheus/export.qtpl:90
//line app/vmselect/prometheus/export.qtpl:89
			qw422016.N().S(`,`)
//line app/vmselect/prometheus/export.qtpl:91
//line app/vmselect/prometheus/export.qtpl:90
			qw422016.N().Z(tag.Key)
//line app/vmselect/prometheus/export.qtpl:91
//line app/vmselect/prometheus/export.qtpl:90
			qw422016.N().S(`=`)
//line app/vmselect/prometheus/export.qtpl:91
//line app/vmselect/prometheus/export.qtpl:90
			qw422016.N().QZ(tag.Value)
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:91
		}
//line app/vmselect/prometheus/export.qtpl:92
//line app/vmselect/prometheus/export.qtpl:91
		qw422016.N().S(`}`)
//line app/vmselect/prometheus/export.qtpl:94
//line app/vmselect/prometheus/export.qtpl:93
	}
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
}

//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
func writeprometheusMetricName(qq422016 qtio422016.Writer, mn *storage.MetricName) {
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
	qw422016 := qt422016.AcquireWriter(qq422016)
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
	streamprometheusMetricName(qw422016, mn)
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
	qt422016.ReleaseWriter(qw422016)
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
}

//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
func prometheusMetricName(mn *storage.MetricName) string {
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
	qb422016 := qt422016.AcquireByteBuffer()
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
	writeprometheusMetricName(qb422016, mn)
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
	qs422016 := string(qb422016.B)
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
	qt422016.ReleaseByteBuffer(qb422016)
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
	return qs422016
//line app/vmselect/prometheus/export.qtpl:95
//line app/vmselect/prometheus/export.qtpl:94
}

@ -11,9 +11,12 @@ import (
	"sync"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/bufferedwriter"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/netstorage"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/promql"
	"github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/searchutils"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver"

@ -84,34 +87,117 @@ func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request
		return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
	}

	resultsCh := make(chan *quicktemplate.ByteBuffer)
	doneCh := make(chan error)
	go func() {
		err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
			bb := quicktemplate.AcquireByteBuffer()
			WriteFederate(bb, rs)
			resultsCh <- bb
		})
		close(resultsCh)
		doneCh <- err
	}()

	w.Header().Set("Content-Type", "text/plain")
	for bb := range resultsCh {
		w.Write(bb.B)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
		if err := bw.Error(); err != nil {
			return err
		}
		bb := quicktemplate.AcquireByteBuffer()
		WriteFederate(bb, rs)
		_, err := bw.Write(bb.B)
		quicktemplate.ReleaseByteBuffer(bb)
	}

	err = <-doneCh
	return err
	})
	if err != nil {
		return fmt.Errorf("error during data fetching: %w", err)
	}
	if err := bw.Flush(); err != nil {
		return err
	}
	federateDuration.UpdateDuration(startTime)
	return nil
}

var federateDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/federate"}`)

// ExportNativeHandler exports data in native format from /api/v1/export/native.
func ExportNativeHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
	ct := startTime.UnixNano() / 1e6
	if err := r.ParseForm(); err != nil {
		return fmt.Errorf("cannot parse request form values: %w", err)
	}
	matches := r.Form["match[]"]
	if len(matches) == 0 {
		// Maintain backwards compatibility
		match := r.FormValue("match")
		if len(match) == 0 {
			return fmt.Errorf("missing `match[]` arg")
		}
		matches = []string{match}
	}
	start, err := searchutils.GetTime(r, "start", 0)
	if err != nil {
		return err
	}
	end, err := searchutils.GetTime(r, "end", ct)
	if err != nil {
		return err
	}
	deadline := searchutils.GetDeadlineForExport(r, startTime)
	tagFilterss, err := getTagFilterssFromMatches(matches)
	if err != nil {
		return err
	}
	sq := &storage.SearchQuery{
		MinTimestamp: start,
		MaxTimestamp: end,
		TagFilterss:  tagFilterss,
	}
	w.Header().Set("Content-Type", "VictoriaMetrics/native")
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)

	// Marshal tr
	trBuf := make([]byte, 0, 16)
	trBuf = encoding.MarshalInt64(trBuf, start)
	trBuf = encoding.MarshalInt64(trBuf, end)
	_, _ = bw.Write(trBuf)

	// Marshal native blocks.
	err = netstorage.ExportBlocks(sq, deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
		if err := bw.Error(); err != nil {
			return err
		}
		dstBuf := bbPool.Get()
		tmpBuf := bbPool.Get()
		dst := dstBuf.B
		tmp := tmpBuf.B

		// Marshal mn
		tmp = mn.Marshal(tmp[:0])
		dst = encoding.MarshalUint32(dst, uint32(len(tmp)))
		dst = append(dst, tmp...)

		// Marshal b
		tmp = b.MarshalPortable(tmp[:0])
		dst = encoding.MarshalUint32(dst, uint32(len(tmp)))
		dst = append(dst, tmp...)

		tmpBuf.B = tmp
		bbPool.Put(tmpBuf)

		_, err := bw.Write(dst)

		dstBuf.B = dst
		bbPool.Put(dstBuf)
		return err
	})
	if err != nil {
		return err
	}
	if err := bw.Flush(); err != nil {
		return err
	}
	exportNativeDuration.UpdateDuration(startTime)
	return nil
}

var exportNativeDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/export/native"}`)

var bbPool bytesutil.ByteBufferPool
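For orientation, the native stream assembled above is: a 16-byte header carrying the requested time range (two int64s marshaled via `lib/encoding`), followed by repeated records of a length-prefixed marshaled `MetricName` and a length-prefixed portably marshaled `Block`. Below is a minimal client-side sketch for walking such a stream; it assumes `lib/encoding`'s big-endian integer layout and only reports record sizes, since fully decoding the payloads would require the `lib/storage` unmarshalers:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// readPrefixed reads a big-endian uint32 length followed by that many bytes.
func readPrefixed(r io.Reader) ([]byte, error) {
	var lenBuf [4]byte
	if _, err := io.ReadFull(r, lenBuf[:]); err != nil {
		return nil, err
	}
	buf := make([]byte, binary.BigEndian.Uint32(lenBuf[:]))
	if _, err := io.ReadFull(r, buf); err != nil {
		return nil, err
	}
	return buf, nil
}

// readNativeStream walks the byte stream produced by /api/v1/export/native.
func readNativeStream(r io.Reader) error {
	// 16-byte header: the requested time range as two big-endian int64s (assumption).
	var hdr [16]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return fmt.Errorf("cannot read time range header: %w", err)
	}
	start := int64(binary.BigEndian.Uint64(hdr[0:8]))
	end := int64(binary.BigEndian.Uint64(hdr[8:16]))
	fmt.Printf("time range: [%d .. %d]\n", start, end)
	for {
		mn, err := readPrefixed(r) // marshaled storage.MetricName
		if err == io.EOF {
			return nil // clean end of stream
		}
		if err != nil {
			return fmt.Errorf("cannot read metricName: %w", err)
		}
		block, err := readPrefixed(r) // portably marshaled storage.Block
		if err != nil {
			return fmt.Errorf("cannot read block: %w", err)
		}
		fmt.Printf("metricName: %d bytes, block: %d bytes\n", len(mn), len(block))
	}
}

func main() {
	if err := readNativeStream(os.Stdin); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

A possible invocation, with placeholder hostname and match filter: `curl http://localhost:8428/api/v1/export/native -d 'match=foo' | go run reader.go`.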

// ExportHandler exports data in raw format from /api/v1/export.
func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) error {
	ct := startTime.UnixNano() / 1e6
@ -137,11 +223,12 @@ func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
	}
	format := r.FormValue("format")
	maxRowsPerLine := int(fastfloat.ParseInt64BestEffort(r.FormValue("max_rows_per_line")))
	reduceMemUsage := searchutils.GetBool(r, "reduce_mem_usage")
	deadline := searchutils.GetDeadlineForExport(r, startTime)
	if start >= end {
		end = start + defaultStep
	}
	if err := exportHandler(w, matches, start, end, format, maxRowsPerLine, deadline); err != nil {
	if err := exportHandler(w, matches, start, end, format, maxRowsPerLine, reduceMemUsage, deadline); err != nil {
		return fmt.Errorf("error when exporting data for queries=%q on the time range (start=%d, end=%d): %w", matches, start, end, err)
	}
	exportDuration.UpdateDuration(startTime)
@ -150,17 +237,34 @@ func ExportHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
var exportDuration = metrics.NewSummary(`vm_request_duration_seconds{path="/api/v1/export"}`)

func exportHandler(w http.ResponseWriter, matches []string, start, end int64, format string, maxRowsPerLine int, deadline searchutils.Deadline) error {
func exportHandler(w http.ResponseWriter, matches []string, start, end int64, format string, maxRowsPerLine int, reduceMemUsage bool, deadline searchutils.Deadline) error {
	writeResponseFunc := WriteExportStdResponse
	writeLineFunc := func(rs *netstorage.Result, resultsCh chan<- *quicktemplate.ByteBuffer) {
	writeLineFunc := func(xb *exportBlock, resultsCh chan<- *quicktemplate.ByteBuffer) {
		bb := quicktemplate.AcquireByteBuffer()
		WriteExportJSONLine(bb, rs)
		WriteExportJSONLine(bb, xb)
		resultsCh <- bb
	}
	contentType := "application/stream+json"
	if format == "prometheus" {
		contentType = "text/plain"
		writeLineFunc = func(xb *exportBlock, resultsCh chan<- *quicktemplate.ByteBuffer) {
			bb := quicktemplate.AcquireByteBuffer()
			WriteExportPrometheusLine(bb, xb)
			resultsCh <- bb
		}
	} else if format == "promapi" {
		writeResponseFunc = WriteExportPromAPIResponse
		writeLineFunc = func(xb *exportBlock, resultsCh chan<- *quicktemplate.ByteBuffer) {
			bb := quicktemplate.AcquireByteBuffer()
			WriteExportPromAPILine(bb, xb)
			resultsCh <- bb
		}
	}
	if maxRowsPerLine > 0 {
		writeLineFunc = func(rs *netstorage.Result, resultsCh chan<- *quicktemplate.ByteBuffer) {
			valuesOrig := rs.Values
			timestampsOrig := rs.Timestamps
		writeLineFuncOrig := writeLineFunc
		writeLineFunc = func(xb *exportBlock, resultsCh chan<- *quicktemplate.ByteBuffer) {
			valuesOrig := xb.values
			timestampsOrig := xb.timestamps
			values := valuesOrig
			timestamps := timestampsOrig
			for len(values) > 0 {
@ -177,30 +281,12 @@ func exportHandler(w http.ResponseWriter, matches []string, start, end int64, fo
					values = nil
					timestamps = nil
				}
				rs.Values = valuesChunk
				rs.Timestamps = timestampsChunk
				bb := quicktemplate.AcquireByteBuffer()
				WriteExportJSONLine(bb, rs)
				resultsCh <- bb
				xb.values = valuesChunk
				xb.timestamps = timestampsChunk
				writeLineFuncOrig(xb, resultsCh)
			}
			rs.Values = valuesOrig
			rs.Timestamps = timestampsOrig
		}
	}
	contentType := "application/stream+json"
	if format == "prometheus" {
		contentType = "text/plain"
		writeLineFunc = func(rs *netstorage.Result, resultsCh chan<- *quicktemplate.ByteBuffer) {
			bb := quicktemplate.AcquireByteBuffer()
			WriteExportPrometheusLine(bb, rs)
			resultsCh <- bb
		}
	} else if format == "promapi" {
		writeResponseFunc = WriteExportPromAPIResponse
		writeLineFunc = func(rs *netstorage.Result, resultsCh chan<- *quicktemplate.ByteBuffer) {
			bb := quicktemplate.AcquireByteBuffer()
			WriteExportPromAPILine(bb, rs)
			resultsCh <- bb
			xb.values = valuesOrig
			xb.timestamps = timestampsOrig
		}
	}

@ -213,28 +299,62 @@ func exportHandler(w http.ResponseWriter, matches []string, start, end int64, fo
		MaxTimestamp: end,
		TagFilterss:  tagFilterss,
	}
	rss, err := netstorage.ProcessSearchQuery(sq, true, deadline)
	if err != nil {
		return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
	}
	w.Header().Set("Content-Type", contentType)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)

	resultsCh := make(chan *quicktemplate.ByteBuffer, runtime.GOMAXPROCS(-1))
	doneCh := make(chan error)
	go func() {
		err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
			writeLineFunc(rs, resultsCh)
		})
		close(resultsCh)
		doneCh <- err
	}()
	if !reduceMemUsage {
		rss, err := netstorage.ProcessSearchQuery(sq, true, deadline)
		if err != nil {
			return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
		}
		go func() {
			err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
				if err := bw.Error(); err != nil {
					return err
				}
				xb := exportBlockPool.Get().(*exportBlock)
				xb.mn = &rs.MetricName
				xb.timestamps = rs.Timestamps
				xb.values = rs.Values
				writeLineFunc(xb, resultsCh)
				xb.reset()
				exportBlockPool.Put(xb)
				return nil
			})
			close(resultsCh)
			doneCh <- err
		}()
	} else {
		go func() {
			err := netstorage.ExportBlocks(sq, deadline, func(mn *storage.MetricName, b *storage.Block, tr storage.TimeRange) error {
				if err := bw.Error(); err != nil {
					return err
				}
				if err := b.UnmarshalData(); err != nil {
					return fmt.Errorf("cannot unmarshal block during export: %s", err)
				}
				xb := exportBlockPool.Get().(*exportBlock)
				xb.mn = mn
				xb.timestamps, xb.values = b.AppendRowsWithTimeRangeFilter(xb.timestamps[:0], xb.values[:0], tr)
				if len(xb.timestamps) > 0 {
					writeLineFunc(xb, resultsCh)
				}
				xb.reset()
				exportBlockPool.Put(xb)
				return nil
			})
			close(resultsCh)
			doneCh <- err
		}()
	}

	w.Header().Set("Content-Type", contentType)
	writeResponseFunc(w, resultsCh)

	// Consume all the data from resultsCh in the event writeResponseFunc
	// fails to consume all the data.
	for bb := range resultsCh {
		quicktemplate.ReleaseByteBuffer(bb)
	// writeResponseFunc must consume all the data from resultsCh.
	writeResponseFunc(bw, resultsCh)
	if err := bw.Flush(); err != nil {
		return err
	}
	err = <-doneCh
	if err != nil {
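A usage sketch for the `reduce_mem_usage` query arg handled above, which routes the export through the streaming `netstorage.ExportBlocks` path instead of buffering the full result set in RAM; the endpoint and arg come from this diff, while the hostname, match filter, and the `=1` truthy value are assumptions:

```bash
curl -G 'http://localhost:8428/api/v1/export' \
    -d 'match={__name__=~"measurement_.*"}' \
    -d 'reduce_mem_usage=1'
```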
@ -243,6 +363,24 @@ func exportHandler(w http.ResponseWriter, matches []string, start, end int64, fo
	return nil
}

type exportBlock struct {
	mn         *storage.MetricName
	timestamps []int64
	values     []float64
}

func (xb *exportBlock) reset() {
	xb.mn = nil
	xb.timestamps = xb.timestamps[:0]
	xb.values = xb.values[:0]
}

var exportBlockPool = &sync.Pool{
	New: func() interface{} {
		return &exportBlock{}
	},
}

// DeleteHandler processes /api/v1/admin/tsdb/delete_series prometheus API request.
//
// See https://prometheus.io/docs/prometheus/latest/querying/api/#delete-series
@ -317,7 +455,12 @@ func LabelValuesHandler(startTime time.Time, labelName string, w http.ResponseWr
	}

	w.Header().Set("Content-Type", "application/json")
	WriteLabelValuesResponse(w, labelValues)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteLabelValuesResponse(bw, labelValues)
	if err := bw.Flush(); err != nil {
		return err
	}
	labelValuesDuration.UpdateDuration(startTime)
	return nil
}
@ -358,14 +501,15 @@ func labelValuesWithMatches(labelName string, matches []string, start, end int64
	m := make(map[string]struct{})
	var mLock sync.Mutex
	err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
	err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
		labelValue := rs.MetricName.GetTagValue(labelName)
		if len(labelValue) == 0 {
			return
			return nil
		}
		mLock.Lock()
		m[string(labelValue)] = struct{}{}
		mLock.Unlock()
		return nil
	})
	if err != nil {
		return nil, fmt.Errorf("error when data fetching: %w", err)
@ -389,7 +533,12 @@ func LabelsCountHandler(startTime time.Time, w http.ResponseWriter, r *http.Requ
		return fmt.Errorf(`cannot obtain label entries: %w`, err)
	}
	w.Header().Set("Content-Type", "application/json")
	WriteLabelsCountResponse(w, labelEntries)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteLabelsCountResponse(bw, labelEntries)
	if err := bw.Flush(); err != nil {
		return err
	}
	labelsCountDuration.UpdateDuration(startTime)
	return nil
}
@ -435,7 +584,12 @@ func TSDBStatusHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
		return fmt.Errorf(`cannot obtain tsdb status for date=%d, topN=%d: %w`, date, topN, err)
	}
	w.Header().Set("Content-Type", "application/json")
	WriteTSDBStatusResponse(w, status)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteTSDBStatusResponse(bw, status)
	if err := bw.Flush(); err != nil {
		return err
	}
	tsdbStatusDuration.UpdateDuration(startTime)
	return nil
}
@ -480,7 +634,12 @@ func LabelsHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
	}

	w.Header().Set("Content-Type", "application/json")
	WriteLabelsResponse(w, labels)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteLabelsResponse(bw, labels)
	if err := bw.Flush(); err != nil {
		return err
	}
	labelsDuration.UpdateDuration(startTime)
	return nil
}
@ -508,7 +667,7 @@ func labelsWithMatches(matches []string, start, end int64, deadline searchutils.
	m := make(map[string]struct{})
	var mLock sync.Mutex
	err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
	err = rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
		mLock.Lock()
		tags := rs.MetricName.Tags
		for i := range tags {
@ -517,6 +676,7 @@ func labelsWithMatches(matches []string, start, end int64, deadline searchutils.
		}
		m["__name__"] = struct{}{}
		mLock.Unlock()
		return nil
	})
	if err != nil {
		return nil, fmt.Errorf("error when data fetching: %w", err)
@ -540,7 +700,12 @@ func SeriesCountHandler(startTime time.Time, w http.ResponseWriter, r *http.Requ
		return fmt.Errorf("cannot obtain series count: %w", err)
	}
	w.Header().Set("Content-Type", "application/json")
	WriteSeriesCountResponse(w, n)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteSeriesCountResponse(bw, n)
	if err := bw.Flush(); err != nil {
		return err
	}
	seriesCountDuration.UpdateDuration(startTime)
	return nil
}
@ -591,25 +756,28 @@ func SeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
		return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
	}

	w.Header().Set("Content-Type", "application/json")
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	resultsCh := make(chan *quicktemplate.ByteBuffer)
	doneCh := make(chan error)
	go func() {
		err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
		err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
			if err := bw.Error(); err != nil {
				return err
			}
			bb := quicktemplate.AcquireByteBuffer()
			writemetricNameObject(bb, &rs.MetricName)
			resultsCh <- bb
			return nil
		})
		close(resultsCh)
		doneCh <- err
	}()

	w.Header().Set("Content-Type", "application/json")
	WriteSeriesResponse(w, resultsCh)

	// Consume all the data from resultsCh in the event WriteSeriesResponse
	// fails to consume all the data.
	for bb := range resultsCh {
		quicktemplate.ReleaseByteBuffer(bb)
	// WriteSeriesResponse must consume all the data from resultsCh.
	WriteSeriesResponse(bw, resultsCh)
	if err := bw.Flush(); err != nil {
		return err
	}
	err = <-doneCh
	if err != nil {
@ -662,7 +830,7 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
	start -= offset
	end := start
	start = end - window
	if err := exportHandler(w, []string{childQuery}, start, end, "promapi", 0, deadline); err != nil {
	if err := exportHandler(w, []string{childQuery}, start, end, "promapi", 0, false, deadline); err != nil {
		return fmt.Errorf("error when exporting data for query=%q on the time range (start=%d, end=%d): %w", childQuery, start, end, err)
	}
	queryDuration.UpdateDuration(startTime)
@ -726,7 +894,12 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
	}

	w.Header().Set("Content-Type", "application/json")
	WriteQueryResponse(w, result)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteQueryResponse(bw, result)
	if err := bw.Flush(); err != nil {
		return err
	}
	queryDuration.UpdateDuration(startTime)
	return nil
}
@ -820,7 +993,12 @@ func queryRangeHandler(startTime time.Time, w http.ResponseWriter, query string,
	result = removeEmptyValuesAndTimeseries(result)

	w.Header().Set("Content-Type", "application/json")
	WriteQueryRangeResponse(w, result)
	bw := bufferedwriter.Get(w)
	defer bufferedwriter.Put(bw)
	WriteQueryRangeResponse(bw, result)
	if err := bw.Flush(); err != nil {
		return err
	}
	return nil
}

@ -741,7 +741,7 @@ func getRollupMemoryLimiter() *memoryLimiter {
func evalRollupWithIncrementalAggregate(name string, iafc *incrementalAggrFuncContext, rss *netstorage.Results, rcs []*rollupConfig,
	preFunc func(values []float64, timestamps []int64), sharedTimestamps []int64, removeMetricGroup bool) ([]*timeseries, error) {
	err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
	err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
		preFunc(rs.Values, rs.Timestamps)
		ts := getTimeseries()
		defer putTimeseries(ts)
@ -761,6 +761,7 @@ func evalRollupWithIncrementalAggregate(name string, iafc *incrementalAggrFuncCo
			ts.Timestamps = nil
			ts.denyReuse = false
		}
		return nil
	})
	if err != nil {
		return nil, err
@ -773,7 +774,7 @@ func evalRollupNoIncrementalAggregate(name string, rss *netstorage.Results, rcs
	preFunc func(values []float64, timestamps []int64), sharedTimestamps []int64, removeMetricGroup bool) ([]*timeseries, error) {
	tss := make([]*timeseries, 0, rss.Len()*len(rcs))
	var tssLock sync.Mutex
	err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) {
	err := rss.RunParallel(func(rs *netstorage.Result, workerID uint) error {
		preFunc(rs.Values, rs.Timestamps)
		for _, rc := range rcs {
			if tsm := newTimeseriesMap(name, sharedTimestamps, &rs.MetricName); tsm != nil {
@ -789,6 +790,7 @@ func evalRollupNoIncrementalAggregate(name string, rss *netstorage.Results, rcs
			tss = append(tss, &ts)
			tssLock.Unlock()
		}
		return nil
	})
	if err != nil {
		return nil, err

@ -403,6 +403,14 @@ func registerStorageMetrics() {
		return float64(idbm().AssistedMerges)
	})

	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/686
	metrics.NewGauge(`vm_merge_need_free_disk_space{type="storage/small"}`, func() float64 {
		return float64(tm().SmallMergeNeedFreeDiskSpace)
	})
	metrics.NewGauge(`vm_merge_need_free_disk_space{type="storage/big"}`, func() float64 {
		return float64(tm().BigMergeNeedFreeDiskSpace)
	})

	metrics.NewGauge(`vm_pending_rows{type="storage"}`, func() float64 {
		return float64(tm().PendingRows)
	})

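A quick way to eyeball the new gauges registered above on a running instance, assuming the standard `/metrics` self-monitoring endpoint (the hostname is a placeholder):

```bash
curl -s 'http://localhost:8428/metrics' | grep vm_merge_need_free_disk_space
```
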
@ -177,6 +177,7 @@ or [an alternative dashboard for VictoriaMetrics cluster](https://grafana.com/gr
This handler is disabled by default. It is exposed on a distinct TCP address set via the `-opentsdbHTTPListenAddr` command-line flag.
See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#sending-opentsdb-data-via-http-apiput-requests) for details.
- `prometheus/api/v1/import` - for importing data obtained via `api/v1/export` on `vmselect` (see below).
- `prometheus/api/v1/import/native` - for importing data obtained via `api/v1/export/native` on `vmselect` (see below).
- `prometheus/api/v1/import/csv` - for importing arbitrary CSV data. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-csv-data) for details.
- `prometheus/api/v1/import/prometheus` - for importing data in Prometheus exposition format. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-data-in-prometheus-exposition-format) for details.

@ -189,7 +190,8 @@ or [an alternative dashboard for VictoriaMetrics cluster](https://grafana.com/gr
- `api/v1/labels` - returns a [list of label names](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names).
- `api/v1/label/<label_name>/values` - returns values for the given `<label_name>` according to [the API](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values).
- `federate` - returns [federated metrics](https://prometheus.io/docs/prometheus/latest/federation/).
- `api/v1/export` - exports raw data. See [this article](https://medium.com/@valyala/analyzing-prometheus-data-with-external-tools-5f3e5e147639) for details.
- `api/v1/export` - exports raw data in JSON line format. See [this article](https://medium.com/@valyala/analyzing-prometheus-data-with-external-tools-5f3e5e147639) for details.
- `api/v1/export/native` - exports raw data in native binary format. It may be imported into another VictoriaMetrics instance via `api/v1/import/native` (see above).
- `api/v1/status/tsdb` - for time series stats. See [these docs](https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats) for details.
- `api/v1/status/active_queries` - for currently executed active queries. Note that every `vmselect` maintains an independent list of active queries,
which is returned in the response.

@ -91,7 +91,7 @@ The main differences between Cortex and VictoriaMetrics:
VictoriaMetrics may lose only a few seconds of recent data, which isn't synced to persistent storage yet.
See [this article for details](https://medium.com/@valyala/wal-usage-looks-broken-in-modern-time-series-databases-b62a627ab704).
- Cortex is usually slower and requires more CPU and RAM than VictoriaMetrics. See [this talk from Adidas at PromCon 2019](https://promcon.io/2019-munich/talks/remote-write-storage-wars/) and [other case studies](https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/CaseStudies).
- VictoriaMetrics accepts data in multiple popular data ingestion protocols in addition to the Prometheus remote_write protocol - InfluxDB, OpenTSDB, Graphite, CSV.
- VictoriaMetrics accepts data in multiple popular data ingestion protocols in addition to the Prometheus remote_write protocol - InfluxDB, OpenTSDB, Graphite, CSV, JSON, native binary.
See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-time-series-data) for details.

@ -112,7 +112,7 @@ The main differences between Cortex and VictoriaMetrics:
- Thanos may be harder to set up and operate compared to VictoriaMetrics, since it has more moving parts, which can be connected over less reliable networks.
See [this article for details](https://medium.com/faun/comparing-thanos-to-victoriametrics-cluster-b193bea1683).
- Thanos is usually slower and requires more CPU and RAM than VictoriaMetrics. See [this talk from Adidas at PromCon 2019](https://promcon.io/2019-munich/talks/remote-write-storage-wars/).
- VictoriaMetrics accepts data in multiple popular data ingestion protocols in addition to the Prometheus remote_write protocol - InfluxDB, OpenTSDB, Graphite, CSV.
- VictoriaMetrics accepts data in multiple popular data ingestion protocols in addition to the Prometheus remote_write protocol - InfluxDB, OpenTSDB, Graphite, CSV, JSON, native binary.
See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-time-series-data) for details.

@ -120,7 +120,7 @@ The main differences between Cortex and VictoriaMetrics:
- VictoriaMetrics requires [10x less RAM](https://medium.com/@valyala/insert-benchmarks-with-inch-influxdb-vs-victoriametrics-e31a41ae2893) and it [works faster](https://medium.com/@valyala/measuring-vertical-scalability-for-time-series-databases-in-google-cloud-92550d78d8ae).
- VictoriaMetrics provides a [better query language](https://medium.com/@valyala/promql-tutorial-for-beginners-9ab455142085) than InfluxQL or Flux.
- VictoriaMetrics accepts data in multiple popular data ingestion protocols in addition to InfluxDB - Prometheus remote_write, OpenTSDB, Graphite, CSV.
- VictoriaMetrics accepts data in multiple popular data ingestion protocols in addition to InfluxDB - Prometheus remote_write, OpenTSDB, Graphite, CSV, JSON, native binary.
See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-time-series-data) for details.

@ -2,6 +2,7 @@ Release process guidance

## Release version and Docker images

0. Document all the changes for the new release in [CHANGELOG.md](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/CHANGELOG.md).
1. Create a release tag with `git tag v1.xx.y`.
2. Run `make release` to create a `*.tar.gz` release archive with the corresponding `_checksums.txt` inside the `bin` directory.
3. Run `make publish` to create and publish the Docker images.

@ -77,7 +77,8 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
if `-graphiteListenAddr` is set.
* [OpenTSDB put message](#sending-data-via-telnet-put-protocol) if `-opentsdbListenAddr` is set.
* [HTTP OpenTSDB /api/put requests](#sending-opentsdb-data-via-http-apiput-requests) if `-opentsdbHTTPListenAddr` is set.
* [/api/v1/import](#how-to-import-time-series-data).
  * [JSON line format](#how-to-import-data-in-json-line-format).
  * [Native binary format](#how-to-import-data-in-native-format) - see the usage sketch after this list.
  * [Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format).
  * [Arbitrary CSV data](#how-to-import-csv-data).
* Supports metrics' relabeling. See [these docs](#relabeling) for details.
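
As an illustration of the native import path listed above, a sketch of copying data between two single-node instances; the endpoints come from this document, while the hostnames and the match filter are placeholders:

```bash
# Export all series in native binary format from the source instance.
curl http://source:8428/api/v1/export/native -d 'match={__name__=~".+"}' > exported.bin

# Import the dump into the destination instance.
curl -X POST http://destination:8428/api/v1/import/native -T exported.bin
```
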
@ -100,8 +101,6 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
* [How to send data from Graphite-compatible agents such as StatsD](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd)
* [Querying Graphite data](#querying-graphite-data)
* [How to send data from OpenTSDB-compatible agents](#how-to-send-data-from-opentsdb-compatible-agents)
* [How to import data in Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format)
* [How to import CSV data](#how-to-import-csv-data)
* [Prometheus querying API usage](#prometheus-querying-api-usage)
* [Prometheus querying API enhancements](#prometheus-querying-api-enhancements)
* [Graphite Metrics API usage](#graphite-metrics-api-usage)
@ -117,7 +116,13 @@ See [features available for enterprise customers](https://github.com/VictoriaMet
* [How to delete time series](#how-to-delete-time-series)
* [Forced merge](#forced-merge)
* [How to export time series](#how-to-export-time-series)
  * [How to export data in native format](#how-to-export-data-in-native-format)
  * [How to export data in JSON line format](#how-to-export-data-in-json-line-format)
* [How to import time series data](#how-to-import-time-series-data)
  * [How to import data in native format](#how-to-import-data-in-native-format)
  * [How to import data in json line format](#how-to-import-data-in-json-line-format)
  * [How to import CSV data](#how-to-import-csv-data)
  * [How to import data in Prometheus exposition format](#how-to-import-data-in-prometheus-exposition-format)
* [Relabeling](#relabeling)
* [Federation](#federation)
* [Capacity planning](#capacity-planning)
@ -345,7 +350,7 @@ curl -d 'measurement,tag1=value1,tag2=value2 field1=123,field2=1.23' -X POST 'ht
```

An arbitrary number of lines delimited by '\n' (aka newline char) may be sent in a single request.
After that the data may be read via the [/api/v1/export](#how-to-export-time-series) endpoint:
After that the data may be read via the [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"measurement_.*"}'
@@ -381,7 +386,7 @@ echo "foo.bar.baz;tag1=value1;tag2=value2 123 `date +%s`" | nc -N localhost 2003

VictoriaMetrics sets the current time if the timestamp is omitted.
An arbitrary number of lines delimited by `\n` (aka newline char) may be sent in one go.
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
@@ -425,7 +430,7 @@ echo "put foo.bar.baz `date +%s` 123 tag1=value1 tag2=value2" | nc -N localhost
```

An arbitrary number of lines delimited by `\n` (aka newline char) may be sent in one go.
After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match=foo.bar.baz'
@@ -460,7 +465,7 @@ Example for writing multiple data points in a single request:
curl -H 'Content-Type: application/json' -d '[{"metric":"foo","value":45.34},{"metric":"bar","value":43}]' http://localhost:4242/api/put
```

After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]=x.y.z' -d 'match[]=foo' -d 'match[]=bar'
@@ -475,91 +480,6 @@ The `/api/v1/export` endpoint should return the following response:

```


### How to import CSV data

Arbitrary CSV data can be imported via `/api/v1/import/csv`. The CSV data is imported according to the provided `format` query arg.
The `format` query arg must contain a comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon:

```
<column_pos>:<type>:<context>
```

* `<column_pos>` is the position of the CSV column (field). Column numbering starts from 1. The order of parsing rules may be arbitrary.
* `<type>` describes the column type. Supported types are:
  * `metric` - the corresponding CSV column at `<column_pos>` contains the metric value, which must be an integer or floating-point number.
    The metric name is read from the `<context>`. CSV line must have at least a single metric field. Multiple metric fields per CSV line are OK.
  * `label` - the corresponding CSV column at `<column_pos>` contains the label value. The label name is read from the `<context>`.
    CSV line may have an arbitrary number of label fields. All these labels are attached to all the configured metrics.
  * `time` - the corresponding CSV column at `<column_pos>` contains the metric time. CSV line may contain either one or zero columns with time.
    If CSV line has no time, then the current time is used. The time is applied to all the configured metrics.
    The format of the time is configured via `<context>`. Supported time formats are:
    * `unix_s` - unix timestamp in seconds.
    * `unix_ms` - unix timestamp in milliseconds.
    * `unix_ns` - unix timestamp in nanoseconds. Note that VictoriaMetrics rounds the timestamp to milliseconds.
    * `rfc3339` - timestamp in [RFC3339](https://tools.ietf.org/html/rfc3339) format, i.e. `2006-01-02T15:04:05Z`.
    * `custom:<layout>` - custom layout for the timestamp. The `<layout>` may contain arbitrary time layout according to [time.Parse rules in Go](https://golang.org/pkg/time/#Parse).

Each request to `/api/v1/import/csv` may contain an arbitrary number of CSV lines.

Example for importing CSV data via `/api/v1/import/csv`:

```bash
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```

After that the data may be read via [/api/v1/export](#how-to-export-time-series) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```

The following response should be returned:

```jsonl
{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
{"metric":{"__name__":"ask","market":"NYSE","ticker":"GOOG"},"values":[1.23],"timestamps":[1583865146495]}
```

Extra labels may be added to all the imported lines by passing `extra_label=name=value` query args.
For example, `/api/v1/import/csv?extra_label=foo=bar` would add `"foo":"bar"` label to all the imported lines.

Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for details.


### How to import data in Prometheus exposition format

VictoriaMetrics accepts data in [Prometheus exposition format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)
via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:

```bash
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```

The following command may be used for verifying the imported data:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```

It should return something like the following:

```
{"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
```

Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics.

If the timestamp is missing in `<metric> <value> <timestamp>` Prometheus exposition format line, then the current timestamp is used during data ingestion.
It can be overridden by passing unix timestamp in *milliseconds* via `timestamp` query arg. For example, `/api/v1/import/prometheus?timestamp=1594370496905`.

VictoriaMetrics accepts an arbitrary number of lines in a single request to `/api/v1/import/prometheus`, i.e. it supports data streaming.

VictoriaMetrics also may scrape Prometheus targets - see [these docs](#how-to-scrape-prometheus-exporters-such-as-node-exporter).


### Prometheus querying API usage

VictoriaMetrics supports the following handlers from [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/):

@@ -660,8 +580,8 @@ Run `make package-victoria-metrics`. It builds `victoriametrics/victoria-metrics
`<PKG_TAG>` is an auto-generated image tag, which depends on the source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-victoria-metrics`.

The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable.
For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```bash
|
@ -741,13 +661,13 @@ It is better using `-retentionPeriod` command-line flag for efficient pruning of
|
|||
|
||||
### Forced merge
|
||||
|
||||
VictoriaMetrics performs [data compations in background](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)
|
||||
VictoriaMetrics performs [data compactions in background](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)
|
||||
in order to keep good performance characteristics when accepting new data. These compactions (merges) are performed independently on per-month partitions.
|
||||
This means that compactions are stopped for per-month partitions if no new data is ingested into these partitions.
|
||||
Sometimes it is necessary to trigger compactions for old partitions. For instance, in order to free up disk space occupied by [deleted time series](#how-to-delete-time-series).
|
||||
In this case forced compaction may be initiated on the specified per-month partition by sending request to `/internal/force_merge?partition_prefix=YYYY_MM`,
|
||||
where `YYYY_MM` is per-month partition name. For example, `http://victoriametrics:8428/internal/force_merge?partition_prefix=2020_08` would initiate forced
|
||||
merge for August 2020 partition. The call to `/internal/force_merge` returns immediately, while the corresponding forced merges continues running in background.
|
||||
merge for August 2020 partition. The call to `/internal/force_merge` returns immediately, while the corresponding forced merge continues running in background.
|
||||
|
||||
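
A minimal sketch of triggering such a merge with curl (the partition value is illustrative):

```bash
curl 'http://victoriametrics:8428/internal/force_merge?partition_prefix=2020_08'
```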

Forced merges may require additional CPU, disk IO and storage space resources. It is unnecessary to run forced merge under normal conditions,
since VictoriaMetrics automatically performs [optimal merges in background](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282)

@@ -756,11 +676,35 @@ when new data is ingested into it.

### How to export time series

VictoriaMetrics provides the following handlers for exporting data:

* `/api/v1/export/native` for exporting data in native binary format. This is the most efficient format for data export.
  See [these docs](#how-to-export-data-in-native-format) for details.
* `/api/v1/export` for exporting data in JSON line format. See [these docs](#how-to-export-data-in-json-line-format) for details.


#### How to export data in native format

Send a request to `http://<victoriametrics-addr>:8428/api/v1/export/native?match[]=<timeseries_selector_for_export>`,
where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to export. Use `{__name__!=""}` selector for fetching all the time series.

Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.

The exported data can be imported to VictoriaMetrics via [/api/v1/import/native](#how-to-import-data-in-native-format).
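
For example, the following request exports all the time series for August 2020 in native format into a local file (the time range values are illustrative; `start` and `end` also accept unix timestamps in seconds):

```bash
curl -G 'http://localhost:8428/api/v1/export/native' \
  -d 'match[]={__name__!=""}' \
  -d 'start=2020-08-01T00:00:00Z' \
  -d 'end=2020-09-01T00:00:00Z' > exported_data.bin
```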


#### How to export data in JSON line format

Consider [exporting data in native format](#how-to-export-data-in-native-format) if big amounts of data must be migrated between VictoriaMetrics instances,
since exporting in native format usually consumes lower amounts of CPU and memory resources, while the resulting exported data occupies lower amounts of disk space.

In order to export data in JSON line format, send a request to `http://<victoriametrics-addr>:8428/api/v1/export?match[]=<timeseries_selector_for_export>`,
where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to export. Use `{__name__!=""}` selector for fetching all the time series.
The response would contain all the data for the selected time series in [JSON streaming format](https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON).
Each JSON line contains samples for a single time series. An example output:

```jsonl
{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
@@ -770,8 +714,9 @@ Each JSON line would contain data for a single time series. An example output:

Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
unix timestamp in seconds or [RFC3339](https://www.ietf.org/rfc/rfc3339.txt) values.

Optional `max_rows_per_line` arg may be added to the request for limiting the maximum number of rows exported per each JSON line.
Optional `reduce_mem_usage=1` arg may be added to the request for reducing memory usage when exporting a big number of time series.
In this case the output may contain multiple lines with distinct samples for the same time series.
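
For example (the argument values are illustrative):

```bash
curl -G 'http://localhost:8428/api/v1/export' \
  -d 'match[]={__name__!=""}' \
  -d 'max_rows_per_line=1000' \
  -d 'reduce_mem_usage=1' > exported_data.jsonl
```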

Pass `Accept-Encoding: gzip` HTTP header in the request to `/api/v1/export` in order to reduce network bandwidth during exporting big amounts
of time series data. This enables gzip compression for the exported data. Example for exporting gzipped data:

@@ -782,22 +727,58 @@ curl -H 'Accept-Encoding: gzip' http://localhost:8428/api/v1/export -d 'match[]=

The maximum duration for each request to `/api/v1/export` is limited by `-search.maxExportDuration` command-line flag.
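
For instance, the limit may be raised at startup (a sketch; the binary path is illustrative and other flags are omitted):

```bash
/path/to/victoria-metrics-prod -search.maxExportDuration=1h
```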

Exported data can be imported via POST'ing it to [/api/v1/import](#how-to-import-data-in-json-line-format).


### How to import time series data

Time series data can be imported via any supported ingestion protocol:

* [Prometheus remote_write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write).
* Influx line protocol. See [these docs](#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf) for details.
* Graphite plaintext protocol. See [these docs](#how-to-send-data-from-graphite-compatible-agents-such-as-statsd) for details.
* OpenTSDB telnet put protocol. See [these docs](#sending-data-via-telnet-put-protocol) for details.
* OpenTSDB http `/api/put` protocol. See [these docs](#sending-opentsdb-data-via-http-apiput-requests) for details.
* `/api/v1/import` for importing data obtained from [/api/v1/export](#how-to-export-data-in-json-line-format).
  See [these docs](#how-to-import-data-in-json-line-format) for details.
* `/api/v1/import/native` for importing data obtained from [/api/v1/export/native](#how-to-export-data-in-native-format).
  See [these docs](#how-to-import-data-in-native-format) for details.
* `/api/v1/import/csv` for importing arbitrary CSV data. See [these docs](#how-to-import-csv-data) for details.
* `/api/v1/import/prometheus` for importing data in Prometheus exposition format. See [these docs](#how-to-import-data-in-prometheus-exposition-format) for details.


#### How to import data in native format

The most efficient protocol for importing data into VictoriaMetrics is `/api/v1/import/native`.
Example for importing data obtained via [/api/v1/export/native](#how-to-export-data-in-native-format):

```bash
# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin

# Import the data to <destination-victoriametrics>:
curl -X POST http://destination-victoriametrics:8428/api/v1/import/native -T exported_data.bin
```

Pass `Content-Encoding: gzip` HTTP request header to `/api/v1/import/native` for importing gzipped data:

```bash
# Export gzipped data from <source-victoriametrics>:
curl -H 'Accept-Encoding: gzip' http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin.gz

# Import gzipped data to <destination-victoriametrics>:
curl -X POST -H 'Content-Encoding: gzip' http://destination-victoriametrics:8428/api/v1/import/native -T exported_data.bin.gz
```

Extra labels may be added to all the imported time series by passing `extra_label=name=value` query args.
For example, `/api/v1/import/native?extra_label=foo=bar` would add `"foo":"bar"` label to all the imported time series.
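
For instance, the native import from the example above can be combined with an extra label (a sketch reusing the same exported file):

```bash
curl -X POST 'http://destination-victoriametrics:8428/api/v1/import/native?extra_label=foo=bar' -T exported_data.bin
```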

Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for details.


#### How to import data in JSON line format

Example for importing data obtained via [/api/v1/export](#how-to-export-data-in-json-line-format):

```bash
# Export the data from <source-victoriametrics>:

@@ -822,14 +803,108 @@ For example, `/api/v1/import?extra_label=foo=bar` would add `"foo":"bar"` label

Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for details.

Each request to `/api/v1/import` can load up to a single vCPU core on VictoriaMetrics. Import speed can be improved by splitting the original file into smaller parts
and importing them concurrently. Note that the original file must be split on newlines.
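
As an illustration (the file name and part size are examples), the exported file may be split on newlines and the parts imported in parallel:

```bash
# Split the exported file on newlines into ~100k-line parts:
split -l 100000 exported_data.jsonl import_part_

# Import the parts concurrently and wait for all uploads to finish:
for f in import_part_*; do
  curl -s -X POST http://localhost:8428/api/v1/import -T "$f" &
done
wait
```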

#### How to import CSV data

Arbitrary CSV data can be imported via `/api/v1/import/csv`. The CSV data is imported according to the provided `format` query arg.
The `format` query arg must contain a comma-separated list of parsing rules for CSV fields. Each rule consists of three parts delimited by a colon:

```
<column_pos>:<type>:<context>
```

* `<column_pos>` is the position of the CSV column (field). Column numbering starts from 1. The order of parsing rules may be arbitrary.
* `<type>` describes the column type. Supported types are:
  * `metric` - the corresponding CSV column at `<column_pos>` contains the metric value, which must be an integer or floating-point number.
    The metric name is read from the `<context>`. CSV line must have at least a single metric field. Multiple metric fields per CSV line are OK.
  * `label` - the corresponding CSV column at `<column_pos>` contains the label value. The label name is read from the `<context>`.
    CSV line may have an arbitrary number of label fields. All these labels are attached to all the configured metrics.
  * `time` - the corresponding CSV column at `<column_pos>` contains the metric time. CSV line may contain either one or zero columns with time.
    If CSV line has no time, then the current time is used. The time is applied to all the configured metrics.
    The format of the time is configured via `<context>`. Supported time formats are:
    * `unix_s` - unix timestamp in seconds.
    * `unix_ms` - unix timestamp in milliseconds.
    * `unix_ns` - unix timestamp in nanoseconds. Note that VictoriaMetrics rounds the timestamp to milliseconds.
    * `rfc3339` - timestamp in [RFC3339](https://tools.ietf.org/html/rfc3339) format, i.e. `2006-01-02T15:04:05Z`.
    * `custom:<layout>` - custom layout for the timestamp. The `<layout>` may contain arbitrary time layout according to [time.Parse rules in Go](https://golang.org/pkg/time/#Parse).

Each request to `/api/v1/import/csv` may contain an arbitrary number of CSV lines.

Example for importing CSV data via `/api/v1/import/csv`:

```bash
curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
curl -d "MSFT,3.21,1.67,NASDAQ" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker,4:label:market'
```

After that the data may be read via [/api/v1/export](#how-to-export-data-in-json-line-format) endpoint:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match[]={ticker!=""}'
```

The following response should be returned:

```jsonl
{"metric":{"__name__":"bid","market":"NASDAQ","ticker":"MSFT"},"values":[1.67],"timestamps":[1583865146520]}
{"metric":{"__name__":"bid","market":"NYSE","ticker":"GOOG"},"values":[4.56],"timestamps":[1583865146495]}
{"metric":{"__name__":"ask","market":"NASDAQ","ticker":"MSFT"},"values":[3.21],"timestamps":[1583865146520]}
{"metric":{"__name__":"ask","market":"NYSE","ticker":"GOOG"},"values":[1.23],"timestamps":[1583865146495]}
```

Extra labels may be added to all the imported lines by passing `extra_label=name=value` query args.
For example, `/api/v1/import/csv?extra_label=foo=bar` would add `"foo":"bar"` label to all the imported lines.

Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for details.
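
A `time` rule combines with the rules above in the same way. An illustrative sketch with an RFC3339 timestamp in the second column (the values are examples):

```bash
curl -d "GOOG,2020-08-01T00:00:00Z,1.23" 'http://localhost:8428/api/v1/import/csv?format=3:metric:bid,2:time:rfc3339,1:label:ticker'
```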


#### How to import data in Prometheus exposition format

VictoriaMetrics accepts data in [Prometheus exposition format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)
via `/api/v1/import/prometheus` path. For example, the following line imports a single line in Prometheus exposition format into VictoriaMetrics:

```bash
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'
```

The following command may be used for verifying the imported data:

```bash
curl -G 'http://localhost:8428/api/v1/export' -d 'match={__name__=~"foo"}'
```

It should return something like the following:

```
{"metric":{"__name__":"foo","bar":"baz"},"values":[123],"timestamps":[1594370496905]}
```

Extra labels may be added to all the imported metrics by passing `extra_label=name=value` query args.
For example, `/api/v1/import/prometheus?extra_label=foo=bar` would add `{foo="bar"}` label to all the imported metrics.

If the timestamp is missing in `<metric> <value> <timestamp>` Prometheus exposition format line, then the current timestamp is used during data ingestion.
It can be overridden by passing unix timestamp in *milliseconds* via `timestamp` query arg. For example, `/api/v1/import/prometheus?timestamp=1594370496905`.
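
For example, the same line can be imported with an explicit timestamp (the value is illustrative):

```bash
curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus?timestamp=1594370496905'
```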

VictoriaMetrics accepts an arbitrary number of lines in a single request to `/api/v1/import/prometheus`, i.e. it supports data streaming.

Note that it could be required to flush response cache after importing historical data. See [these docs](#backfilling) for details.

VictoriaMetrics also may scrape Prometheus targets - see [these docs](#how-to-scrape-prometheus-exporters-such-as-node-exporter).


### Relabeling

VictoriaMetrics supports Prometheus-compatible relabeling for all the ingested metrics if `-relabelConfig` command-line flag points
to a file containing a list of [relabel_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) entries.
Example contents for `-relabelConfig` file:

```yml
# relabel_config.yml
- target_label: cluster
  replacement: dev
- action: drop
  source_labels: [__meta_kubernetes_pod_container_init]
  regex: true
```

VictoriaMetrics provides the following extra actions for relabeling rules:

@@ -1100,6 +1175,7 @@ VictoriaMetrics also exposes currently running queries with their execution time
  has at least 20% of free space compared to the disk size. The remaining amount of free space
  can be [monitored](#monitoring) via `vm_free_disk_space_bytes` metric. The total size of data
  stored on the disk can be monitored via the sum of `vm_data_size_bytes` metrics.
  See also `vm_merge_need_free_disk_space` metrics, which are set to 1 if a background merge cannot be initiated due to free disk space shortage.

* If VictoriaMetrics doesn't work because certain parts are corrupted due to disk errors,
  then just remove directories with broken parts. This will recover VictoriaMetrics at the cost

@@ -25,7 +25,8 @@ to `vmagent` (like the ability to push metrics instead of pulling them). We did
* Graphite plaintext protocol if `-graphiteListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-graphite-compatible-agents-such-as-statsd).
* OpenTSDB telnet and http protocols if `-opentsdbListenAddr` command-line flag is set. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-send-data-from-opentsdb-compatible-agents).
* Prometheus remote write protocol via `http://<vmagent>:8429/api/v1/write`.
* JSON lines import protocol via `http://<vmagent>:8429/api/v1/import`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-data-in-json-line-format).
* Native data import protocol via `http://<vmagent>:8429/api/v1/import/native`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-data-in-native-format).
* Data in Prometheus exposition format. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-data-in-prometheus-exposition-format) for details.
* Arbitrary CSV data via `http://<vmagent>:8429/api/v1/import/csv`. See [these docs](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-import-csv-data).
* Can replicate collected metrics simultaneously to multiple remote storage systems.

@@ -278,7 +279,7 @@ Run `make package-vmagent`. It builds `victoriametrics/vmagent:<PKG_TAG>` docker
`<PKG_TAG>` is an auto-generated image tag, which depends on the source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmagent`.

The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```bash
@@ -113,7 +113,7 @@ Run `make package-vmauth`. It builds `victoriametrics/vmauth:<PKG_TAG>` docker i
`<PKG_TAG>` is an auto-generated image tag, which depends on the source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmauth`.

The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```bash
@@ -6,12 +6,12 @@ Supported storage systems for backups:

* [GCS](https://cloud.google.com/storage/). Example: `gcs://<bucket>/<path/to/backup>`
* [S3](https://aws.amazon.com/s3/). Example: `s3://<bucket>/<path/to/backup>`
* Any S3-compatible storage such as [MinIO](https://github.com/minio/minio), [Ceph](https://docs.ceph.com/docs/mimic/radosgw/s3/) or [Swift](https://www.swiftstack.com/docs/admin/middleware/s3_middleware.html). See [these docs](#advanced-usage) for details.
* Local filesystem. Example: `fs://</absolute/path/to/backup>`

`vmbackup` supports incremental and full backups. Incremental backups are created automatically if the destination path already contains data from the previous backup.
Full backups can be sped up with `-origin` pointing to an already existing backup on the same remote storage. In this case `vmbackup` makes a server-side copy for the shared
data between the existing backup and the new backup. It saves time and costs on data transfer.

Backup process can be interrupted at any time. It is automatically resumed from the interruption point when restarting `vmbackup` with the same args.

@@ -35,8 +35,8 @@ vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-

* `</path/to/victoria-metrics-data>` - path to VictoriaMetrics data pointed by `-storageDataPath` command-line flag in single-node VictoriaMetrics or in cluster `vmstorage`.
  There is no need to stop VictoriaMetrics for creating backups, since they are performed from immutable [instant snapshots](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots).
* `<local-snapshot>` is the snapshot to back up. See [how to create instant snapshots](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/README.md#how-to-work-with-snapshots).
* `<bucket>` is an already existing name for [GCS bucket](https://cloud.google.com/storage/docs/creating-buckets).
* `<path/to/new/backup>` is the destination path where the new backup will be placed.

@@ -49,13 +49,13 @@ with the following command:

vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/new/backup> -origin=gcs://<bucket>/<path/to/existing/backup>
```

It saves time and network bandwidth costs by performing server-side copy for the shared data from the `-origin` to `-dst`.


#### Incremental backups

Incremental backups are performed if `-dst` points to an already existing backup. In this case only new data is uploaded to remote storage.
It saves time and network bandwidth costs when working with big backups:

```
vmbackup -storageDataPath=</path/to/victoria-metrics-data> -snapshotName=<local-snapshot> -dst=gcs://<bucket>/<path/to/existing/backup>

@@ -100,16 +100,16 @@ The backup algorithm is the following:
2. Determine files in `-dst`, which are missing in `-snapshotName`, and delete them. These are usually small files, which are already merged into bigger files in the snapshot.
3. Determine files from `-snapshotName`, which are missing in `-dst`. These are usually small new files and bigger merged files.
4. Determine files from step 3, which exist in the `-origin`, and perform server-side copy of these files from `-origin` to `-dst`.
   These are usually the biggest and the oldest files, which are shared between backups.
5. Upload the remaining files from step 3 from `-snapshotName` to `-dst`.

The algorithm splits source files into 100 MB chunks in the backup. Each chunk is stored as a separate file in the backup.
Such splitting minimizes the amounts of data to re-transfer after temporary errors.

`vmbackup` relies on [instant snapshot](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282) properties:

- All the files in the snapshot are immutable.
- Old files are periodically merged into new files.
- Smaller files have higher probability to be merged.
- Consecutive snapshots share many identical files.

@@ -129,7 +129,45 @@ See [this article](https://medium.com/@valyala/speeding-up-backups-for-big-time-

### Advanced usage

* Obtaining credentials from a file.

  Add flag `-credsFilePath=/etc/credentials` with the following content:

  for S3 (AWS, MinIO or other S3-compatible storages):

  ```
  [default]
  aws_access_key_id=theaccesskey
  aws_secret_access_key=thesecretaccesskeyvalue
  ```

  for GCS (Google Cloud Storage):

  ```json
  {
    "type": "service_account",
    "project_id": "project-id",
    "private_key_id": "key-id",
    "private_key": "-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n",
    "client_email": "service-account-email",
    "client_id": "client-id",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service-account-email"
  }
  ```

* Usage with S3 custom url endpoint. It is possible to use `vmbackup` with S3-compatible storages like MinIO, Cloudian, etc.
  You have to add a custom url endpoint via flag:

  ```
  # for minio
  -customS3Endpoint=http://localhost:9000

  # for aws gov region
  -customS3Endpoint=https://s3-fips.us-gov-west-1.amazonaws.com
  ```

* Run `vmbackup -help` in order to see all the available options:

  ```
  -concurrency int
@@ -207,7 +245,7 @@ Run `make package-vmbackup`. It builds `victoriametrics/vmbackup:<PKG_TAG>` dock
`<PKG_TAG>` is an auto-generated image tag, which depends on the source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmbackup`.

The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```bash
@@ -21,7 +21,7 @@ vmrestore -src=gcs://<bucket>/<path/to/backup> -storageDataPath=<local/path/to/r
* `<local/path/to/restore>` is the path to the folder where data will be restored. This folder must be passed
  to VictoriaMetrics in `-storageDataPath` command-line flag after the restore process is complete.

The original `-storageDataPath` directory may contain old files. They will be substituted by the files from backup,
i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/questions/476041/how-do-i-make-rsync-delete-files-that-have-been-deleted-from-the-source-folder).

@@ -33,7 +33,44 @@ i.e. the end result would be similar to [rsync --delete](https://askubuntu.com/q

### Advanced usage

* Obtaining credentials from a file.

  Add flag `-credsFilePath=/etc/credentials` with the following content:

  for S3 (AWS, MinIO or other S3-compatible storages):

  ```
  [default]
  aws_access_key_id=theaccesskey
  aws_secret_access_key=thesecretaccesskeyvalue
  ```

  for GCS (Google Cloud Storage):

  ```json
  {
    "type": "service_account",
    "project_id": "project-id",
    "private_key_id": "key-id",
    "private_key": "-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n",
    "client_email": "service-account-email",
    "client_id": "client-id",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/service-account-email"
  }
  ```

* Usage with S3 custom url endpoint. It is possible to use `vmrestore` with S3-compatible storages like MinIO, Cloudian, etc.
  You have to add a custom url endpoint via flag:

  ```
  # for minio:
  -customS3Endpoint=http://localhost:9000

  # for aws gov region
  -customS3Endpoint=https://s3-fips.us-gov-west-1.amazonaws.com
  ```

* Run `vmrestore -help` in order to see all the available options:

  ```
  -concurrency int
@@ -104,7 +141,7 @@ Run `make package-vmrestore`. It builds `victoriametrics/vmrestore:<PKG_TAG>` do
`<PKG_TAG>` is an auto-generated image tag, which depends on the source code in the repository.
The `<PKG_TAG>` may be manually set via `PKG_TAG=foobar make package-vmrestore`.

The base docker image is [alpine](https://hub.docker.com/_/alpine) but it is possible to use any other base image
by setting it via `<ROOT_IMAGE>` environment variable. For example, the following command builds the image on top of [scratch](https://hub.docker.com/_/scratch) image:

```bash
go.mod
@@ -9,7 +9,7 @@ require (
  github.com/VictoriaMetrics/fasthttp v1.0.7
  github.com/VictoriaMetrics/metrics v1.12.3
  github.com/VictoriaMetrics/metricsql v0.6.0
  github.com/aws/aws-sdk-go v1.34.29
  github.com/aws/aws-sdk-go v1.34.33
  github.com/cespare/xxhash/v2 v2.1.1
  github.com/go-kit/kit v0.10.0
  github.com/golang/snappy v0.0.2

@@ -22,11 +22,12 @@ require (
  github.com/valyala/histogram v1.1.2
  github.com/valyala/quicktemplate v1.6.3
  go.uber.org/atomic v1.7.0 // indirect
  golang.org/x/net v0.0.0-20200927032502-5d4f70055728 // indirect
  golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43
  golang.org/x/sys v0.0.0-20200922070232-aee5d888a860
  golang.org/x/tools v0.0.0-20200923053713-ba800b16d873 // indirect
  golang.org/x/sys v0.0.0-20200926100807-9d91bd62050c
  golang.org/x/tools v0.0.0-20200928182047-19e03678916f // indirect
  google.golang.org/api v0.32.0
  google.golang.org/genproto v0.0.0-20200921165018-b9da36f5f452 // indirect
  google.golang.org/genproto v0.0.0-20200925023002-c2d885f95484 // indirect
  gopkg.in/yaml.v2 v2.3.0
)

go.sum
@@ -11,42 +11,29 @@ cloud.google.com/go v0.51.0/go.mod h1:hWtGJ6gnXH+KgDv+V0zFGDvpi07n3z8ZNj3T1RW0Gc
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go v0.56.0 h1:WRz29PgAsVEyPSDHyk+0fpEkwEFyfhHn+JbksT6gIL4=
cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
cloud.google.com/go v0.57.0 h1:EpMNVUorLiZIELdMZbCYX/ByTFCdoYopYAGxaGVz9ms=
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0 h1:RmDygqvj27Zf3fCQjQRtLyC7KwFcHkeJitcO0OoGOcA=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0 h1:Dg9iHVQfrhq82rUNu9ZxUDrJLaxFUe/HlCVaLyRruq8=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go v0.66.0 h1:DZeAkuQGQqnm9Xv36SbMJEU8aFBz4wL04UpMWPWwjzg=
cloud.google.com/go v0.66.0/go.mod h1:dgqGAjKCDxyhGTtC9dAREQGUJpkceNm1yt590Qno0Ko=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0 h1:xE3CPsOgttP4ACBePh79zTKALtXwn/Edhcr16R5hMWU=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0 h1:a/O/bK/vWrYGOTFtH8di4rBxMZnmkjy+Y5LxpDwo+dA=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0 h1:PQcPefKFdaIzjQFbiyOgAqyx8q5djaE7x9Sqe712DPA=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/bigtable v1.2.0/go.mod h1:JcVAOl45lrTmQfLj7T6TxyMzIN/3FGGcFm+2xVAli2o=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0 h1:/May9ojXjRkPBNVrq+oWLqmWCkr4OU5uRY29bu0mRyQ=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0 h1:Lpy6hKgdcl7a3WGSfJIFmxmcdjSpP6OmBEfcOv1Y680=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/pubsub v1.3.1 h1:ukjixP1wl0LpnZ6LWtZJ0mX5tBmjp1f8Sqer8Z2OMUU=
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0 h1:UDpwYIwla4jHGzZJaEJYx1tOejbgSoNqsAfHAUYe2r8=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0 h1:86K1Gel7BQ9/WmNWn7dTKMvTLFzwtBe5FNqYbi9X35g=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0 h1:STgFzyU5/8miMl0//zKh2aQeTyeaUH3WN9bSUiJ09bA=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.12.0 h1:4y3gHptW1EHVtcPAVE0eBBlFuGqEejTTG3KdIE0lUX4=
cloud.google.com/go/storage v1.12.0/go.mod h1:fFLk2dp2oAhDz8QFKwqrjdJvxSp/W2g7nillojlL5Ho=

@@ -72,7 +59,6 @@ github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6L
github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DATA-DOG/go-sqlmock v1.3.3/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=

@@ -93,7 +79,6 @@ github.com/VictoriaMetrics/fastcache v1.5.7 h1:4y6y0G8PRzszQUYIQHHssv/jgPHAb5qQu
github.com/VictoriaMetrics/fastcache v1.5.7/go.mod h1:ptDBkNMQI4RtmVo8VS/XwRY6RoTu1dAWCbrk+6WsEM8=
github.com/VictoriaMetrics/fasthttp v1.0.7 h1:9RntF8jE9z+ooyXy0tb3Pl76jan4DLaCQoCQFp/cIUE=
github.com/VictoriaMetrics/fasthttp v1.0.7/go.mod h1:eaGv8oDDOWE8JQa6GgBgpCwjJhRxavmprtf2oWw8b3o=
github.com/VictoriaMetrics/metrics v1.12.2 h1:SG8iAmqavDNuh7GIdHPoGHUhDL23KeKfvSZSozucNeA=
github.com/VictoriaMetrics/metrics v1.12.2/go.mod h1:Z1tSfPfngDn12bTfZSCqArT3OPY3u88J12hSoOhuiRE=
github.com/VictoriaMetrics/metrics v1.12.3 h1:Fe6JHC6MSEKa+BtLhPN8WIvS+HKPzMc2evEpNeCGy7I=
github.com/VictoriaMetrics/metrics v1.12.3/go.mod h1:Z1tSfPfngDn12bTfZSCqArT3OPY3u88J12hSoOhuiRE=

@@ -128,8 +113,8 @@ github.com/asaskevich/govalidator v0.0.0-20200108200545-475eaeb16496/go.mod h1:o
github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.34.9/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/aws/aws-sdk-go v1.34.29 h1:4Yw8eC4nCXiIVmHJO5PD4oh0vI/df5o6cYTVzFV7vWA=
github.com/aws/aws-sdk-go v1.34.29/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.34.33 h1:ymkFm0rNPEOlgjyX3ojEd4zqzW6kGICBkqWs7LqgHtU=
github.com/aws/aws-sdk-go v1.34.33/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=

@@ -340,7 +325,6 @@ github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QD
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.2 h1:aeE13tS0IiQgFjYdoL8qN3K1N2bXXtI6Vi51/y7BpMw=
github.com/golang/snappy v0.0.2/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=

@@ -353,7 +337,6 @@ github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMyw
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1 h1:JFrFEBb2xKufg6XkJsJr+WbKb4FQlURi5RUcBveYu9k=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2 h1:X2ev0eStA3AbceY54o37/0PQ/UWqKEiiO2dKL5OPaFM=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=

@@ -483,7 +466,6 @@ github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxv
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=

@@ -739,7 +721,6 @@ go.opencensus.io v0.22.4 h1:LYy1Hy3MJdrCdMwwzxA/dRok4ejH+RwNGbuoD9fCjto=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=

@@ -845,17 +826,15 @@ golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200602114024-627f9648deb9/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381 h1:VXak5I6aEWmAXeQjA+QSZzlgNrpq9mjcfDemuexIKsU=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202 h1:VvcQYSHwXgi7W+TpUR6A9g6Up98WAHf3f/ulnJ62IyA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200904194848-62affa334b73 h1:MXfv8rhZWmFeqX3GNZRsd6vOLoaCHjYEX3qkRo3YBUA=
golang.org/x/net v0.0.0-20200904194848-62affa334b73/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200927032502-5d4f70055728 h1:5wtQIAulKU5AbLQOkjxl32UufnIOqgBX72pS0AV14H0=
golang.org/x/net v0.0.0-20200927032502-5d4f70055728/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43 h1:ld7aEMNHoBnnDAX15v1T6z31v8HwR2A9FYOuAhWqkwc=
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=

@@ -929,8 +908,8 @@ golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200821140526-fda516888d29/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200828194041-157a740278f4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200922070232-aee5d888a860 h1:YEu4SMq7D0cmT7CBbXfcH0NZeuChAXwsHe/9XueUO6o=
golang.org/x/sys v0.0.0-20200922070232-aee5d888a860/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200926100807-9d91bd62050c h1:38q6VNPWR010vN82/SB121GujZNIfAUb4YttE2rhGuc=
golang.org/x/sys v0.0.0-20200926100807-9d91bd62050c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=

@@ -1011,8 +990,8 @@ golang.org/x/tools v0.0.0-20200828161849-5deb26317202/go.mod h1:njjCfa9FT2d7l9Bc
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20200915173823-2db8f0ff891c/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20200918232735-d647fc253266/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20200923053713-ba800b16d873 h1:Q5Sq7Lt0bkn6Ax1NAraQhKRN7xxxy1LV4guxsyFHZx4=
golang.org/x/tools v0.0.0-20200923053713-ba800b16d873/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20200928182047-19e03678916f h1:VwGa2Wf+rHGIxvsssCkUNIyFv8jQY0VCBCNWtikoWq0=
golang.org/x/tools v0.0.0-20200928182047-19e03678916f/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

@ -1040,9 +1019,7 @@ google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/
|
|||
google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
|
||||
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
|
||||
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
|
||||
google.golang.org/api v0.30.0 h1:yfrXXP61wVuLb0vBcG6qaOoIoqYEzOQS8jum51jkv2w=
|
||||
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
|
||||
google.golang.org/api v0.31.0 h1:1w5Sz/puhxFo9lTtip2n47k7toB/U2nCqOKNHd3Yrbo=
|
||||
google.golang.org/api v0.31.0/go.mod h1:CL+9IBCa2WWU6gRuBWaKqGWLFFwbEUXkfeMkHLQWYWo=
|
||||
google.golang.org/api v0.32.0 h1:Le77IccnTqEa8ryp9wIpX5W3zYm7Gf9LhOp9PHcwFts=
|
||||
google.golang.org/api v0.32.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
|
||||
|
@ -1089,12 +1066,11 @@ google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6D
|
|||
google.golang.org/genproto v0.0.0-20200815001618-f69a88009b70/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200831141814-d751682dd103/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d h1:92D1fum1bJLKSdr11OJ+54YeCMCGYIygTA7R/YZxH5M=
|
||||
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200914193844-75d14daec038/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200921151605-7abf4a1a14d5/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200921165018-b9da36f5f452 h1:2S8dcPkOK9QyAAKa46Cgl//VOCS6ZOsY+iwD3bb9Phg=
|
||||
google.golang.org/genproto v0.0.0-20200921165018-b9da36f5f452/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200925023002-c2d885f95484 h1:Rr9EZdYRq2WLckzJQVtN3ISKoP7dvgwi7jbglILNZ34=
|
||||
google.golang.org/genproto v0.0.0-20200925023002-c2d885f95484/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
|
||||
|
@ -1111,9 +1087,7 @@ google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8
|
|||
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
|
||||
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
|
||||
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
|
||||
google.golang.org/grpc v1.31.0 h1:T7P4R73V3SSDPhH7WW7ATbfViLtmamH0DKrP3f9AuDI=
|
||||
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
|
||||
google.golang.org/grpc v1.31.1 h1:SfXqXS5hkufcdZ/mHtYCh53P2b+92WQq/DZcKLgsFRs=
|
||||
google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
|
||||
google.golang.org/grpc v1.32.0 h1:zWTV+LMdc3kaiJMSTOFz2UgSBgx8RNQoTGiZu3fR9S0=
|
||||
google.golang.org/grpc v1.32.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
|
||||
|
@ -1130,7 +1104,6 @@ google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4
|
|||
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
|
|
|
@@ -65,9 +65,7 @@ func getOnlineCPUCount() float64 {
	if n <= 0 {
		return -1
	}
	// Add a half of CPU core, since it looks like actual cores is usually bigger than online cores.
	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/685#issuecomment-674423728
	return n + 0.5
	return n
}

func countCPUs(data string) int {
30
lib/filestream/filestream_openbsd.go
Normal file

@@ -0,0 +1,30 @@
package filestream

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func (st *streamTracker) adviseDontNeed(n int, fdatasync bool) error {
	st.length += uint64(n)
	if st.fd == 0 {
		return nil
	}
	if st.length < dontNeedBlockSize {
		return nil
	}
	blockSize := st.length - (st.length % dontNeedBlockSize)
	if fdatasync {
		if err := unix.Fsync(int(st.fd)); err != nil {
			return fmt.Errorf("unix.Fsync error: %w", err)
		}
	}
	st.offset += blockSize
	st.length -= blockSize
	return nil
}

func (st *streamTracker) close() error {
	return nil
}
@@ -338,6 +338,5 @@ func mustGetFreeSpace(path string) uint64 {
	if err := unix.Fstatfs(int(fd), &stat); err != nil {
		logger.Panicf("FATAL: cannot determine free disk space on %q: %s", path, err)
	}
	freeSpace := uint64(stat.Bavail) * uint64(stat.Bsize)
	return freeSpace
	return freeSpace(stat)
}
9
lib/fs/fs_nix.go
Normal file

@@ -0,0 +1,9 @@
// +build linux darwin freebsd

package fs

import "golang.org/x/sys/unix"

func freeSpace(stat unix.Statfs_t) uint64 {
	return uint64(stat.Bavail) * uint64(stat.Bsize)
}
9
lib/fs/fs_openbsd.go
Normal file

@@ -0,0 +1,9 @@
package fs

import (
	"golang.org/x/sys/unix"
)

func freeSpace(stat unix.Statfs_t) uint64 {
	return uint64(stat.F_bavail) * uint64(stat.F_bsize)
}
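The freeSpace helper is split across fs_nix.go and fs_openbsd.go because OpenBSD's unix.Statfs_t names its fields F_bavail/F_bsize, while Linux, Darwin and FreeBSD use Bavail/Bsize. A minimal sketch of how a caller combines unix.Statfs with the per-OS helper (availableBytes is a hypothetical name, not part of this commit):

package fs

import "golang.org/x/sys/unix"

// availableBytes returns the number of bytes available to unprivileged
// users on the filesystem containing path.
func availableBytes(path string) (uint64, error) {
	var stat unix.Statfs_t
	if err := unix.Statfs(path, &stat); err != nil {
		return 0, err
	}
	// freeSpace is the per-OS helper defined above.
	return freeSpace(stat), nil
}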
@@ -15,7 +15,6 @@ func logAllFlags() {
		lname := strings.ToLower(f.Name)
		value := f.Value.String()
		if flagutil.IsSecretFlag(lname) {
			// Do not expose passwords and keys to prometheus.
			value = "secret"
		}
		Infof("flag %q = %q", f.Name, value)
53
lib/protoparser/common/unmarshal_work.go
Normal file

@@ -0,0 +1,53 @@
package common

import (
	"runtime"
	"sync"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)

// ScheduleUnmarshalWork schedules uw to run in the worker pool.
//
// It is expected that StartUnmarshalWorkers is already called.
func ScheduleUnmarshalWork(uw UnmarshalWork) {
	unmarshalWorkCh <- uw
}

// UnmarshalWork is a unit of unmarshal work.
type UnmarshalWork interface {
	// Unmarshal must implement CPU-bound unmarshal work.
	Unmarshal()
}

// StartUnmarshalWorkers starts unmarshal workers.
func StartUnmarshalWorkers() {
	if unmarshalWorkCh != nil {
		logger.Panicf("BUG: it looks like StartUnmarshalWorkers() has already been called without StopUnmarshalWorkers()")
	}
	gomaxprocs := runtime.GOMAXPROCS(-1)
	unmarshalWorkCh = make(chan UnmarshalWork, 2*gomaxprocs)
	unmarshalWorkersWG.Add(gomaxprocs)
	for i := 0; i < gomaxprocs; i++ {
		go func() {
			defer unmarshalWorkersWG.Done()
			for uw := range unmarshalWorkCh {
				uw.Unmarshal()
			}
		}()
	}
}

// StopUnmarshalWorkers stops unmarshal workers.
//
// No more calls to ScheduleUnmarshalWork are allowed after calling StopUnmarshalWorkers.
func StopUnmarshalWorkers() {
	close(unmarshalWorkCh)
	unmarshalWorkersWG.Wait()
	unmarshalWorkCh = nil
}

var (
	unmarshalWorkCh    chan UnmarshalWork
	unmarshalWorkersWG sync.WaitGroup
)
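A minimal sketch of the intended call sequence for this worker-pool API (lineWork is a hypothetical UnmarshalWork implementation used for illustration only; the real implementations are the per-protocol unmarshalWork types below):

package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
)

// lineWork implements common.UnmarshalWork for this sketch.
type lineWork struct {
	line string
}

// Unmarshal runs on one of the worker goroutines started by StartUnmarshalWorkers.
func (w *lineWork) Unmarshal() {
	fmt.Println("parsing:", w.line)
}

func main() {
	common.StartUnmarshalWorkers()
	for _, s := range []string{"foo 1", "bar 2"} {
		common.ScheduleUnmarshalWork(&lineWork{line: s})
	}
	// StopUnmarshalWorkers closes the work channel and blocks until all
	// scheduled work has been drained by the workers.
	common.StopUnmarshalWorkers()
}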
@@ -1,6 +1,7 @@
package csvimport

import (
	"bufio"
	"flag"
	"fmt"
	"io"
@@ -10,6 +11,7 @@ import (
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/metrics"
)
@@ -40,23 +42,24 @@ func ParseStream(req *http.Request, callback func(rows []Row) error) error {
		defer common.PutGzipReader(zr)
		r = zr
	}

	ctx := getStreamContext()
	ctx := getStreamContext(r)
	defer putStreamContext(ctx)
	for ctx.Read(r, cds) {
		if err := callback(ctx.Rows.Rows); err != nil {
			return err
		}
	for ctx.Read() {
		uw := getUnmarshalWork()
		uw.callback = callback
		uw.cds = cds
		uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
		common.ScheduleUnmarshalWork(uw)
	}
	return ctx.Error()
}

func (ctx *streamContext) Read(r io.Reader, cds []ColumnDescriptor) bool {
func (ctx *streamContext) Read() bool {
	readCalls.Inc()
	if ctx.err != nil {
		return false
	}
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(r, ctx.reqBuf, ctx.tailBuf)
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(ctx.br, ctx.reqBuf, ctx.tailBuf)
	if ctx.err != nil {
		if ctx.err != io.EOF {
			readErrors.Inc()
@@ -64,10 +67,84 @@ func (ctx *streamContext) Read(r io.Reader, cds []ColumnDescriptor) bool {
		}
		return false
	}
	ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf), cds)
	rowsRead.Add(len(ctx.Rows.Rows))
	return true
}

	rows := ctx.Rows.Rows
var (
	readCalls  = metrics.NewCounter(`vm_protoparser_read_calls_total{type="csvimport"}`)
	readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="csvimport"}`)
	rowsRead   = metrics.NewCounter(`vm_protoparser_rows_read_total{type="csvimport"}`)
)

type streamContext struct {
	br      *bufio.Reader
	reqBuf  []byte
	tailBuf []byte
	err     error
}

func (ctx *streamContext) Error() error {
	if ctx.err == io.EOF {
		return nil
	}
	return ctx.err
}

func (ctx *streamContext) reset() {
	ctx.br.Reset(nil)
	ctx.reqBuf = ctx.reqBuf[:0]
	ctx.tailBuf = ctx.tailBuf[:0]
	ctx.err = nil
}

func getStreamContext(r io.Reader) *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		ctx.br.Reset(r)
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			ctx := v.(*streamContext)
			ctx.br.Reset(r)
			return ctx
		}
		return &streamContext{
			br: bufio.NewReaderSize(r, 64*1024),
		}
	}
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))

type unmarshalWork struct {
	rows     Rows
	callback func(rows []Row) error
	cds      []ColumnDescriptor
	reqBuf   []byte
}

func (uw *unmarshalWork) reset() {
	uw.rows.Reset()
	uw.callback = nil
	uw.cds = nil
	uw.reqBuf = uw.reqBuf[:0]
}

// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
	uw.rows.Unmarshal(bytesutil.ToUnsafeString(uw.reqBuf), uw.cds)
	rows := uw.rows.Rows
	rowsRead.Add(len(rows))

	// Set missing timestamps
	currentTs := time.Now().UnixNano() / 1e6
@@ -86,56 +163,25 @@ func (ctx *streamContext) Read(r io.Reader, cds []ColumnDescriptor) bool {
		}
	}

	return true
}

var (
	readCalls  = metrics.NewCounter(`vm_protoparser_read_calls_total{type="csvimport"}`)
	readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="csvimport"}`)
	rowsRead   = metrics.NewCounter(`vm_protoparser_rows_read_total{type="csvimport"}`)
)

type streamContext struct {
	Rows    Rows
	reqBuf  []byte
	tailBuf []byte
	err     error
}

func (ctx *streamContext) Error() error {
	if ctx.err == io.EOF {
		return nil
	if err := uw.callback(rows); err != nil {
		logger.Errorf("error when processing imported data: %s", err)
		putUnmarshalWork(uw)
		return
	}
	return ctx.err
	putUnmarshalWork(uw)
}

func (ctx *streamContext) reset() {
	ctx.Rows.Reset()
	ctx.reqBuf = ctx.reqBuf[:0]
	ctx.tailBuf = ctx.tailBuf[:0]
	ctx.err = nil
}

func getStreamContext() *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			return v.(*streamContext)
		}
		return &streamContext{}
func getUnmarshalWork() *unmarshalWork {
	v := unmarshalWorkPool.Get()
	if v == nil {
		return &unmarshalWork{}
	}
	return v.(*unmarshalWork)
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
func putUnmarshalWork(uw *unmarshalWork) {
	uw.reset()
	unmarshalWorkPool.Put(uw)
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
var unmarshalWorkPool sync.Pool
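The getStreamContext/putStreamContext pair above (repeated in the graphite, influx, opentsdb and prometheus parsers below) combines a bounded channel with a sync.Pool: the channel keeps up to GOMAXPROCS hot contexts and, unlike sync.Pool, is never drained by the garbage collector, while the sync.Pool absorbs bursts beyond the channel capacity. Note also the reqBuf swap in ParseStream: the unmarshalWork takes ownership of the freshly read buffer while the streamContext receives the worker's previous, already processed buffer, so steady-state parsing allocates nothing per block. A self-contained sketch of the pooling pattern (hybridPool and item are illustrative names, not from this commit):

package main

import "sync"

// item stands in for the pooled object (streamContext in the parsers).
type item struct{ buf []byte }

// hybridPool: a bounded channel as a GC-proof fast path, sync.Pool as overflow.
type hybridPool struct {
	ch   chan *item
	pool sync.Pool
}

func newHybridPool(size int) *hybridPool {
	return &hybridPool{ch: make(chan *item, size)}
}

func (p *hybridPool) get() *item {
	select {
	case it := <-p.ch:
		return it
	default:
		if v := p.pool.Get(); v != nil {
			return v.(*item)
		}
		return &item{}
	}
}

func (p *hybridPool) put(it *item) {
	it.buf = it.buf[:0] // reset before pooling
	select {
	case p.ch <- it:
	default:
		p.pool.Put(it)
	}
}

func main() {
	p := newHybridPool(4)
	it := p.get()
	it.buf = append(it.buf, "payload"...)
	p.put(it)
}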
@@ -184,13 +184,26 @@ func TestRowsUnmarshalSuccess(t *testing.T) {
func Test_streamContext_Read(t *testing.T) {
	f := func(s string, rowsExpected *Rows) {
		t.Helper()
		ctx := &streamContext{}
		ctx.Read(strings.NewReader(s))
		if len(ctx.Rows.Rows) != len(rowsExpected.Rows) {
			t.Fatalf("different len of expected rows;\ngot\n%+v;\nwant\n%+v", ctx.Rows, rowsExpected.Rows)
		ctx := getStreamContext(strings.NewReader(s))
		if !ctx.Read() {
			t.Fatalf("expecting successful read")
		}
		if !reflect.DeepEqual(ctx.Rows.Rows, rowsExpected.Rows) {
			t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", ctx.Rows.Rows, rowsExpected.Rows)
		uw := getUnmarshalWork()
		callbackCalls := 0
		uw.callback = func(rows []Row) error {
			callbackCalls++
			if len(rows) != len(rowsExpected.Rows) {
				t.Fatalf("different len of expected rows;\ngot\n%+v;\nwant\n%+v", rows, rowsExpected.Rows)
			}
			if !reflect.DeepEqual(rows, rowsExpected.Rows) {
				t.Fatalf("unexpected rows;\ngot\n%+v;\nwant\n%+v", rows, rowsExpected.Rows)
			}
			return nil
		}
		uw.reqBuf = append(uw.reqBuf[:0], ctx.reqBuf...)
		uw.Unmarshal()
		if callbackCalls != 1 {
			t.Fatalf("unexpected number of callback calls; got %d; want 1", callbackCalls)
		}
	}
@@ -1,6 +1,7 @@
package graphite

import (
	"bufio"
	"flag"
	"fmt"
	"io"
@@ -10,6 +11,7 @@ import (

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/metrics"
)
@@ -25,23 +27,24 @@ var (
//
// callback shouldn't hold rows after returning.
func ParseStream(r io.Reader, callback func(rows []Row) error) error {
	ctx := getStreamContext()
	ctx := getStreamContext(r)
	defer putStreamContext(ctx)

	for ctx.Read(r) {
		if err := callback(ctx.Rows.Rows); err != nil {
			return err
		}
	for ctx.Read() {
		uw := getUnmarshalWork()
		uw.callback = callback
		uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
		common.ScheduleUnmarshalWork(uw)
	}
	return ctx.Error()
}

func (ctx *streamContext) Read(r io.Reader) bool {
func (ctx *streamContext) Read() bool {
	readCalls.Inc()
	if ctx.err != nil {
		return false
	}
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(r, ctx.reqBuf, ctx.tailBuf)
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(ctx.br, ctx.reqBuf, ctx.tailBuf)
	if ctx.err != nil {
		if ctx.err != io.EOF {
			readErrors.Inc()
@@ -49,10 +52,82 @@ func (ctx *streamContext) Read(r io.Reader) bool {
		}
		return false
	}
	ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf))
	rowsRead.Add(len(ctx.Rows.Rows))
	return true
}

	rows := ctx.Rows.Rows
type streamContext struct {
	br      *bufio.Reader
	reqBuf  []byte
	tailBuf []byte
	err     error
}

func (ctx *streamContext) Error() error {
	if ctx.err == io.EOF {
		return nil
	}
	return ctx.err
}

func (ctx *streamContext) reset() {
	ctx.br.Reset(nil)
	ctx.reqBuf = ctx.reqBuf[:0]
	ctx.tailBuf = ctx.tailBuf[:0]
	ctx.err = nil
}

var (
	readCalls  = metrics.NewCounter(`vm_protoparser_read_calls_total{type="graphite"}`)
	readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="graphite"}`)
	rowsRead   = metrics.NewCounter(`vm_protoparser_rows_read_total{type="graphite"}`)
)

func getStreamContext(r io.Reader) *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		ctx.br.Reset(r)
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			ctx := v.(*streamContext)
			ctx.br.Reset(r)
			return ctx
		}
		return &streamContext{
			br: bufio.NewReaderSize(r, 64*1024),
		}
	}
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))

type unmarshalWork struct {
	rows     Rows
	callback func(rows []Row) error
	reqBuf   []byte
}

func (uw *unmarshalWork) reset() {
	uw.rows.Reset()
	uw.callback = nil
	uw.reqBuf = uw.reqBuf[:0]
}

// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
	uw.rows.Unmarshal(bytesutil.ToUnsafeString(uw.reqBuf))
	rows := uw.rows.Rows
	rowsRead.Add(len(rows))

	// Fill missing timestamps with the current timestamp rounded to seconds.
	currentTimestamp := int64(fasttime.UnixTimestamp())
@@ -76,56 +151,25 @@ func (ctx *streamContext) Read(r io.Reader) bool {
		}
	}

	return true
}

type streamContext struct {
	Rows    Rows
	reqBuf  []byte
	tailBuf []byte
	err     error
}

func (ctx *streamContext) Error() error {
	if ctx.err == io.EOF {
		return nil
	if err := uw.callback(rows); err != nil {
		logger.Errorf("error when processing imported data: %s", err)
		putUnmarshalWork(uw)
		return
	}
	return ctx.err
	putUnmarshalWork(uw)
}

func (ctx *streamContext) reset() {
	ctx.Rows.Reset()
	ctx.reqBuf = ctx.reqBuf[:0]
	ctx.tailBuf = ctx.tailBuf[:0]
	ctx.err = nil
}

var (
	readCalls  = metrics.NewCounter(`vm_protoparser_read_calls_total{type="graphite"}`)
	readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="graphite"}`)
	rowsRead   = metrics.NewCounter(`vm_protoparser_rows_read_total{type="graphite"}`)
)

func getStreamContext() *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			return v.(*streamContext)
		}
		return &streamContext{}
func getUnmarshalWork() *unmarshalWork {
	v := unmarshalWorkPool.Get()
	if v == nil {
		return &unmarshalWork{}
	}
	return v.(*unmarshalWork)
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
func putUnmarshalWork(uw *unmarshalWork) {
	uw.reset()
	unmarshalWorkPool.Put(uw)
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
var unmarshalWorkPool sync.Pool
@@ -1,6 +1,7 @@
package influx

import (
	"bufio"
	"flag"
	"fmt"
	"io"
@@ -9,6 +10,7 @@ import (
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/metrics"
)
@@ -50,22 +52,25 @@ func ParseStream(r io.Reader, isGzipped bool, precision, db string, callback fun
		tsMultiplier = -1e3 * 3600
	}

	ctx := getStreamContext()
	ctx := getStreamContext(r)
	defer putStreamContext(ctx)
	for ctx.Read(r, tsMultiplier) {
		if err := callback(db, ctx.Rows.Rows); err != nil {
			return err
		}
	for ctx.Read() {
		uw := getUnmarshalWork()
		uw.callback = callback
		uw.db = db
		uw.tsMultiplier = tsMultiplier
		uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
		common.ScheduleUnmarshalWork(uw)
	}
	return ctx.Error()
}

func (ctx *streamContext) Read(r io.Reader, tsMultiplier int64) bool {
func (ctx *streamContext) Read() bool {
	readCalls.Inc()
	if ctx.err != nil {
		return false
	}
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(r, ctx.reqBuf, ctx.tailBuf)
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(ctx.br, ctx.reqBuf, ctx.tailBuf)
	if ctx.err != nil {
		if ctx.err != io.EOF {
			readErrors.Inc()
@@ -73,13 +78,90 @@ func (ctx *streamContext) Read(r io.Reader, tsMultiplier int64) bool {
		}
		return false
	}
	ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf))
	rowsRead.Add(len(ctx.Rows.Rows))
	return true
}

	rows := ctx.Rows.Rows
var (
	readCalls  = metrics.NewCounter(`vm_protoparser_read_calls_total{type="influx"}`)
	readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="influx"}`)
	rowsRead   = metrics.NewCounter(`vm_protoparser_rows_read_total{type="influx"}`)
)

	// Adjust timestamps according to tsMultiplier
type streamContext struct {
	br      *bufio.Reader
	reqBuf  []byte
	tailBuf []byte
	err     error
}

func (ctx *streamContext) Error() error {
	if ctx.err == io.EOF {
		return nil
	}
	return ctx.err
}

func (ctx *streamContext) reset() {
	ctx.br.Reset(nil)
	ctx.reqBuf = ctx.reqBuf[:0]
	ctx.tailBuf = ctx.tailBuf[:0]
	ctx.err = nil
}

func getStreamContext(r io.Reader) *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		ctx.br.Reset(r)
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			ctx := v.(*streamContext)
			ctx.br.Reset(r)
			return ctx
		}
		return &streamContext{
			br: bufio.NewReaderSize(r, 64*1024),
		}
	}
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))

type unmarshalWork struct {
	rows         Rows
	callback     func(db string, rows []Row) error
	db           string
	tsMultiplier int64
	reqBuf       []byte
}

func (uw *unmarshalWork) reset() {
	uw.rows.Reset()
	uw.callback = nil
	uw.db = ""
	uw.tsMultiplier = 0
	uw.reqBuf = uw.reqBuf[:0]
}

// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
	uw.rows.Unmarshal(bytesutil.ToUnsafeString(uw.reqBuf))
	rows := uw.rows.Rows
	rowsRead.Add(len(rows))

	// Adjust timestamps according to uw.tsMultiplier
	currentTs := time.Now().UnixNano() / 1e6
	tsMultiplier := uw.tsMultiplier
	if tsMultiplier >= 1 {
		for i := range rows {
			row := &rows[i]
@@ -110,56 +192,25 @@ func (ctx *streamContext) Read(r io.Reader, tsMultiplier int64) bool {
		}
	}

	return true
}

var (
	readCalls  = metrics.NewCounter(`vm_protoparser_read_calls_total{type="influx"}`)
	readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="influx"}`)
	rowsRead   = metrics.NewCounter(`vm_protoparser_rows_read_total{type="influx"}`)
)

type streamContext struct {
	Rows    Rows
	reqBuf  []byte
	tailBuf []byte
	err     error
}

func (ctx *streamContext) Error() error {
	if ctx.err == io.EOF {
		return nil
	if err := uw.callback(uw.db, rows); err != nil {
		logger.Errorf("error when processing imported data: %s", err)
		putUnmarshalWork(uw)
		return
	}
	return ctx.err
	putUnmarshalWork(uw)
}

func (ctx *streamContext) reset() {
	ctx.Rows.Reset()
	ctx.reqBuf = ctx.reqBuf[:0]
	ctx.tailBuf = ctx.tailBuf[:0]
	ctx.err = nil
}

func getStreamContext() *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			return v.(*streamContext)
		}
		return &streamContext{}
func getUnmarshalWork() *unmarshalWork {
	v := unmarshalWorkPool.Get()
	if v == nil {
		return &unmarshalWork{}
	}
	return v.(*unmarshalWork)
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
func putUnmarshalWork(uw *unmarshalWork) {
	uw.reset()
	unmarshalWorkPool.Put(uw)
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
var unmarshalWorkPool sync.Pool
lib/protoparser/native/streamparser.go
Normal file
210
lib/protoparser/native/streamparser.go
Normal file
|
@ -0,0 +1,210 @@
|
|||
package native
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"sync"
|
||||
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
|
||||
"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
|
||||
"github.com/VictoriaMetrics/metrics"
|
||||
)
|
||||
|
||||
// ParseStream parses /api/v1/import/native lines from req and calls callback for parsed blocks.
|
||||
//
|
||||
// The callback can be called multiple times for streamed data from req.
|
||||
//
|
||||
// callback shouldn't hold block after returning.
|
||||
// callback can be called in parallel from multiple concurrent goroutines.
|
||||
func ParseStream(req *http.Request, callback func(block *Block) error) error {
|
||||
r := req.Body
|
||||
if req.Header.Get("Content-Encoding") == "gzip" {
|
||||
zr, err := common.GetGzipReader(r)
|
||||
if err != nil {
|
||||
return fmt.Errorf("cannot read gzipped vmimport data: %w", err)
|
||||
}
|
||||
defer common.PutGzipReader(zr)
|
||||
r = zr
|
||||
}
|
||||
br := getBufferedReader(r)
|
||||
defer putBufferedReader(br)
|
||||
|
||||
// Read time range (tr)
|
||||
trBuf := make([]byte, 16)
|
||||
var tr storage.TimeRange
|
||||
if _, err := io.ReadFull(br, trBuf); err != nil {
|
||||
readErrors.Inc()
|
||||
return fmt.Errorf("cannot read time range: %w", err)
|
||||
}
|
||||
tr.MinTimestamp = encoding.UnmarshalInt64(trBuf)
|
||||
tr.MaxTimestamp = encoding.UnmarshalInt64(trBuf[8:])
|
||||
|
||||
// Read native blocks and feed workers with work.
|
||||
sizeBuf := make([]byte, 4)
|
||||
for {
|
||||
uw := getUnmarshalWork()
|
||||
uw.tr = tr
|
||||
uw.callback = callback
|
||||
|
||||
// Read uw.metricNameBuf
|
||||
if _, err := io.ReadFull(br, sizeBuf); err != nil {
|
||||
if err == io.EOF {
|
||||
// End of stream
|
||||
putUnmarshalWork(uw)
|
||||
return nil
|
||||
}
|
||||
readErrors.Inc()
|
||||
return fmt.Errorf("cannot read metricName size: %w", err)
|
||||
}
|
||||
readCalls.Inc()
|
||||
bufSize := encoding.UnmarshalUint32(sizeBuf)
|
||||
if bufSize > 1024*1024 {
|
||||
parseErrors.Inc()
|
||||
return fmt.Errorf("too big metricName size; got %d; shouldn't exceed %d", bufSize, 1024*1024)
|
||||
}
|
||||
uw.metricNameBuf = bytesutil.Resize(uw.metricNameBuf, int(bufSize))
|
||||
if _, err := io.ReadFull(br, uw.metricNameBuf); err != nil {
|
||||
readErrors.Inc()
|
||||
return fmt.Errorf("cannot read metricName with size %d bytes: %w", bufSize, err)
|
||||
}
|
||||
readCalls.Inc()
|
||||
|
||||
// Read uw.blockBuf
|
||||
if _, err := io.ReadFull(br, sizeBuf); err != nil {
|
||||
readErrors.Inc()
|
||||
return fmt.Errorf("cannot read native block size: %w", err)
|
||||
}
|
||||
readCalls.Inc()
|
||||
bufSize = encoding.UnmarshalUint32(sizeBuf)
|
||||
if bufSize > 1024*1024 {
|
||||
parseErrors.Inc()
|
||||
return fmt.Errorf("too big native block size; got %d; shouldn't exceed %d", bufSize, 1024*1024)
|
||||
}
|
||||
uw.blockBuf = bytesutil.Resize(uw.blockBuf, int(bufSize))
|
||||
if _, err := io.ReadFull(br, uw.blockBuf); err != nil {
|
||||
readErrors.Inc()
|
||||
return fmt.Errorf("cannot read native block with size %d bytes: %w", bufSize, err)
|
||||
}
|
||||
readCalls.Inc()
|
||||
blocksRead.Inc()
|
||||
|
||||
common.ScheduleUnmarshalWork(uw)
|
||||
}
|
||||
}
|
||||
|
||||
// Block is a single block from `/api/v1/import/native` request.
|
||||
type Block struct {
|
||||
MetricName storage.MetricName
|
||||
Values []float64
|
||||
Timestamps []int64
|
||||
}
|
||||
|
||||
func (b *Block) reset() {
|
||||
b.MetricName.Reset()
|
||||
b.Values = b.Values[:0]
|
||||
b.Timestamps = b.Timestamps[:0]
|
||||
}
|
||||
|
||||
var (
|
||||
readCalls = metrics.NewCounter(`vm_protoparser_read_calls_total{type="native"}`)
|
||||
readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="native"}`)
|
||||
rowsRead = metrics.NewCounter(`vm_protoparser_rows_read_total{type="native"}`)
|
||||
blocksRead = metrics.NewCounter(`vm_protoparser_blocks_read_total{type="native"}`)
|
||||
|
||||
parseErrors = metrics.NewCounter(`vm_protoparser_parse_errors_total{type="native"}`)
|
||||
processErrors = metrics.NewCounter(`vm_protoparser_process_errors_total{type="native"}`)
|
||||
)
|
||||
|
||||
type unmarshalWork struct {
|
||||
tr storage.TimeRange
|
||||
callback func(block *Block) error
|
||||
metricNameBuf []byte
|
||||
blockBuf []byte
|
||||
block Block
|
||||
}
|
||||
|
||||
func (uw *unmarshalWork) reset() {
|
||||
uw.callback = nil
|
||||
uw.metricNameBuf = uw.metricNameBuf[:0]
|
||||
uw.blockBuf = uw.blockBuf[:0]
|
||||
uw.block.reset()
|
||||
}
|
||||
|
||||
// Unmarshal implements common.UnmarshalWork
|
||||
func (uw *unmarshalWork) Unmarshal() {
|
||||
if err := uw.unmarshal(); err != nil {
|
||||
parseErrors.Inc()
|
||||
logger.Errorf("error when unmarshaling native block: %s", err)
|
||||
putUnmarshalWork(uw)
|
||||
return
|
||||
}
|
||||
if err := uw.callback(&uw.block); err != nil {
|
||||
processErrors.Inc()
|
||||
logger.Errorf("error when processing native block: %s", err)
|
||||
putUnmarshalWork(uw)
|
||||
return
|
||||
}
|
||||
putUnmarshalWork(uw)
|
||||
}
|
||||
|
||||
func (uw *unmarshalWork) unmarshal() error {
|
||||
block := &uw.block
|
||||
if err := block.MetricName.Unmarshal(uw.metricNameBuf); err != nil {
|
||||
return fmt.Errorf("cannot unmarshal metricName from %d bytes: %w", len(uw.metricNameBuf), err)
|
||||
}
|
||||
tmpBlock := blockPool.Get().(*storage.Block)
|
||||
defer blockPool.Put(tmpBlock)
|
||||
tail, err := tmpBlock.UnmarshalPortable(uw.blockBuf)
|
||||
if err != nil {
|
||||
return fmt.Errorf("cannot unmarshal native block from %d bytes: %w", len(uw.blockBuf), err)
|
||||
}
|
||||
if len(tail) > 0 {
|
||||
return fmt.Errorf("unexpected non-empty tail left after unmarshaling native block from %d bytes; len(tail)=%d bytes", len(uw.blockBuf), len(tail))
|
||||
}
|
||||
block.Timestamps, block.Values = tmpBlock.AppendRowsWithTimeRangeFilter(block.Timestamps[:0], block.Values[:0], uw.tr)
|
||||
rowsRead.Add(len(block.Timestamps))
|
||||
return nil
|
||||
}
|
||||
|
||||
var blockPool = &sync.Pool{
|
||||
New: func() interface{} {
|
||||
return &storage.Block{}
|
||||
},
|
||||
}
|
||||
|
||||
func getUnmarshalWork() *unmarshalWork {
|
||||
v := unmarshalWorkPool.Get()
|
||||
if v == nil {
|
||||
return &unmarshalWork{}
|
||||
}
|
||||
return v.(*unmarshalWork)
|
||||
}
|
||||
|
||||
func putUnmarshalWork(uw *unmarshalWork) {
|
||||
uw.reset()
|
||||
unmarshalWorkPool.Put(uw)
|
||||
}
|
||||
|
||||
var unmarshalWorkPool sync.Pool
|
||||
|
||||
func getBufferedReader(r io.Reader) *bufio.Reader {
|
||||
v := bufferedReaderPool.Get()
|
||||
if v == nil {
|
||||
return bufio.NewReaderSize(r, 64*1024)
|
||||
}
|
||||
br := v.(*bufio.Reader)
|
||||
br.Reset(r)
|
||||
return br
|
||||
}
|
||||
|
||||
func putBufferedReader(br *bufio.Reader) {
|
||||
br.Reset(nil)
|
||||
bufferedReaderPool.Put(br)
|
||||
}
|
||||
|
||||
var bufferedReaderPool sync.Pool
|
|
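Reconstructed from the reader above, the /api/v1/import/native stream is framed as a 16-byte time range followed by length-prefixed (metricName, block) pairs, with both lengths capped at 1MB. A sketch of a writer producing one such frame (it assumes encoding.MarshalInt64/MarshalUint32 mirror the Unmarshal* helpers used above, and that the block payload comes from the export side's portable block marshaling):

package main

import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
)

// writeNativeFrame frames a single (metricName, block) pair the way
// ParseStream consumes it. block must be a portably marshaled storage
// block; here it is an opaque payload for illustration.
func writeNativeFrame(minTs, maxTs int64, metricName, block []byte) []byte {
	var buf []byte
	// 16-byte time range header; read once per stream.
	buf = encoding.MarshalInt64(buf, minTs)
	buf = encoding.MarshalInt64(buf, maxTs)
	// [4-byte metricName size][metricName][4-byte block size][block];
	// either size above 1024*1024 makes ParseStream reject the stream.
	buf = encoding.MarshalUint32(buf, uint32(len(metricName)))
	buf = append(buf, metricName...)
	buf = encoding.MarshalUint32(buf, uint32(len(block)))
	buf = append(buf, block...)
	return buf
}

func main() {
	_ = writeNativeFrame(0, 1<<62, []byte("metric"), []byte("payload"))
}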
@@ -1,6 +1,7 @@
package opentsdb

import (
	"bufio"
	"flag"
	"fmt"
	"io"
@@ -10,6 +11,7 @@ import (

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/metrics"
)
@@ -25,22 +27,23 @@ var (
//
// callback shouldn't hold rows after returning.
func ParseStream(r io.Reader, callback func(rows []Row) error) error {
	ctx := getStreamContext()
	ctx := getStreamContext(r)
	defer putStreamContext(ctx)
	for ctx.Read(r) {
		if err := callback(ctx.Rows.Rows); err != nil {
			return err
		}
	for ctx.Read() {
		uw := getUnmarshalWork()
		uw.callback = callback
		uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
		common.ScheduleUnmarshalWork(uw)
	}
	return ctx.Error()
}

func (ctx *streamContext) Read(r io.Reader) bool {
func (ctx *streamContext) Read() bool {
	readCalls.Inc()
	if ctx.err != nil {
		return false
	}
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(r, ctx.reqBuf, ctx.tailBuf)
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(ctx.br, ctx.reqBuf, ctx.tailBuf)
	if ctx.err != nil {
		if ctx.err != io.EOF {
			readErrors.Inc()
@@ -48,10 +51,82 @@ func (ctx *streamContext) Read(r io.Reader) bool {
		}
		return false
	}
	ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf))
	rowsRead.Add(len(ctx.Rows.Rows))
	return true
}

	rows := ctx.Rows.Rows
type streamContext struct {
	br      *bufio.Reader
	reqBuf  []byte
	tailBuf []byte
	err     error
}

func (ctx *streamContext) Error() error {
	if ctx.err == io.EOF {
		return nil
	}
	return ctx.err
}

func (ctx *streamContext) reset() {
	ctx.br.Reset(nil)
	ctx.reqBuf = ctx.reqBuf[:0]
	ctx.tailBuf = ctx.tailBuf[:0]
	ctx.err = nil
}

var (
	readCalls  = metrics.NewCounter(`vm_protoparser_read_calls_total{type="opentsdb"}`)
	readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="opentsdb"}`)
	rowsRead   = metrics.NewCounter(`vm_protoparser_rows_read_total{type="opentsdb"}`)
)

func getStreamContext(r io.Reader) *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		ctx.br.Reset(r)
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			ctx := v.(*streamContext)
			ctx.br.Reset(r)
			return ctx
		}
		return &streamContext{
			br: bufio.NewReaderSize(r, 64*1024),
		}
	}
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))

type unmarshalWork struct {
	rows     Rows
	callback func(rows []Row) error
	reqBuf   []byte
}

func (uw *unmarshalWork) reset() {
	uw.rows.Reset()
	uw.callback = nil
	uw.reqBuf = uw.reqBuf[:0]
}

// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
	uw.rows.Unmarshal(bytesutil.ToUnsafeString(uw.reqBuf))
	rows := uw.rows.Rows
	rowsRead.Add(len(rows))

	// Fill in missing timestamps
	currentTimestamp := int64(fasttime.UnixTimestamp())
@@ -75,56 +150,25 @@ func (ctx *streamContext) Read(r io.Reader) bool {
		}
	}

	return true
}

type streamContext struct {
	Rows    Rows
	reqBuf  []byte
	tailBuf []byte
	err     error
}

func (ctx *streamContext) Error() error {
	if ctx.err == io.EOF {
		return nil
	if err := uw.callback(rows); err != nil {
		logger.Errorf("error when processing imported data: %s", err)
		putUnmarshalWork(uw)
		return
	}
	return ctx.err
	putUnmarshalWork(uw)
}

func (ctx *streamContext) reset() {
	ctx.Rows.Reset()
	ctx.reqBuf = ctx.reqBuf[:0]
	ctx.tailBuf = ctx.tailBuf[:0]
	ctx.err = nil
}

var (
	readCalls  = metrics.NewCounter(`vm_protoparser_read_calls_total{type="opentsdb"}`)
	readErrors = metrics.NewCounter(`vm_protoparser_read_errors_total{type="opentsdb"}`)
	rowsRead   = metrics.NewCounter(`vm_protoparser_rows_read_total{type="opentsdb"}`)
)

func getStreamContext() *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			return v.(*streamContext)
		}
		return &streamContext{}
func getUnmarshalWork() *unmarshalWork {
	v := unmarshalWorkPool.Get()
	if v == nil {
		return &unmarshalWork{}
	}
	return v.(*unmarshalWork)
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
func putUnmarshalWork(uw *unmarshalWork) {
	uw.reset()
	unmarshalWorkPool.Put(uw)
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
var unmarshalWorkPool sync.Pool
@@ -4,17 +4,17 @@ import (
	"github.com/valyala/fastjson"
)

// GetParser returns JSON parser.
// getJSONParser returns JSON parser.
//
// The parser must be returned to the pool via PutParser when no longer needed.
func GetParser() *fastjson.Parser {
// The parser must be returned to the pool via putJSONParser when no longer needed.
func getJSONParser() *fastjson.Parser {
	return parserPool.Get()
}

// PutParser returns p to the pool.
// putJSONParser returns p to the pool.
//
// p cannot be used after returning to the pool.
func PutParser(p *fastjson.Parser) {
func putJSONParser(p *fastjson.Parser) {
	parserPool.Put(p)
}
@@ -9,8 +9,8 @@ func TestRowsUnmarshalFailure(t *testing.T) {
	f := func(s string) {
		t.Helper()
		var rows Rows
		p := GetParser()
		defer PutParser(p)
		p := getJSONParser()
		defer putJSONParser(p)
		v, err := p.Parse(s)
		if err != nil {
			// Expected JSON parser error
@@ -84,8 +84,8 @@ func TestRowsUnmarshalSuccess(t *testing.T) {
		t.Helper()
		var rows Rows

		p := GetParser()
		defer PutParser(p)
		p := getJSONParser()
		defer putJSONParser(p)
		v, err := p.Parse(s)
		if err != nil {
			t.Fatalf("cannot parse json %s: %s", s, err)
@@ -1,6 +1,7 @@
package opentsdbhttp

import (
	"bufio"
	"flag"
	"fmt"
	"io"
@@ -12,6 +13,7 @@ import (
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/metrics"
)
@@ -40,11 +42,11 @@ func ParseStream(req *http.Request, callback func(rows []Row) error) error {
		r = zr
	}

	ctx := getStreamContext()
	ctx := getStreamContext(r)
	defer putStreamContext(ctx)

	// Read the request in ctx.reqBuf
	lr := io.LimitReader(r, int64(maxInsertRequestSize.N)+1)
	lr := io.LimitReader(ctx.br, int64(maxInsertRequestSize.N)+1)
	reqLen, err := ctx.reqBuf.ReadFrom(lr)
	if err != nil {
		readErrors.Inc()
@@ -55,18 +57,86 @@ func ParseStream(req *http.Request, callback func(rows []Row) error) error {
		return fmt.Errorf("too big HTTP OpenTSDB request; mustn't exceed `-opentsdbhttp.maxInsertRequestSize=%d` bytes", maxInsertRequestSize.N)
	}

	// Unmarshal the request to ctx.Rows
	p := GetParser()
	defer PutParser(p)
	v, err := p.ParseBytes(ctx.reqBuf.B)
	uw := getUnmarshalWork()
	uw.callback = callback
	uw.reqBuf, ctx.reqBuf.B = ctx.reqBuf.B, uw.reqBuf
	common.ScheduleUnmarshalWork(uw)
	return nil
}

const secondMask int64 = 0x7FFFFFFF00000000

type streamContext struct {
	br     *bufio.Reader
	reqBuf bytesutil.ByteBuffer
}

func (ctx *streamContext) reset() {
	ctx.br.Reset(nil)
	ctx.reqBuf.Reset()
}

var (
	readCalls       = metrics.NewCounter(`vm_protoparser_read_calls_total{type="opentsdbhttp"}`)
	readErrors      = metrics.NewCounter(`vm_protoparser_read_errors_total{type="opentsdbhttp"}`)
	rowsRead        = metrics.NewCounter(`vm_protoparser_rows_read_total{type="opentsdbhttp"}`)
	unmarshalErrors = metrics.NewCounter(`vm_protoparser_unmarshal_errors_total{type="opentsdbhttp"}`)
)

func getStreamContext(r io.Reader) *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		ctx.br.Reset(r)
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			ctx := v.(*streamContext)
			ctx.br.Reset(r)
			return ctx
		}
		return &streamContext{
			br: bufio.NewReaderSize(r, 64*1024),
		}
	}
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))

type unmarshalWork struct {
	rows     Rows
	callback func(rows []Row) error
	reqBuf   []byte
}

func (uw *unmarshalWork) reset() {
	uw.rows.Reset()
	uw.callback = nil
	uw.reqBuf = uw.reqBuf[:0]
}

// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
	p := getJSONParser()
	defer putJSONParser(p)
	v, err := p.ParseBytes(uw.reqBuf)
	if err != nil {
		unmarshalErrors.Inc()
		return fmt.Errorf("cannot parse HTTP OpenTSDB json: %w", err)
		logger.Errorf("cannot parse HTTP OpenTSDB json: %s", err)
		return
	}
	ctx.Rows.Unmarshal(v)
	rowsRead.Add(len(ctx.Rows.Rows))

	rows := ctx.Rows.Rows
	uw.rows.Unmarshal(v)
	rows := uw.rows.Rows
	rowsRead.Add(len(rows))

	// Fill in missing timestamps
	currentTimestamp := int64(fasttime.UnixTimestamp())
@@ -94,49 +164,25 @@ func ParseStream(req *http.Request, callback func(rows []Row) error) error {
		}
	}

	// Insert ctx.Rows to db.
	return callback(rows)
}

const secondMask int64 = 0x7FFFFFFF00000000

type streamContext struct {
	Rows   Rows
	reqBuf bytesutil.ByteBuffer
}

func (ctx *streamContext) reset() {
	ctx.Rows.Reset()
	ctx.reqBuf.Reset()
}

var (
	readCalls       = metrics.NewCounter(`vm_protoparser_read_calls_total{type="opentsdbhttp"}`)
	readErrors      = metrics.NewCounter(`vm_protoparser_read_errors_total{type="opentsdbhttp"}`)
	rowsRead        = metrics.NewCounter(`vm_protoparser_rows_read_total{type="opentsdbhttp"}`)
	unmarshalErrors = metrics.NewCounter(`vm_protoparser_unmarshal_errors_total{type="opentsdbhttp"}`)
)

func getStreamContext() *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			return v.(*streamContext)
		}
		return &streamContext{}
	if err := uw.callback(rows); err != nil {
		logger.Errorf("error when processing imported data: %s", err)
		putUnmarshalWork(uw)
		return
	}
	putUnmarshalWork(uw)
}

func putStreamContext(ctx *streamContext) {
	ctx.reset()
	select {
	case streamContextPoolCh <- ctx:
	default:
		streamContextPool.Put(ctx)
	}
func getUnmarshalWork() *unmarshalWork {
	v := unmarshalWorkPool.Get()
	if v == nil {
		return &unmarshalWork{}
	}
	return v.(*unmarshalWork)
}

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))
func putUnmarshalWork(uw *unmarshalWork) {
	uw.reset()
	unmarshalWorkPool.Put(uw)
}

var unmarshalWorkPool sync.Pool
@@ -1,6 +1,7 @@
package prometheus

import (
	"bufio"
	"fmt"
	"io"
	"runtime"
@@ -8,6 +9,7 @@ import (
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
	"github.com/VictoriaMetrics/metrics"
)
@@ -26,22 +28,24 @@ func ParseStream(r io.Reader, defaultTimestamp int64, isGzipped bool, callback f
		defer common.PutGzipReader(zr)
		r = zr
	}
	ctx := getStreamContext()
	ctx := getStreamContext(r)
	defer putStreamContext(ctx)
	for ctx.Read(r, defaultTimestamp) {
		if err := callback(ctx.Rows.Rows); err != nil {
			return err
		}
	for ctx.Read() {
		uw := getUnmarshalWork()
		uw.callback = callback
		uw.defaultTimestamp = defaultTimestamp
		uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
		common.ScheduleUnmarshalWork(uw)
	}
	return ctx.Error()
}

func (ctx *streamContext) Read(r io.Reader, defaultTimestamp int64) bool {
func (ctx *streamContext) Read() bool {
	readCalls.Inc()
	if ctx.err != nil {
		return false
	}
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(r, ctx.reqBuf, ctx.tailBuf)
	ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlock(ctx.br, ctx.reqBuf, ctx.tailBuf)
	if ctx.err != nil {
		if ctx.err != io.EOF {
			readErrors.Inc()
@@ -49,26 +53,11 @@ func (ctx *streamContext) Read(r io.Reader, defaultTimestamp int64) bool {
		}
		return false
	}
	ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf))
	rowsRead.Add(len(ctx.Rows.Rows))

	rows := ctx.Rows.Rows

	// Fill missing timestamps with the current timestamp.
	if defaultTimestamp <= 0 {
		defaultTimestamp = int64(time.Now().UnixNano() / 1e6)
	}
	for i := range rows {
		r := &rows[i]
		if r.Timestamp == 0 {
			r.Timestamp = defaultTimestamp
		}
	}
	return true
}

type streamContext struct {
	Rows    Rows
	br      *bufio.Reader
	reqBuf  []byte
	tailBuf []byte
	err     error
@@ -82,7 +71,7 @@ func (ctx *streamContext) Error() error {
}

func (ctx *streamContext) reset() {
	ctx.Rows.Reset()
	ctx.br.Reset(nil)
	ctx.reqBuf = ctx.reqBuf[:0]
	ctx.tailBuf = ctx.tailBuf[:0]
	ctx.err = nil
@@ -94,15 +83,20 @@ var (
	rowsRead = metrics.NewCounter(`vm_protoparser_rows_read_total{type="prometheus"}`)
)

func getStreamContext() *streamContext {
func getStreamContext(r io.Reader) *streamContext {
	select {
	case ctx := <-streamContextPoolCh:
		ctx.br.Reset(r)
		return ctx
	default:
		if v := streamContextPool.Get(); v != nil {
			return v.(*streamContext)
			ctx := v.(*streamContext)
			ctx.br.Reset(r)
			return ctx
		}
		return &streamContext{
			br: bufio.NewReaderSize(r, 64*1024),
		}
		return &streamContext{}
	}
}

@@ -117,3 +111,58 @@ func putStreamContext(ctx *streamContext) {

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))

type unmarshalWork struct {
	rows             Rows
	callback         func(rows []Row) error
	defaultTimestamp int64
	reqBuf           []byte
}

func (uw *unmarshalWork) reset() {
	uw.rows.Reset()
	uw.callback = nil
	uw.defaultTimestamp = 0
	uw.reqBuf = uw.reqBuf[:0]
}

// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
	uw.rows.Unmarshal(bytesutil.ToUnsafeString(uw.reqBuf))
	rows := uw.rows.Rows
	rowsRead.Add(len(rows))

	// Fill missing timestamps with the current timestamp.
	defaultTimestamp := uw.defaultTimestamp
	if defaultTimestamp <= 0 {
		defaultTimestamp = int64(time.Now().UnixNano() / 1e6)
	}
	for i := range rows {
		r := &rows[i]
		if r.Timestamp == 0 {
			r.Timestamp = defaultTimestamp
		}
	}

	if err := uw.callback(rows); err != nil {
		logger.Errorf("error when processing imported data: %s", err)
		putUnmarshalWork(uw)
		return
	}
	putUnmarshalWork(uw)
}

func getUnmarshalWork() *unmarshalWork {
	v := unmarshalWorkPool.Get()
	if v == nil {
		return &unmarshalWork{}
	}
	return v.(*unmarshalWork)
}

func putUnmarshalWork(uw *unmarshalWork) {
	uw.reset()
	unmarshalWorkPool.Put(uw)
}

var unmarshalWorkPool sync.Pool
@ -4,22 +4,43 @@ import (
"bytes"
"compress/gzip"
"reflect"
"sort"
"sync"
"testing"
"time"

"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
)

func TestParseStream(t *testing.T) {
common.StartUnmarshalWorkers()
defer common.StopUnmarshalWorkers()

const defaultTimestamp = 123
f := func(s string, rowsExpected []Row) {
t.Helper()
bb := bytes.NewBufferString(s)
var result []Row
var lock sync.Mutex
doneCh := make(chan struct{})
err := ParseStream(bb, defaultTimestamp, false, func(rows []Row) error {
lock.Lock()
result = appendRowCopies(result, rows)
if len(result) == len(rowsExpected) {
close(doneCh)
}
lock.Unlock()
return nil
})
if err != nil {
t.Fatalf("unexpected error when parsing %q: %s", s, err)
}
select {
case <-doneCh:
case <-time.After(time.Second):
t.Fatalf("timeout")
}
sortRows(result)
if !reflect.DeepEqual(result, rowsExpected) {
t.Fatalf("unexpected rows parsed; got\n%v\nwant\n%v", result, rowsExpected)
}

@ -34,25 +55,41 @@ func TestParseStream(t *testing.T) {
t.Fatalf("unexpected error when closing gzip writer: %s", err)
}
result = nil
doneCh = make(chan struct{})
err = ParseStream(bb, defaultTimestamp, true, func(rows []Row) error {
lock.Lock()
result = appendRowCopies(result, rows)
if len(result) == len(rowsExpected) {
close(doneCh)
}
lock.Unlock()
return nil
})
if err != nil {
t.Fatalf("unexpected error when parsing compressed %q: %s", s, err)
}
select {
case <-doneCh:
case <-time.After(time.Second):
t.Fatalf("timeout on compressed stream")
}
sortRows(result)
if !reflect.DeepEqual(result, rowsExpected) {
t.Fatalf("unexpected rows parsed; got\n%v\nwant\n%v", result, rowsExpected)
t.Fatalf("unexpected compressed rows parsed; got\n%v\nwant\n%v", result, rowsExpected)
}
}

f("", nil)
f("foo 123 456", []Row{{
Metric: "foo",
Value: 123,
Timestamp: 456,
}})
f(`foo{bar="baz"} 1 2`+"\n"+`aaa{} 3 4`, []Row{
{
Metric: "aaa",
Value: 3,
Timestamp: 4,
},
{
Metric: "foo",
Tags: []Tag{{

@ -62,11 +99,6 @@ func TestParseStream(t *testing.T) {
Value: 1,
Timestamp: 2,
},
{
Metric: "aaa",
Value: 3,
Timestamp: 4,
},
})
f("foo 23", []Row{{
Metric: "foo",

@ -75,6 +107,13 @@
}})
}

func sortRows(rows []Row) {
sort.Slice(rows, func(i, j int) bool {
a, b := rows[i], rows[j]
return a.Metric < b.Metric
})
}

func appendRowCopies(dst, src []Row) []Row {
for _, r := range src {
// Make a copy of r, since r may contain garbage after returning from the callback to ParseStream.
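appendRowCopies is cut off by the diff view above. The point of the helper is that ParseStream reuses its read buffers once the callback returns, so any rows a test retains must be deep-copied first. A self-contained sketch of such a deep copy (the Row/Tag shapes here are assumed from the test code above, not taken verbatim from the parser):

package main

import "fmt"

// Hypothetical mirrors of the parser's Row/Tag shapes, defined here only to
// make the sketch self-contained.
type Tag struct {
	Key   string
	Value string
}

type Row struct {
	Metric    string
	Tags      []Tag
	Value     float64
	Timestamp int64
}

// copyRows deep-copies rows, including the strings, so the copies stay valid
// after the parser reuses its buffers; the real appendRowCopies in the test
// above serves the same purpose.
func copyRows(dst, src []Row) []Row {
	for _, r := range src {
		rCopy := Row{
			Metric:    string(append([]byte(nil), r.Metric...)),
			Value:     r.Value,
			Timestamp: r.Timestamp,
		}
		for _, tag := range r.Tags {
			rCopy.Tags = append(rCopy.Tags, Tag{
				Key:   string(append([]byte(nil), tag.Key...)),
				Value: string(append([]byte(nil), tag.Value...)),
			})
		}
		dst = append(dst, rCopy)
	}
	return dst
}

func main() {
	src := []Row{{Metric: "foo", Tags: []Tag{{Key: "bar", Value: "baz"}}, Value: 1, Timestamp: 2}}
	fmt.Printf("%+v\n", copyRows(nil, src))
}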
@ -1,6 +1,7 @@
package promremotewrite

import (
"bufio"
"fmt"
"io"
"net/http"

@ -9,7 +10,9 @@ import (

"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompb"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics"
"github.com/golang/snappy"
)

@ -18,46 +21,42 @@ var maxInsertRequestSize = flagutil.NewBytes("maxInsertRequestSize", 32*1024*102

// ParseStream parses Prometheus remote_write message req and calls callback for the parsed timeseries.
//
// callback shouldn't hold timeseries after returning.
func ParseStream(req *http.Request, callback func(timeseries []prompb.TimeSeries) error) error {
ctx := getPushCtx()
// callback shouldn't hold tss after returning.
func ParseStream(req *http.Request, callback func(tss []prompb.TimeSeries) error) error {
ctx := getPushCtx(req.Body)
defer putPushCtx(ctx)
if err := ctx.Read(req); err != nil {
if err := ctx.Read(); err != nil {
return err
}
return callback(ctx.wr.Timeseries)
uw := getUnmarshalWork()
uw.callback = callback
uw.reqBuf, ctx.reqBuf.B = ctx.reqBuf.B, uw.reqBuf
common.ScheduleUnmarshalWork(uw)
return nil
}

type pushCtx struct {
wr prompb.WriteRequest
reqBuf []byte
br *bufio.Reader
reqBuf bytesutil.ByteBuffer
}

func (ctx *pushCtx) reset() {
ctx.wr.Reset()
ctx.reqBuf = ctx.reqBuf[:0]
ctx.br.Reset(nil)
ctx.reqBuf.Reset()
}

func (ctx *pushCtx) Read(r *http.Request) error {
func (ctx *pushCtx) Read() error {
readCalls.Inc()
var err error
ctx.reqBuf, err = readSnappy(ctx.reqBuf[:0], r.Body)
lr := io.LimitReader(ctx.br, int64(maxInsertRequestSize.N)+1)
reqLen, err := ctx.reqBuf.ReadFrom(lr)
if err != nil {
readErrors.Inc()
return fmt.Errorf("cannot read prompb.WriteRequest: %w", err)
return fmt.Errorf("cannot read compressed request: %w", err)
}
if err = ctx.wr.Unmarshal(ctx.reqBuf); err != nil {
unmarshalErrors.Inc()
return fmt.Errorf("cannot unmarshal prompb.WriteRequest with size %d bytes: %w", len(ctx.reqBuf), err)
if reqLen > int64(maxInsertRequestSize.N) {
readErrors.Inc()
return fmt.Errorf("too big packed request; mustn't exceed `-maxInsertRequestSize=%d` bytes", maxInsertRequestSize.N)
}

rows := 0
tss := ctx.wr.Timeseries
for i := range tss {
rows += len(tss[i].Samples)
}
rowsRead.Add(rows)

return nil
}

@ -68,15 +67,20 @@ var (
unmarshalErrors = metrics.NewCounter(`vm_protoparser_unmarshal_errors_total{type="promremotewrite"}`)
)

func getPushCtx() *pushCtx {
func getPushCtx(r io.Reader) *pushCtx {
select {
case ctx := <-pushCtxPoolCh:
ctx.br.Reset(r)
return ctx
default:
if v := pushCtxPool.Get(); v != nil {
return v.(*pushCtx)
ctx := v.(*pushCtx)
ctx.br.Reset(r)
return ctx
}
return &pushCtx{
br: bufio.NewReaderSize(r, 64*1024),
}
return &pushCtx{}
}
}

@ -92,34 +96,66 @@ func putPushCtx(ctx *pushCtx) {
var pushCtxPool sync.Pool
var pushCtxPoolCh = make(chan *pushCtx, runtime.GOMAXPROCS(-1))

func readSnappy(dst []byte, r io.Reader) ([]byte, error) {
lr := io.LimitReader(r, int64(maxInsertRequestSize.N)+1)
type unmarshalWork struct {
wr prompb.WriteRequest
callback func(tss []prompb.TimeSeries) error
reqBuf []byte
}

func (uw *unmarshalWork) reset() {
uw.wr.Reset()
uw.callback = nil
uw.reqBuf = uw.reqBuf[:0]
}

// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
bb := bodyBufferPool.Get()
reqLen, err := bb.ReadFrom(lr)
defer bodyBufferPool.Put(bb)
var err error
bb.B, err = snappy.Decode(bb.B[:cap(bb.B)], uw.reqBuf)
if err != nil {
bodyBufferPool.Put(bb)
return dst, fmt.Errorf("cannot read compressed request: %w", err)
logger.Errorf("cannot decompress request with length %d: %s", len(uw.reqBuf), err)
return
}
if reqLen > int64(maxInsertRequestSize.N) {
return dst, fmt.Errorf("too big packed request; mustn't exceed `-maxInsertRequestSize=%d` bytes", maxInsertRequestSize.N)
if len(bb.B) > maxInsertRequestSize.N {
logger.Errorf("too big unpacked request; mustn't exceed `-maxInsertRequestSize=%d` bytes; got %d bytes", maxInsertRequestSize.N, len(bb.B))
return
}
if err := uw.wr.Unmarshal(bb.B); err != nil {
unmarshalErrors.Inc()
logger.Errorf("cannot unmarshal prompb.WriteRequest with size %d bytes: %s", len(bb.B), err)
return
}

buf := dst[len(dst):cap(dst)]
buf, err = snappy.Decode(buf, bb.B)
bodyBufferPool.Put(bb)
if err != nil {
err = fmt.Errorf("cannot decompress request with length %d: %w", reqLen, err)
return dst, err
rows := 0
tss := uw.wr.Timeseries
for i := range tss {
rows += len(tss[i].Samples)
}
if len(buf) > maxInsertRequestSize.N {
return dst, fmt.Errorf("too big unpacked request; mustn't exceed `-maxInsertRequestSize=%d` bytes; got %d bytes", maxInsertRequestSize.N, len(buf))
rowsRead.Add(rows)

if err := uw.callback(tss); err != nil {
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
if len(buf) > 0 && len(dst) < cap(dst) && &buf[0] == &dst[len(dst):cap(dst)][0] {
dst = dst[:len(dst)+len(buf)]
} else {
dst = append(dst, buf...)
}
return dst, nil
putUnmarshalWork(uw)
}

var bodyBufferPool bytesutil.ByteBufferPool

func getUnmarshalWork() *unmarshalWork {
v := unmarshalWorkPool.Get()
if v == nil {
return &unmarshalWork{}
}
return v.(*unmarshalWork)
}

func putUnmarshalWork(uw *unmarshalWork) {
uw.reset()
unmarshalWorkPool.Put(uw)
}

var unmarshalWorkPool sync.Pool
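The rewritten Unmarshal above decodes snappy-compressed data into a pooled bytesutil.ByteBuffer via snappy.Decode(bb.B[:cap(bb.B)], ...), which reuses the buffer's capacity whenever it is large enough. A standalone sketch of that decode-into-reused-buffer idiom with github.com/golang/snappy:

package main

import (
	"fmt"

	"github.com/golang/snappy"
)

func main() {
	payload := []byte("metric_a 1 2\nmetric_b 3 4\n")
	compressed := snappy.Encode(nil, payload)

	// Reuse a destination buffer across decodes: snappy.Decode writes into
	// dst when cap(dst) is large enough and allocates a fresh slice otherwise,
	// so passing dst[:cap(dst)] avoids allocations on the hot path.
	dst := make([]byte, 0, 1024)
	decoded, err := snappy.Decode(dst[:cap(dst)], compressed)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", decoded)
}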
@ -1,6 +1,7 @@
package vmimport

import (
"bufio"
"fmt"
"io"
"net/http"

@ -9,17 +10,20 @@ import (

"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/flagutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/common"
"github.com/VictoriaMetrics/metrics"
)

var maxLineLen = flagutil.NewBytes("import.maxLineLen", 100*1024*1024, "The maximum length in bytes of a single line accepted by /api/v1/import")
var maxLineLen = flagutil.NewBytes("import.maxLineLen", 100*1024*1024, "The maximum length in bytes of a single line accepted by /api/v1/import; "+
"the line length can be limited with `max_rows_per_line` query arg passed to /api/v1/export")

// ParseStream parses /api/v1/import lines from req and calls callback for the parsed rows.
//
// The callback can be called multiple times for streamed data from req.
//
// callback shouldn't hold rows after returning.
// callback is called from multiple concurrent goroutines.
func ParseStream(req *http.Request, callback func(rows []Row) error) error {
r := req.Body
if req.Header.Get("Content-Encoding") == "gzip" {

@ -30,23 +34,23 @@ func ParseStream(req *http.Request, callback func(rows []Row) error) error {
defer common.PutGzipReader(zr)
r = zr
}

ctx := getStreamContext()
ctx := getStreamContext(r)
defer putStreamContext(ctx)
for ctx.Read(r) {
if err := callback(ctx.Rows.Rows); err != nil {
return err
}
for ctx.Read() {
uw := getUnmarshalWork()
uw.callback = callback
uw.reqBuf, ctx.reqBuf = ctx.reqBuf, uw.reqBuf
common.ScheduleUnmarshalWork(uw)
}
return ctx.Error()
}

func (ctx *streamContext) Read(r io.Reader) bool {
func (ctx *streamContext) Read() bool {
readCalls.Inc()
if ctx.err != nil {
return false
}
ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlockExt(r, ctx.reqBuf, ctx.tailBuf, maxLineLen.N)
ctx.reqBuf, ctx.tailBuf, ctx.err = common.ReadLinesBlockExt(ctx.br, ctx.reqBuf, ctx.tailBuf, maxLineLen.N)
if ctx.err != nil {
if ctx.err != io.EOF {
readErrors.Inc()

@ -54,8 +58,6 @@ func (ctx *streamContext) Read(r io.Reader) bool {
}
return false
}
ctx.Rows.Unmarshal(bytesutil.ToUnsafeString(ctx.reqBuf))
rowsRead.Add(len(ctx.Rows.Rows))
return true
}

@ -66,7 +68,7 @@ var (
)

type streamContext struct {
Rows Rows
br *bufio.Reader
reqBuf []byte
tailBuf []byte
err error

@ -80,21 +82,26 @@ func (ctx *streamContext) Error() error {
}

func (ctx *streamContext) reset() {
ctx.Rows.Reset()
ctx.br.Reset(nil)
ctx.reqBuf = ctx.reqBuf[:0]
ctx.tailBuf = ctx.tailBuf[:0]
ctx.err = nil
}

func getStreamContext() *streamContext {
func getStreamContext(r io.Reader) *streamContext {
select {
case ctx := <-streamContextPoolCh:
ctx.br.Reset(r)
return ctx
default:
if v := streamContextPool.Get(); v != nil {
return v.(*streamContext)
ctx := v.(*streamContext)
ctx.br.Reset(r)
return ctx
}
return &streamContext{
br: bufio.NewReaderSize(r, 64*1024),
}
return &streamContext{}
}
}

@ -109,3 +116,46 @@ func putStreamContext(ctx *streamContext) {

var streamContextPool sync.Pool
var streamContextPoolCh = make(chan *streamContext, runtime.GOMAXPROCS(-1))

type unmarshalWork struct {
rows Rows
callback func(rows []Row) error
reqBuf []byte
}

func (uw *unmarshalWork) reset() {
uw.rows.Reset()
uw.callback = nil
uw.reqBuf = uw.reqBuf[:0]
}

// Unmarshal implements common.UnmarshalWork
func (uw *unmarshalWork) Unmarshal() {
uw.rows.Unmarshal(bytesutil.ToUnsafeString(uw.reqBuf))
rows := uw.rows.Rows
for i := range rows {
row := &rows[i]
rowsRead.Add(len(row.Timestamps))
}
if err := uw.callback(rows); err != nil {
logger.Errorf("error when processing imported data: %s", err)
putUnmarshalWork(uw)
return
}
putUnmarshalWork(uw)
}

func getUnmarshalWork() *unmarshalWork {
v := unmarshalWorkPool.Get()
if v == nil {
return &unmarshalWork{}
}
return v.(*unmarshalWork)
}

func putUnmarshalWork(uw *unmarshalWork) {
uw.reset()
unmarshalWorkPool.Put(uw)
}

var unmarshalWorkPool sync.Pool
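Note the updated contract on ParseStream above: the callback is now invoked from multiple concurrent unmarshal workers, so callers must synchronize any shared state. A minimal sketch of a concurrency-safe callback (the Row type is a hypothetical stand-in mirroring the vmimport layout of parallel Timestamps/Values slices):

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Row is a hypothetical mirror of the vmimport row layout: one metric with
// parallel Timestamps/Values slices.
type Row struct {
	Timestamps []int64
	Values     []float64
}

func main() {
	var total uint64
	var wg sync.WaitGroup

	// The callback may run on several unmarshal workers at once, so shared
	// state must be guarded with atomics or a mutex.
	callback := func(rows []Row) error {
		n := 0
		for i := range rows {
			n += len(rows[i].Timestamps)
		}
		atomic.AddUint64(&total, uint64(n))
		return nil
	}

	rowsBatches := [][]Row{
		{{Timestamps: []int64{1, 2}, Values: []float64{10, 20}}},
		{{Timestamps: []int64{3}, Values: []float64{30}}},
	}
	for _, rows := range rowsBatches {
		wg.Add(1)
		go func(rows []Row) {
			defer wg.Done()
			_ = callback(rows)
		}(rows)
	}
	wg.Wait()
	fmt.Println("samples ingested:", atomic.LoadUint64(&total))
}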
@ -2,9 +2,11 @@ package storage

import (
"fmt"
"math"
"sync"
"sync/atomic"

"github.com/VictoriaMetrics/VictoriaMetrics/lib/decimal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)

@ -86,21 +88,6 @@ func (b *Block) RowsCount() int {
return int(b.bh.RowsCount)
}

// Values returns b values.
func (b *Block) Values() []int64 {
return b.values
}

// Timestamps returns b timestamps.
func (b *Block) Timestamps() []int64 {
return b.timestamps
}

// Scale returns the decimal scale used for encoding values in the block.
func (b *Block) Scale() int16 {
return b.bh.Scale
}

// Init initializes b with the given tsid, timestamps, values and scale.
func (b *Block) Init(tsid *TSID, timestamps, values []int64, scale int16, precisionBits uint8) {
b.Reset()

@ -289,3 +276,123 @@ func (b *Block) UnmarshalData() error {

return nil
}

// AppendRowsWithTimeRangeFilter filters samples from b according to tr and appends them to dst*.
//
// It is expected that UnmarshalData has been already called on b.
func (b *Block) AppendRowsWithTimeRangeFilter(dstTimestamps []int64, dstValues []float64, tr TimeRange) ([]int64, []float64) {
timestamps, values := b.filterTimestamps(tr)
dstTimestamps = append(dstTimestamps, timestamps...)
dstValues = decimal.AppendDecimalToFloat(dstValues, values, b.bh.Scale)
return dstTimestamps, dstValues
}

func (b *Block) filterTimestamps(tr TimeRange) ([]int64, []int64) {
timestamps := b.timestamps

// Skip timestamps smaller than tr.MinTimestamp.
i := 0
for i < len(timestamps) && timestamps[i] < tr.MinTimestamp {
i++
}

// Skip timestamps bigger than tr.MaxTimestamp.
j := len(timestamps)
for j > i && timestamps[j-1] > tr.MaxTimestamp {
j--
}

if i == j {
return nil, nil
}
return timestamps[i:j], b.values[i:j]
}

// MarshalPortable marshals b to dst, so it could be portably migrated to other VictoriaMetrics instance.
//
// The marshaled value must be unmarshaled with UnmarshalPortable function.
func (b *Block) MarshalPortable(dst []byte) []byte {
b.MarshalData(0, 0)

dst = encoding.MarshalVarInt64(dst, b.bh.MinTimestamp)
dst = encoding.MarshalVarInt64(dst, b.bh.FirstValue)
dst = encoding.MarshalVarUint64(dst, uint64(b.bh.RowsCount))
dst = encoding.MarshalVarInt64(dst, int64(b.bh.Scale))
dst = append(dst, byte(b.bh.TimestampsMarshalType))
dst = append(dst, byte(b.bh.ValuesMarshalType))
dst = encoding.MarshalBytes(dst, b.timestampsData)
dst = encoding.MarshalBytes(dst, b.valuesData)

return dst
}

// UnmarshalPortable unmarshals block from src to b and returns the remaining tail.
//
// It is assumed that the block has been marshaled with MarshalPortable.
func (b *Block) UnmarshalPortable(src []byte) ([]byte, error) {
b.Reset()

// Read header
src, firstTimestamp, err := encoding.UnmarshalVarInt64(src)
if err != nil {
return src, fmt.Errorf("cannot unmarshal firstTimestamp: %w", err)
}
b.bh.MinTimestamp = firstTimestamp
src, firstValue, err := encoding.UnmarshalVarInt64(src)
if err != nil {
return src, fmt.Errorf("cannot unmarshal firstValue: %w", err)
}
b.bh.FirstValue = firstValue
src, rowsCount, err := encoding.UnmarshalVarUint64(src)
if err != nil {
return src, fmt.Errorf("cannot unmarshal rowsCount: %w", err)
}
if rowsCount > math.MaxUint32 {
return src, fmt.Errorf("got too big rowsCount=%d; it mustn't exceed %d", rowsCount, uint32(math.MaxUint32))
}
b.bh.RowsCount = uint32(rowsCount)
src, scale, err := encoding.UnmarshalVarInt64(src)
if err != nil {
return src, fmt.Errorf("cannot unmarshal scale: %w", err)
}
if scale < math.MinInt16 {
return src, fmt.Errorf("got too small scale=%d; it mustn't be smaller than %d", scale, math.MinInt16)
}
if scale > math.MaxInt16 {
return src, fmt.Errorf("got too big scale=%d; it mustn't exceed %d", scale, math.MaxInt16)
}
b.bh.Scale = int16(scale)
if len(src) < 1 {
return src, fmt.Errorf("cannot unmarshal marshalType for timestamps from %d bytes; need at least %d bytes", len(src), 1)
}
b.bh.TimestampsMarshalType = encoding.MarshalType(src[0])
src = src[1:]
if len(src) < 1 {
return src, fmt.Errorf("cannot unmarshal marshalType for values from %d bytes; need at least %d bytes", len(src), 1)
}
b.bh.ValuesMarshalType = encoding.MarshalType(src[0])
src = src[1:]
b.bh.PrecisionBits = 64

// Read data
src, timestampsData, err := encoding.UnmarshalBytes(src)
if err != nil {
return src, fmt.Errorf("cannot read timestampsData: %w", err)
}
b.timestampsData = append(b.timestampsData[:0], timestampsData...)
src, valuesData, err := encoding.UnmarshalBytes(src)
if err != nil {
return src, fmt.Errorf("cannot read valuesData: %w", err)
}
b.valuesData = append(b.valuesData[:0], valuesData...)

// Validate
if err := b.bh.validate(); err != nil {
return src, fmt.Errorf("invalid blockHeader: %w", err)
}
if err := b.UnmarshalData(); err != nil {
return src, fmt.Errorf("invalid data: %w", err)
}

return src, nil
}
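MarshalPortable writes a compact header (MinTimestamp, FirstValue, RowsCount, Scale as varints, then two marshal-type bytes) followed by two length-prefixed payloads. A runnable illustration of that layout using encoding/binary varints (Go 1.19+; the actual lib/encoding helpers may encode values slightly differently, so treat this as a schematic of the field order, not the exact wire format):

package main

import (
	"encoding/binary"
	"fmt"
)

// marshalPortableLayout mirrors the field order of MarshalPortable above:
// varint header fields, two marshal-type bytes, then length-prefixed
// timestamps and values payloads.
func marshalPortableLayout(dst []byte, minTimestamp, firstValue int64, rowsCount uint64, scale int64, tsType, vType byte, tsData, vData []byte) []byte {
	dst = binary.AppendVarint(dst, minTimestamp)
	dst = binary.AppendVarint(dst, firstValue)
	dst = binary.AppendUvarint(dst, rowsCount)
	dst = binary.AppendVarint(dst, scale)
	dst = append(dst, tsType, vType)
	dst = binary.AppendUvarint(dst, uint64(len(tsData)))
	dst = append(dst, tsData...)
	dst = binary.AppendUvarint(dst, uint64(len(vData)))
	dst = append(dst, vData...)
	return dst
}

func main() {
	buf := marshalPortableLayout(nil, 1600000000000, 42, 3, -2, 1, 2, []byte{0xaa}, []byte{0xbb})
	fmt.Printf("%d bytes: %x\n", len(buf), buf)
}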
116 lib/storage/block_test.go Normal file

@ -0,0 +1,116 @@
package storage

import (
"math/rand"
"reflect"
"strings"
"testing"
)

func TestBlockMarshalUnmarshalPortable(t *testing.T) {
var b Block
for i := 0; i < 1000; i++ {
b.Reset()
rowsCount := rand.Intn(maxRowsPerBlock) + 1
b.timestamps = getRandTimestamps(rowsCount)
b.values = getRandValues(rowsCount)
b.bh.Scale = int16(rand.Intn(30) - 15)
b.bh.PrecisionBits = 64
testBlockMarshalUnmarshalPortable(t, &b)
}
}

func testBlockMarshalUnmarshalPortable(t *testing.T, b *Block) {
var b1, b2 Block
b1.CopyFrom(b)
rowsCount := len(b.values)
data := b1.MarshalPortable(nil)
if b1.bh.RowsCount != uint32(rowsCount) {
t.Fatalf("unexpected number of rows marshaled; got %d; want %d", b1.bh.RowsCount, rowsCount)
}
tail, err := b2.UnmarshalPortable(data)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if len(tail) > 0 {
t.Fatalf("unexpected non-empty tail: %X", tail)
}
compareBlocksPortable(t, &b2, b, &b1.bh)

// Verify non-empty prefix and suffix
prefix := "prefix"
suffix := "suffix"
data = append(data[:0], prefix...)
data = b1.MarshalPortable(data)
if b1.bh.RowsCount != uint32(rowsCount) {
t.Fatalf("unexpected number of rows marshaled; got %d; want %d", b1.bh.RowsCount, rowsCount)
}
if !strings.HasPrefix(string(data), prefix) {
t.Fatalf("unexpected prefix in %X; want %X", data, prefix)
}
data = data[len(prefix):]
data = append(data, suffix...)
tail, err = b2.UnmarshalPortable(data)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if string(tail) != suffix {
t.Fatalf("unexpected tail; got %X; want %X", tail, suffix)
}
compareBlocksPortable(t, &b2, b, &b1.bh)
}

func compareBlocksPortable(t *testing.T, b1, b2 *Block, bhExpected *blockHeader) {
t.Helper()
if b1.bh.MinTimestamp != bhExpected.MinTimestamp {
t.Fatalf("unexpected MinTimestamp; got %d; want %d", b1.bh.MinTimestamp, bhExpected.MinTimestamp)
}
if b1.bh.FirstValue != bhExpected.FirstValue {
t.Fatalf("unexpected FirstValue; got %d; want %d", b1.bh.FirstValue, bhExpected.FirstValue)
}
if b1.bh.RowsCount != bhExpected.RowsCount {
t.Fatalf("unexpected RowsCount; got %d; want %d", b1.bh.RowsCount, bhExpected.RowsCount)
}
if b1.bh.Scale != bhExpected.Scale {
t.Fatalf("unexpected Scale; got %d; want %d", b1.bh.Scale, bhExpected.Scale)
}
if b1.bh.TimestampsMarshalType != bhExpected.TimestampsMarshalType {
t.Fatalf("unexpected TimestampsMarshalType; got %d; want %d", b1.bh.TimestampsMarshalType, bhExpected.TimestampsMarshalType)
}
if b1.bh.ValuesMarshalType != bhExpected.ValuesMarshalType {
t.Fatalf("unexpected ValuesMarshalType; got %d; want %d", b1.bh.ValuesMarshalType, bhExpected.ValuesMarshalType)
}
if b1.bh.PrecisionBits != bhExpected.PrecisionBits {
t.Fatalf("unexpected PrecisionBits; got %d; want %d", b1.bh.PrecisionBits, bhExpected.PrecisionBits)
}
if !reflect.DeepEqual(b1.values, b2.values) {
t.Fatalf("unexpected values; got %d; want %d", b1.values, b2.values)
}
if !reflect.DeepEqual(b1.timestamps, b2.timestamps) {
t.Fatalf("unexpected timestamps; got %d; want %d", b1.timestamps, b2.timestamps)
}
if len(b1.values) != int(bhExpected.RowsCount) {
t.Fatalf("unexpected number of values; got %d; want %d", len(b1.values), bhExpected.RowsCount)
}
if len(b1.timestamps) != int(bhExpected.RowsCount) {
t.Fatalf("unexpected number of timestamps; got %d; want %d", len(b1.timestamps), bhExpected.RowsCount)
}
}

func getRandValues(rowsCount int) []int64 {
a := make([]int64, rowsCount)
for i := 0; i < rowsCount; i++ {
a[i] = int64(rand.Intn(1e5) - 0.5e5)
}
return a
}

func getRandTimestamps(rowsCount int) []int64 {
a := make([]int64, rowsCount)
ts := int64(rand.Intn(1e9))
for i := 0; i < rowsCount; i++ {
a[i] = ts
ts += int64(rand.Intn(1e5))
}
return a
}
@ -1251,7 +1251,7 @@ func testPartSearchSerial(p *part, tsids []TSID, tr TimeRange, expectedRawBlocks
var bs []Block
for ps.NextBlock() {
var b Block
ps.BlockRef.MustReadBlock(&b)
ps.BlockRef.MustReadBlock(&b, true)
bs = append(bs, b)
}
if err := ps.Error(); err != nil {
@ -123,6 +123,9 @@ type partition struct {

smallAssistedMerges uint64

smallMergeNeedFreeDiskSpace uint64
bigMergeNeedFreeDiskSpace uint64

mergeIdx uint64

smallPartsPath string

@ -330,6 +333,9 @@ type partitionMetrics struct {
SmallPartsRefCount uint64

SmallAssistedMerges uint64

SmallMergeNeedFreeDiskSpace uint64
BigMergeNeedFreeDiskSpace uint64
}

// UpdateMetrics updates m with metrics from pt.

@ -388,6 +394,9 @@ func (pt *partition) UpdateMetrics(m *partitionMetrics) {
m.SmallRowsDeleted += atomic.LoadUint64(&pt.smallRowsDeleted)

m.SmallAssistedMerges += atomic.LoadUint64(&pt.smallAssistedMerges)

m.SmallMergeNeedFreeDiskSpace += atomic.LoadUint64(&pt.smallMergeNeedFreeDiskSpace)
m.BigMergeNeedFreeDiskSpace += atomic.LoadUint64(&pt.bigMergeNeedFreeDiskSpace)
}

// AddRows adds the given rows to the partition pt.

@ -990,9 +999,10 @@ func (pt *partition) mergeBigParts(isFinal bool) error {
maxRows := maxRowsByPath(pt.bigPartsPath)

pt.partsLock.Lock()
pws := getPartsToMerge(pt.bigParts, maxRows, isFinal)
pws, needFreeSpace := getPartsToMerge(pt.bigParts, maxRows, isFinal)
pt.partsLock.Unlock()

atomicSetBool(&pt.bigMergeNeedFreeDiskSpace, needFreeSpace)
return pt.mergeParts(pws, pt.stopCh)
}

@ -1008,14 +1018,23 @@ func (pt *partition) mergeSmallParts(isFinal bool) error {
}

pt.partsLock.Lock()
pws := getPartsToMerge(pt.smallParts, maxRows, isFinal)
pws, needFreeSpace := getPartsToMerge(pt.smallParts, maxRows, isFinal)
pt.partsLock.Unlock()

atomicSetBool(&pt.smallMergeNeedFreeDiskSpace, needFreeSpace)
return pt.mergeParts(pws, pt.stopCh)
}

var errNothingToMerge = fmt.Errorf("nothing to merge")

func atomicSetBool(p *uint64, b bool) {
v := uint64(0)
if b {
v = 1
}
atomic.StoreUint64(p, v)
}

// mergeParts merges pws.
//
// Merging is immediately stopped if stopCh is closed.

@ -1242,7 +1261,8 @@ func removeParts(pws []*partWrapper, partsToRemove map[*partWrapper]bool, isBig
// getPartsToMerge returns optimal parts to merge from pws.
//
// The returned rows will contain less than maxRows rows.
func getPartsToMerge(pws []*partWrapper, maxRows uint64, isFinal bool) []*partWrapper {
// The function returns true if pws contains parts, which cannot be merged because of maxRows limit.
func getPartsToMerge(pws []*partWrapper, maxRows uint64, isFinal bool) ([]*partWrapper, bool) {
pwsRemaining := make([]*partWrapper, 0, len(pws))
for _, pw := range pws {
if !pw.isInMerge {

@ -1251,13 +1271,14 @@ func getPartsToMerge(pws []*partWrapper, maxRows uint64, isFinal bool) []*partWr
}
maxPartsToMerge := defaultPartsToMerge
var pms []*partWrapper
needFreeSpace := false
if isFinal {
for len(pms) == 0 && maxPartsToMerge >= finalPartsToMerge {
pms = appendPartsToMerge(pms[:0], pwsRemaining, maxPartsToMerge, maxRows)
pms, needFreeSpace = appendPartsToMerge(pms[:0], pwsRemaining, maxPartsToMerge, maxRows)
maxPartsToMerge--
}
} else {
pms = appendPartsToMerge(pms[:0], pwsRemaining, maxPartsToMerge, maxRows)
pms, needFreeSpace = appendPartsToMerge(pms[:0], pwsRemaining, maxPartsToMerge, maxRows)
}
for _, pw := range pms {
if pw.isInMerge {

@ -1265,15 +1286,16 @@ func getPartsToMerge(pws []*partWrapper, maxRows uint64, isFinal bool) []*partWr
}
pw.isInMerge = true
}
return pms
return pms, needFreeSpace
}

// appendPartsToMerge finds optimal parts to merge from src, appends
// them to dst and returns the result.
func appendPartsToMerge(dst, src []*partWrapper, maxPartsToMerge int, maxRows uint64) []*partWrapper {
// The function returns true if src contains parts, which cannot be merged because of maxRows limit.
func appendPartsToMerge(dst, src []*partWrapper, maxPartsToMerge int, maxRows uint64) ([]*partWrapper, bool) {
if len(src) < 2 {
// There is no need in merging zero or one part :)
return dst
return dst, false
}
if maxPartsToMerge < 2 {
logger.Panicf("BUG: maxPartsToMerge cannot be smaller than 2; got %d", maxPartsToMerge)

@ -1281,10 +1303,12 @@ func appendPartsToMerge(dst, src []*partWrapper, maxPartsToMerge int, maxRows ui

// Filter out too big parts.
// This should reduce N for O(N^2) algorithm below.
needFreeSpace := false
maxInPartRows := maxRows / 2
tmp := make([]*partWrapper, 0, len(src))
for _, pw := range src {
if pw.p.ph.RowsCount > maxInPartRows {
needFreeSpace = true
continue
}
tmp = append(tmp, pw)

@ -1320,6 +1344,7 @@ func appendPartsToMerge(dst, src []*partWrapper, maxPartsToMerge int, maxRows ui
}
if rowsSum > maxRows {
// There is no need in verifying remaining parts with higher number of rows
needFreeSpace = true
break
}
m := float64(rowsSum) / float64(a[len(a)-1].p.ph.RowsCount)

@ -1337,9 +1362,9 @@ func appendPartsToMerge(dst, src []*partWrapper, maxPartsToMerge int, maxRows ui
}
if maxM < minM {
// There is no sense in merging parts with too small m.
return dst
return dst, needFreeSpace
}
return append(dst, pws...)
return append(dst, pws...), needFreeSpace
}

func openParts(pathPrefix1, pathPrefix2, path string) ([]*partWrapper, error) {
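The needFreeSpace flags stored via atomicSetBool back the new vm_merge_need_free_disk_space metric mentioned in the changelog: whenever parts are skipped because the maxRows limit (derived from available disk space) would be exceeded, the corresponding per-partition flag is raised. A sketch of how such an atomic flag can be exported as a gauge with github.com/VictoriaMetrics/metrics (the registration point shown here is hypothetical, not the actual wiring in VictoriaMetrics):

package main

import (
	"fmt"
	"sync/atomic"

	"github.com/VictoriaMetrics/metrics"
)

// Set with atomicSetBool-style atomic.StoreUint64 calls from the merge path.
var mergeNeedFreeDiskSpace uint64

func main() {
	// Expose the atomic flag as a gauge; the metric name comes from the
	// changelog, but the real registration point may differ.
	metrics.NewGauge(`vm_merge_need_free_disk_space`, func() float64 {
		return float64(atomic.LoadUint64(&mergeNeedFreeDiskSpace))
	})

	atomic.StoreUint64(&mergeNeedFreeDiskSpace, 1)
	fmt.Println("flag:", atomic.LoadUint64(&mergeNeedFreeDiskSpace))
}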
@ -243,7 +243,7 @@ func testPartitionSearchSerial(pt *partition, tsids []TSID, tr TimeRange, rbsExp
pts.Init(pt, tsids, tr)
for pts.NextBlock() {
var b Block
pts.BlockRef.MustReadBlock(&b)
pts.BlockRef.MustReadBlock(&b, true)
bs = append(bs, b)
}
if err := pts.Error(); err != nil {
@ -49,7 +49,7 @@ func TestAppendPartsToMergeManyParts(t *testing.T) {
iterationsCount := 0
rowsMerged := uint64(0)
for {
pms := appendPartsToMerge(nil, pws, defaultPartsToMerge, maxOutPartRows)
pms, _ := appendPartsToMerge(nil, pws, defaultPartsToMerge, maxOutPartRows)
if len(pms) == 0 {
break
}

@ -99,7 +99,7 @@ func testAppendPartsToMerge(t *testing.T, maxPartsToMerge int, initialRowsCount,
pws := newTestPartWrappersForRowsCount(initialRowsCount)

// Verify appending to nil.
pms := appendPartsToMerge(nil, pws, maxPartsToMerge, 1e9)
pms, _ := appendPartsToMerge(nil, pws, maxPartsToMerge, 1e9)
rowsCount := newTestRowsCountFromPartWrappers(pms)
if !reflect.DeepEqual(rowsCount, expectedRowsCount) {
t.Fatalf("unexpected rowsCount for maxPartsToMerge=%d, initialRowsCount=%d; got\n%d; want\n%d",

@ -118,7 +118,7 @@ func testAppendPartsToMerge(t *testing.T, maxPartsToMerge int, initialRowsCount,
{},
{},
}
pms = appendPartsToMerge(prefix, pws, maxPartsToMerge, 1e9)
pms, _ = appendPartsToMerge(prefix, pws, maxPartsToMerge, 1e9)
if !reflect.DeepEqual(pms[:len(prefix)], prefix) {
t.Fatalf("unexpected prefix for maxPartsToMerge=%d, initialRowsCount=%d; got\n%+v; want\n%+v",
maxPartsToMerge, initialRowsCount, pms[:len(prefix)], prefix)
@ -31,9 +31,14 @@ func (br *BlockRef) init(p *part, bh *blockHeader) {
}

// MustReadBlock reads block from br to dst.
func (br *BlockRef) MustReadBlock(dst *Block) {
//
// if fetchData is false, then only block header is read, otherwise all the data is read.
func (br *BlockRef) MustReadBlock(dst *Block, fetchData bool) {
dst.Reset()
dst.bh = br.bh
if !fetchData {
return
}

dst.timestampsData = bytesutil.Resize(dst.timestampsData[:0], int(br.bh.TimestampsBlockSize))
br.p.timestampsFile.MustReadAt(dst.timestampsData, int64(br.bh.TimestampsBlockOffset))
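The fetchData flag added to MustReadBlock above lets callers that only need block headers (for example the faster /api/v1/series path from the changelog) skip reading the timestamps and values payloads from disk. A self-contained sketch of the same header-only/full-read split, with hypothetical stand-in types rather than the storage API:

package main

import "fmt"

type header struct{ rowsCount int }

type blockRef struct {
	h       header
	payload []byte // stands in for the on-disk timestamps/values data
}

type block struct {
	h    header
	data []byte
}

// mustReadBlock copies the header always, but the payload only on request.
func (br *blockRef) mustReadBlock(dst *block, fetchData bool) {
	dst.h = br.h
	dst.data = dst.data[:0]
	if !fetchData {
		return // header-only read: no payload copy
	}
	dst.data = append(dst.data, br.payload...)
}

func main() {
	br := blockRef{h: header{rowsCount: 8192}, payload: make([]byte, 64*1024)}
	var b block
	br.mustReadBlock(&b, false) // cheap: enough for counting series/rows
	fmt.Println("rows:", b.h.rowsCount, "payload bytes:", len(b.data))
	br.mustReadBlock(&b, true) // full read when samples are actually needed
	fmt.Println("rows:", b.h.rowsCount, "payload bytes:", len(b.data))
}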
@ -222,7 +222,7 @@ func testSearchInternal(st *Storage, tr TimeRange, mrs []MetricRow, accountsCoun
var mbs []metricBlock
for s.NextMetricBlock() {
var b Block
s.MetricBlockRef.BlockRef.MustReadBlock(&b)
s.MetricBlockRef.BlockRef.MustReadBlock(&b, true)

var mb metricBlock
mb.MetricName = append(mb.MetricName, s.MetricBlockRef.MetricName...)
@ -254,7 +254,7 @@ func testTableSearchSerial(tb *table, tsids []TSID, tr TimeRange, rbsExpected []
ts.Init(tb, tsids, tr)
for ts.NextBlock() {
var b Block
ts.BlockRef.MustReadBlock(&b)
ts.BlockRef.MustReadBlock(&b, true)
bs = append(bs, b)
}
if err := ts.Error(); err != nil {
@ -26,7 +26,11 @@ func BenchmarkTableSearch(b *testing.B) {
b.Run(fmt.Sprintf("tsidsCount_%d", tsidsCount), func(b *testing.B) {
for _, tsidsSearch := range []int{1, 1e1, 1e2, 1e3, 1e4} {
b.Run(fmt.Sprintf("tsidsSearch_%d", tsidsSearch), func(b *testing.B) {
benchmarkTableSearch(b, rowsCount, tsidsCount, tsidsSearch)
for _, fetchData := range []bool{true, false} {
b.Run(fmt.Sprintf("fetchData_%v", fetchData), func(b *testing.B) {
benchmarkTableSearch(b, rowsCount, tsidsCount, tsidsSearch, fetchData)
})
}
})
}
})

@ -103,7 +107,7 @@ func createBenchTable(b *testing.B, path string, startTimestamp int64, rowsPerIn
tb.MustClose()
}

func benchmarkTableSearch(b *testing.B, rowsCount, tsidsCount, tsidsSearch int) {
func benchmarkTableSearch(b *testing.B, rowsCount, tsidsCount, tsidsSearch int, fetchData bool) {
startTimestamp := timestampFromTime(time.Now()) - 365*24*3600*1000
rowsPerInsert := getMaxRawRowsPerPartition()

@ -130,7 +134,7 @@ func benchmarkTableSearch(b *testing.B, rowsCount, tsidsCount, tsidsSearch int)
}
ts.Init(tb, tsids, tr)
for ts.NextBlock() {
ts.BlockRef.MustReadBlock(&tmpBlock)
ts.BlockRef.MustReadBlock(&tmpBlock, fetchData)
}
ts.MustClose()
}
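The benchmark above nests b.Run calls so a single run covers the whole rowsCount × tsidsSearch × fetchData matrix, with each dimension appended to the benchmark name. A standalone reduction of that pattern:

package bench

import (
	"fmt"
	"testing"
)

// Nested b.Run calls produce a benchmark matrix such as
// Benchmark/size_1000/fetchData_false in one invocation.
func Benchmark(b *testing.B) {
	for _, size := range []int{1000, 1000000} {
		b.Run(fmt.Sprintf("size_%d", size), func(b *testing.B) {
			for _, fetchData := range []bool{true, false} {
				b.Run(fmt.Sprintf("fetchData_%v", fetchData), func(b *testing.B) {
					for i := 0; i < b.N; i++ {
						work(size, fetchData)
					}
				})
			}
		})
	}
}

// work simulates a cheap header-only read vs. a full payload read.
func work(size int, fetchData bool) {
	buf := make([]byte, 16)
	if fetchData {
		buf = make([]byte, size)
	}
	_ = buf
}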
2 vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go generated vendored

@ -1205,6 +1205,7 @@ var awsPartition = partition{
"ca-central-1": endpoint{},
"eu-central-1": endpoint{},
"eu-north-1": endpoint{},
"eu-south-1": endpoint{},
"eu-west-1": endpoint{},
"eu-west-2": endpoint{},
"eu-west-3": endpoint{},

@ -3019,6 +3020,7 @@ var awsPartition = partition{
"groundstation": service{

Endpoints: endpoints{
"af-south-1": endpoint{},
"ap-southeast-2": endpoint{},
"eu-north-1": endpoint{},
"eu-west-1": endpoint{},
2 vendor/github.com/aws/aws-sdk-go/aws/version.go generated vendored

@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"

// SDKVersion is the version of this SDK
const SDKVersion = "1.34.29"
const SDKVersion = "1.34.33"
8 vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/pool.go generated vendored

@ -60,6 +60,14 @@ func (p *maxSlicePool) Get(ctx aws.Context) (*[]byte, error) {
return nil, errZeroCapacity
}
return bs, nil
case <-ctx.Done():
p.mtx.RUnlock()
return nil, ctx.Err()
default:
// pass
}

select {
case _, ok := <-p.allocations:
p.mtx.RUnlock()
if !ok {
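The vendored fix above adds a ctx.Done() case so the slice pool's Get fails fast once the request context is canceled, instead of blocking on the allocations channel. A standalone sketch of the same select-with-cancellation pattern over a generic resource channel (not the AWS SDK types):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// get returns a buffer from the pool, preferring cancellation over waiting.
func get(ctx context.Context, pool chan []byte) ([]byte, error) {
	// Fast path: bail out immediately if the context is already done.
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	default:
	}
	// Slow path: wait for a buffer or for cancellation, whichever comes first.
	select {
	case buf := <-pool:
		return buf, nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}

func main() {
	pool := make(chan []byte, 1) // empty pool: get must wait or fail
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()
	_, err := get(ctx, pool)
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}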
10 vendor/github.com/aws/aws-sdk-go/service/sts/api.go generated vendored

@ -207,6 +207,10 @@ func (c *STS) AssumeRoleRequest(input *AssumeRoleInput) (req *request.Request, o
// and Deactivating AWS STS in an AWS Region (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html)
// in the IAM User Guide.
//
// * ErrCodeExpiredTokenException "ExpiredTokenException"
// The web identity token that was passed is expired or is not valid. Get a
// new identity token from the identity provider and then retry the request.
//
// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRole
func (c *STS) AssumeRole(input *AssumeRoleInput) (*AssumeRoleOutput, error) {
req, out := c.AssumeRoleRequest(input)

@ -626,7 +630,7 @@ func (c *STS) AssumeRoleWithWebIdentityRequest(input *AssumeRoleWithWebIdentityI
// * Using Web Identity Federation API Operations for Mobile Apps (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc_manual.html)
// and Federation Through a Web-based Identity Provider (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_assumerolewithwebidentity).
//
// * Web Identity Federation Playground (https://web-identity-federation-playground.s3.amazonaws.com/index.html).
// * Web Identity Federation Playground (https://aws.amazon.com/blogs/aws/the-aws-web-identity-federation-playground/).
// Walk through the process of authenticating through Login with Amazon,
// Facebook, or Google, getting temporary security credentials, and then
// using those credentials to make a request to AWS.

@ -1788,7 +1792,7 @@ type AssumeRoleWithSAMLInput struct {
// in the IAM User Guide.
//
// SAMLAssertion is a required field
SAMLAssertion *string `min:"4" type:"string" required:"true" sensitive:"true"`
SAMLAssertion *string `min:"4" type:"string" required:"true"`
}

// String returns the string representation

@ -2100,7 +2104,7 @@ type AssumeRoleWithWebIdentityInput struct {
// the application makes an AssumeRoleWithWebIdentity call.
//
// WebIdentityToken is a required field
WebIdentityToken *string `min:"4" type:"string" required:"true" sensitive:"true"`
WebIdentityToken *string `min:"4" type:"string" required:"true"`
}

// String returns the string representation
86 vendor/github.com/aws/aws-sdk-go/service/sts/doc.go generated vendored

@ -3,87 +3,11 @@
// Package sts provides the client and types for making API
// requests to AWS Security Token Service.
//
// The AWS Security Token Service (STS) is a web service that enables you to
// request temporary, limited-privilege credentials for AWS Identity and Access
// Management (IAM) users or for users that you authenticate (federated users).
// This guide provides descriptions of the STS API. For more detailed information
// about using this service, go to Temporary Security Credentials (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html).
//
// For information about setting up signatures and authorization through the
// API, go to Signing AWS API Requests (https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html)
// in the AWS General Reference. For general information about the Query API,
// go to Making Query Requests (https://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UsingQueryAPI.html)
// in Using IAM. For information about using security tokens with other AWS
// products, go to AWS Services That Work with IAM (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html)
// in the IAM User Guide.
//
// If you're new to AWS and need additional technical information about a specific
// AWS product, you can find the product's technical documentation at http://aws.amazon.com/documentation/
// (http://aws.amazon.com/documentation/).
//
// Endpoints
//
// By default, AWS Security Token Service (STS) is available as a global service,
// and all AWS STS requests go to a single endpoint at https://sts.amazonaws.com.
// Global requests map to the US East (N. Virginia) region. AWS recommends using
// Regional AWS STS endpoints instead of the global endpoint to reduce latency,
// build in redundancy, and increase session token validity. For more information,
// see Managing AWS STS in an AWS Region (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html)
// in the IAM User Guide.
//
// Most AWS Regions are enabled for operations in all AWS services by default.
// Those Regions are automatically activated for use with AWS STS. Some Regions,
// such as Asia Pacific (Hong Kong), must be manually enabled. To learn more
// about enabling and disabling AWS Regions, see Managing AWS Regions (https://docs.aws.amazon.com/general/latest/gr/rande-manage.html)
// in the AWS General Reference. When you enable these AWS Regions, they are
// automatically activated for use with AWS STS. You cannot activate the STS
// endpoint for a Region that is disabled. Tokens that are valid in all AWS
// Regions are longer than tokens that are valid in Regions that are enabled
// by default. Changing this setting might affect existing systems where you
// temporarily store tokens. For more information, see Managing Global Endpoint
// Session Tokens (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html#sts-regions-manage-tokens)
// in the IAM User Guide.
//
// After you activate a Region for use with AWS STS, you can direct AWS STS
// API calls to that Region. AWS STS recommends that you provide both the Region
// and endpoint when you make calls to a Regional endpoint. You can provide
// the Region alone for manually enabled Regions, such as Asia Pacific (Hong
// Kong). In this case, the calls are directed to the STS Regional endpoint.
// However, if you provide the Region alone for Regions enabled by default,
// the calls are directed to the global endpoint of https://sts.amazonaws.com.
//
// To view the list of AWS STS endpoints and whether they are active by default,
// see Writing Code to Use AWS STS Regions (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html#id_credentials_temp_enable-regions_writing_code)
// in the IAM User Guide.
//
// Recording API requests
//
// STS supports AWS CloudTrail, which is a service that records AWS calls for
// your AWS account and delivers log files to an Amazon S3 bucket. By using
// information collected by CloudTrail, you can determine what requests were
// successfully made to STS, who made the request, when it was made, and so
// on.
//
// If you activate AWS STS endpoints in Regions other than the default global
// endpoint, then you must also turn on CloudTrail logging in those Regions.
// This is necessary to record any AWS STS API calls that are made in those
// Regions. For more information, see Turning On CloudTrail in Additional Regions
// (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/aggregating_logs_regions_turn_on_ct.html)
// in the AWS CloudTrail User Guide.
//
// AWS Security Token Service (STS) is a global service with a single endpoint
// at https://sts.amazonaws.com. Calls to this endpoint are logged as calls
// to a global service. However, because this endpoint is physically located
// in the US East (N. Virginia) Region, your logs list us-east-1 as the event
// Region. CloudTrail does not write these logs to the US East (Ohio) Region
// unless you choose to include global service logs in that Region. CloudTrail
// writes calls to all Regional endpoints to their respective Regions. For example,
// calls to sts.us-east-2.amazonaws.com are published to the US East (Ohio)
// Region and calls to sts.eu-central-1.amazonaws.com are published to the EU
// (Frankfurt) Region.
//
// To learn more about CloudTrail, including how to turn it on and find your
// log files, see the AWS CloudTrail User Guide (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/what_is_cloud_trail_top_level.html).
// AWS Security Token Service (STS) enables you to request temporary, limited-privilege
// credentials for AWS Identity and Access Management (IAM) users or for users
// that you authenticate (federated users). This guide provides descriptions
// of the STS API. For more information about using this service, see Temporary
// Security Credentials (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html).
//
// See https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15 for more information on this service.
//
1 vendor/golang.org/x/net/http2/server.go generated vendored

@ -1694,6 +1694,7 @@ func (sc *serverConn) processData(f *DataFrame) error {
if len(data) > 0 {
wrote, err := st.body.Write(data)
if err != nil {
sc.sendWindowUpdate(nil, int(f.Length)-wrote)
return streamError(id, ErrCodeStreamClosed)
}
if wrote != len(data) {
2 vendor/golang.org/x/sys/unix/sockcmsg_unix_other.go generated vendored

@ -20,7 +20,7 @@ func cmsgAlignOf(salen int) int {
case "aix":
// There is no alignment on AIX.
salign = 1
case "darwin", "illumos", "solaris":
case "darwin", "ios", "illumos", "solaris":
// NOTE: It seems like 64-bit Darwin, Illumos and Solaris
// kernels still require 32-bit aligned access to network
// subsystem.
17 vendor/golang.org/x/sys/unix/syscall_bsd.go generated vendored

@ -18,6 +18,21 @@ import (
"unsafe"
)

const ImplementsGetwd = true

func Getwd() (string, error) {
var buf [PathMax]byte
_, err := Getcwd(buf[0:])
if err != nil {
return "", err
}
n := clen(buf[:])
if n < 1 {
return "", EINVAL
}
return string(buf[:n]), nil
}

/*
 * Wrapped
 */

@ -272,7 +287,7 @@ func Accept(fd int) (nfd int, sa Sockaddr, err error) {
if err != nil {
return
}
if runtime.GOOS == "darwin" && len == 0 {
if (runtime.GOOS == "darwin" || runtime.GOOS == "ios") && len == 0 {
// Accepted socket has no address.
// This is likely due to a bug in xnu kernels,
// where instead of ECONNABORTED error socket
73 vendor/golang.org/x/sys/unix/syscall_darwin.go generated vendored

@ -13,29 +13,10 @@
package unix

import (
"errors"
"syscall"
"unsafe"
)

const ImplementsGetwd = true

func Getwd() (string, error) {
buf := make([]byte, 2048)
attrs, err := getAttrList(".", attrList{CommonAttr: attrCmnFullpath}, buf, 0)
if err == nil && len(attrs) == 1 && len(attrs[0]) >= 2 {
wd := string(attrs[0])
// Sanity check that it's an absolute path and ends
// in a null byte, which we then strip.
if wd[0] == '/' && wd[len(wd)-1] == 0 {
return wd[:len(wd)-1], nil
}
}
// If pkg/os/getwd.go gets ENOTSUP, it will fall back to the
// slow algorithm.
return "", ENOTSUP
}

// SockaddrDatalink implements the Sockaddr interface for AF_LINK type sockets.
type SockaddrDatalink struct {
Len uint8

@ -97,11 +78,6 @@ func direntNamlen(buf []byte) (uint64, bool) {
func PtraceAttach(pid int) (err error) { return ptrace(PT_ATTACH, pid, 0, 0) }
func PtraceDetach(pid int) (err error) { return ptrace(PT_DETACH, pid, 0, 0) }

const (
attrBitMapCount = 5
attrCmnFullpath = 0x08000000
)

type attrList struct {
bitmapCount uint16
_ uint16

@ -112,54 +88,6 @@ type attrList struct {
Forkattr uint32
}

func getAttrList(path string, attrList attrList, attrBuf []byte, options uint) (attrs [][]byte, err error) {
if len(attrBuf) < 4 {
return nil, errors.New("attrBuf too small")
}
attrList.bitmapCount = attrBitMapCount

var _p0 *byte
_p0, err = BytePtrFromString(path)
if err != nil {
return nil, err
}

if err := getattrlist(_p0, unsafe.Pointer(&attrList), unsafe.Pointer(&attrBuf[0]), uintptr(len(attrBuf)), int(options)); err != nil {
return nil, err
}
size := *(*uint32)(unsafe.Pointer(&attrBuf[0]))

// dat is the section of attrBuf that contains valid data,
// without the 4 byte length header. All attribute offsets
// are relative to dat.
dat := attrBuf
if int(size) < len(attrBuf) {
dat = dat[:size]
}
dat = dat[4:] // remove length prefix

for i := uint32(0); int(i) < len(dat); {
header := dat[i:]
if len(header) < 8 {
return attrs, errors.New("truncated attribute header")
}
datOff := *(*int32)(unsafe.Pointer(&header[0]))
attrLen := *(*uint32)(unsafe.Pointer(&header[4]))
if datOff < 0 || uint32(datOff)+attrLen > uint32(len(dat)) {
return attrs, errors.New("truncated results; attrBuf too small")
}
end := uint32(datOff) + attrLen
attrs = append(attrs, dat[datOff:end])
i = end
if r := i % 4; r != 0 {
i += (4 - r)
}
}
return
}

//sys getattrlist(path *byte, list unsafe.Pointer, buf unsafe.Pointer, size uintptr, options int) (err error)

//sysnb pipe() (r int, w int, err error)

func Pipe(p []int) (err error) {

@ -419,6 +347,7 @@ func Sendfile(outfd int, infd int, offset *int64, count int) (written int, err e
//sys Fpathconf(fd int, name int) (val int, err error)
//sys Fsync(fd int) (err error)
//sys Ftruncate(fd int, length int64) (err error)
//sys Getcwd(buf []byte) (n int, err error)
//sys Getdtablesize() (size int)
//sysnb Getegid() (egid int)
//sysnb Geteuid() (uid int)
15 vendor/golang.org/x/sys/unix/syscall_dragonfly.go generated vendored

@ -129,23 +129,8 @@ func Accept4(fd, flags int) (nfd int, sa Sockaddr, err error) {
return
}

const ImplementsGetwd = true

//sys Getcwd(buf []byte) (n int, err error) = SYS___GETCWD

func Getwd() (string, error) {
var buf [PathMax]byte
_, err := Getcwd(buf[0:])
if err != nil {
return "", err
}
n := clen(buf[:])
if n < 1 {
return "", EINVAL
}
return string(buf[:n]), nil
}

func Getfsstat(buf []Statfs_t, flags int) (n int, err error) {
var _p0 unsafe.Pointer
var bufsize uintptr
15 vendor/golang.org/x/sys/unix/syscall_freebsd.go generated vendored

@ -140,23 +140,8 @@ func Accept4(fd, flags int) (nfd int, sa Sockaddr, err error) {
return
}

const ImplementsGetwd = true

//sys Getcwd(buf []byte) (n int, err error) = SYS___GETCWD

func Getwd() (string, error) {
var buf [PathMax]byte
_, err := Getcwd(buf[0:])
if err != nil {
return "", err
}
n := clen(buf[:])
if n < 1 {
return "", EINVAL
}
return string(buf[:n]), nil
}

func Getfsstat(buf []Statfs_t, flags int) (n int, err error) {
var (
_p0 unsafe.Pointer
15 vendor/golang.org/x/sys/unix/syscall_netbsd.go generated vendored

@ -141,23 +141,8 @@ func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) {
return
}

const ImplementsGetwd = true

//sys Getcwd(buf []byte) (n int, err error) = SYS___GETCWD

func Getwd() (string, error) {
var buf [PathMax]byte
_, err := Getcwd(buf[0:])
if err != nil {
return "", err
}
n := clen(buf[:])
if n < 1 {
return "", EINVAL
}
return string(buf[:n]), nil
}

// TODO
func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) {
return -1, ENOSYS
15 vendor/golang.org/x/sys/unix/syscall_openbsd.go generated vendored

@ -114,23 +114,8 @@ func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) {
return
}

const ImplementsGetwd = true

//sys Getcwd(buf []byte) (n int, err error) = SYS___GETCWD

func Getwd() (string, error) {
var buf [PathMax]byte
_, err := Getcwd(buf[0:])
if err != nil {
return "", err
}
n := clen(buf[:])
if n < 1 {
return "", EINVAL
}
return string(buf[:n]), nil
}

func Sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) {
if raceenabled {
raceReleaseMerge(unsafe.Pointer(&ioSync))
37
vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go
generated
vendored
37
vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go
generated
vendored
|
@ -490,21 +490,6 @@ func libc_munlockall_trampoline()
|
|||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func getattrlist(path *byte, list unsafe.Pointer, buf unsafe.Pointer, size uintptr, options int) (err error) {
|
||||
_, _, e1 := syscall_syscall6(funcPC(libc_getattrlist_trampoline), uintptr(unsafe.Pointer(path)), uintptr(list), uintptr(buf), uintptr(size), uintptr(options), 0)
|
||||
if e1 != 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func libc_getattrlist_trampoline()
|
||||
|
||||
//go:linkname libc_getattrlist libc_getattrlist
|
||||
//go:cgo_import_dynamic libc_getattrlist getattrlist "/usr/lib/libSystem.B.dylib"
|
||||
|
||||
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
|
||||
|
||||
func pipe() (r int, w int, err error) {
|
||||
r0, r1, e1 := syscall_rawSyscall(funcPC(libc_pipe_trampoline), 0, 0, 0)
|
||||
r = int(r0)
|
||||
|
@@ -1277,6 +1262,28 @@ func libc_ftruncate_trampoline()

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func Getcwd(buf []byte) (n int, err error) {
	var _p0 unsafe.Pointer
	if len(buf) > 0 {
		_p0 = unsafe.Pointer(&buf[0])
	} else {
		_p0 = unsafe.Pointer(&_zero)
	}
	r0, _, e1 := syscall_syscall(funcPC(libc_getcwd_trampoline), uintptr(_p0), uintptr(len(buf)), 0)
	n = int(r0)
	if e1 != 0 {
		err = errnoErr(e1)
	}
	return
}

func libc_getcwd_trampoline()

//go:linkname libc_getcwd libc_getcwd
//go:cgo_import_dynamic libc_getcwd getcwd "/usr/lib/libSystem.B.dylib"

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func Getdtablesize() (size int) {
	r0, _, _ := syscall_syscall(funcPC(libc_getdtablesize_trampoline), 0, 0, 0)
	size = int(r0)
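One detail worth noting in the generated Getcwd above: when the caller passes an empty slice there is no element to take the address of, so the wrapper points the kernel at a package-level _zero word instead of passing nil. A compact sketch of that convention (the names mirror the generated file; bufPointer is a hypothetical helper added here for illustration):

package main

import (
	"fmt"
	"unsafe"
)

// _zero stands in for the x/sys/unix package variable of the same name.
var _zero uintptr

// bufPointer shows the len(buf) > 0 branch used by every generated
// wrapper that takes a byte slice: the kernel always receives a valid,
// non-nil pointer even for zero-length buffers.
func bufPointer(buf []byte) unsafe.Pointer {
	if len(buf) > 0 {
		return unsafe.Pointer(&buf[0])
	}
	return unsafe.Pointer(&_zero)
}

func main() {
	fmt.Println(bufPointer(nil) != nil)       // true: never a nil pointer
	fmt.Println(bufPointer([]byte{1}) != nil) // true: points at buf[0]
}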
4  vendor/golang.org/x/sys/unix/zsyscall_darwin_386.s  generated vendored
@@ -60,8 +60,6 @@ TEXT ·libc_munlock_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_munlock(SB)
TEXT ·libc_munlockall_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_munlockall(SB)
TEXT ·libc_getattrlist_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getattrlist(SB)
TEXT ·libc_pipe_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_pipe(SB)
TEXT ·libc_getxattr_trampoline(SB),NOSPLIT,$0-0
@@ -146,6 +144,8 @@ TEXT ·libc_fsync_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_fsync(SB)
TEXT ·libc_ftruncate_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_ftruncate(SB)
TEXT ·libc_getcwd_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getcwd(SB)
TEXT ·libc_getdtablesize_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getdtablesize(SB)
TEXT ·libc_getegid_trampoline(SB),NOSPLIT,$0-0
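Taken together, the .go and .s hunks above show the full darwin libc-call pattern: a bodiless Go trampoline, a go:linkname / go:cgo_import_dynamic pair that binds it to a symbol in libSystem, and a one-instruction assembly stub. Schematically, for the new Getcwd (this sketch only compiles alongside the matching assembly file, exactly as in the vendored package):

//go:cgo_import_dynamic libc_getcwd getcwd "/usr/lib/libSystem.B.dylib"
//go:linkname libc_getcwd libc_getcwd

// The trampoline has no Go body; zsyscall_darwin_386.s supplies
//	TEXT ·libc_getcwd_trampoline(SB),NOSPLIT,$0-0
//		JMP	libc_getcwd(SB)
// so funcPC(libc_getcwd_trampoline) resolves to a jump into libc.
func libc_getcwd_trampoline()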
37  vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go  generated vendored
@@ -490,21 +490,6 @@ func libc_munlockall_trampoline()

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func getattrlist(path *byte, list unsafe.Pointer, buf unsafe.Pointer, size uintptr, options int) (err error) {
	_, _, e1 := syscall_syscall6(funcPC(libc_getattrlist_trampoline), uintptr(unsafe.Pointer(path)), uintptr(list), uintptr(buf), uintptr(size), uintptr(options), 0)
	if e1 != 0 {
		err = errnoErr(e1)
	}
	return
}

func libc_getattrlist_trampoline()

//go:linkname libc_getattrlist libc_getattrlist
//go:cgo_import_dynamic libc_getattrlist getattrlist "/usr/lib/libSystem.B.dylib"

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func pipe() (r int, w int, err error) {
	r0, r1, e1 := syscall_rawSyscall(funcPC(libc_pipe_trampoline), 0, 0, 0)
	r = int(r0)
@@ -1277,6 +1262,28 @@ func libc_ftruncate_trampoline()

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func Getcwd(buf []byte) (n int, err error) {
	var _p0 unsafe.Pointer
	if len(buf) > 0 {
		_p0 = unsafe.Pointer(&buf[0])
	} else {
		_p0 = unsafe.Pointer(&_zero)
	}
	r0, _, e1 := syscall_syscall(funcPC(libc_getcwd_trampoline), uintptr(_p0), uintptr(len(buf)), 0)
	n = int(r0)
	if e1 != 0 {
		err = errnoErr(e1)
	}
	return
}

func libc_getcwd_trampoline()

//go:linkname libc_getcwd libc_getcwd
//go:cgo_import_dynamic libc_getcwd getcwd "/usr/lib/libSystem.B.dylib"

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func Getdtablesize() (size int) {
	r0, _, _ := syscall_syscall(funcPC(libc_getdtablesize_trampoline), 0, 0, 0)
	size = int(r0)
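Every wrapper above funnels its trampoline through funcPC before handing it to syscall_syscall. In this vintage of x/sys, funcPC recovers the code pointer from a func value by double-dereferencing its header. A sketch of that trick (funcPC is unexported in the package, so this is a re-creation for illustration):

package main

import (
	"fmt"
	"unsafe"
)

// funcPC re-creates the x/sys/unix helper: a func value is a pointer to a
// closure struct whose first word is the entry PC, so two dereferences
// yield the address the assembly trampoline lives at.
func funcPC(f func()) uintptr {
	return **(**uintptr)(unsafe.Pointer(&f))
}

func target() {}

func main() {
	fmt.Printf("entry PC of target: %#x\n", funcPC(target))
}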
4  vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.s  generated vendored
@@ -60,8 +60,6 @@ TEXT ·libc_munlock_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_munlock(SB)
TEXT ·libc_munlockall_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_munlockall(SB)
TEXT ·libc_getattrlist_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getattrlist(SB)
TEXT ·libc_pipe_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_pipe(SB)
TEXT ·libc_getxattr_trampoline(SB),NOSPLIT,$0-0
@@ -146,6 +144,8 @@ TEXT ·libc_fsync_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_fsync(SB)
TEXT ·libc_ftruncate_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_ftruncate(SB)
TEXT ·libc_getcwd_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getcwd(SB)
TEXT ·libc_getdtablesize_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getdtablesize(SB)
TEXT ·libc_getegid_trampoline(SB),NOSPLIT,$0-0
37  vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.go  generated vendored
@@ -490,21 +490,6 @@ func libc_munlockall_trampoline()

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func getattrlist(path *byte, list unsafe.Pointer, buf unsafe.Pointer, size uintptr, options int) (err error) {
	_, _, e1 := syscall_syscall6(funcPC(libc_getattrlist_trampoline), uintptr(unsafe.Pointer(path)), uintptr(list), uintptr(buf), uintptr(size), uintptr(options), 0)
	if e1 != 0 {
		err = errnoErr(e1)
	}
	return
}

func libc_getattrlist_trampoline()

//go:linkname libc_getattrlist libc_getattrlist
//go:cgo_import_dynamic libc_getattrlist getattrlist "/usr/lib/libSystem.B.dylib"

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func pipe() (r int, w int, err error) {
	r0, r1, e1 := syscall_rawSyscall(funcPC(libc_pipe_trampoline), 0, 0, 0)
	r = int(r0)
@@ -1277,6 +1262,28 @@ func libc_ftruncate_trampoline()

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func Getcwd(buf []byte) (n int, err error) {
	var _p0 unsafe.Pointer
	if len(buf) > 0 {
		_p0 = unsafe.Pointer(&buf[0])
	} else {
		_p0 = unsafe.Pointer(&_zero)
	}
	r0, _, e1 := syscall_syscall(funcPC(libc_getcwd_trampoline), uintptr(_p0), uintptr(len(buf)), 0)
	n = int(r0)
	if e1 != 0 {
		err = errnoErr(e1)
	}
	return
}

func libc_getcwd_trampoline()

//go:linkname libc_getcwd libc_getcwd
//go:cgo_import_dynamic libc_getcwd getcwd "/usr/lib/libSystem.B.dylib"

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func Getdtablesize() (size int) {
	r0, _, _ := syscall_syscall(funcPC(libc_getdtablesize_trampoline), 0, 0, 0)
	size = int(r0)
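The pipe stub repeated in each darwin file returns both descriptors in registers (r0, r1) and goes through syscall_rawSyscall, the variant that omits the runtime's entersyscall/exitsyscall bookkeeping because pipe cannot block. Exercising it through the public API, as a small usage sketch:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// unix.Pipe wraps the pipe() stub shown above.
	fds := make([]int, 2)
	if err := unix.Pipe(fds); err != nil {
		panic(err)
	}
	defer unix.Close(fds[0])
	defer unix.Close(fds[1])

	if _, err := unix.Write(fds[1], []byte("hi")); err != nil {
		panic(err)
	}
	buf := make([]byte, 2)
	n, err := unix.Read(fds[0], buf)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(buf[:n])) // prints "hi"
}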
4  vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.s  generated vendored
@@ -60,8 +60,6 @@ TEXT ·libc_munlock_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_munlock(SB)
TEXT ·libc_munlockall_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_munlockall(SB)
TEXT ·libc_getattrlist_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getattrlist(SB)
TEXT ·libc_pipe_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_pipe(SB)
TEXT ·libc_getxattr_trampoline(SB),NOSPLIT,$0-0
@@ -146,6 +144,8 @@ TEXT ·libc_fsync_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_fsync(SB)
TEXT ·libc_ftruncate_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_ftruncate(SB)
TEXT ·libc_getcwd_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getcwd(SB)
TEXT ·libc_getdtablesize_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getdtablesize(SB)
TEXT ·libc_getegid_trampoline(SB),NOSPLIT,$0-0
37  vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go  generated vendored
@@ -490,21 +490,6 @@ func libc_munlockall_trampoline()

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func getattrlist(path *byte, list unsafe.Pointer, buf unsafe.Pointer, size uintptr, options int) (err error) {
	_, _, e1 := syscall_syscall6(funcPC(libc_getattrlist_trampoline), uintptr(unsafe.Pointer(path)), uintptr(list), uintptr(buf), uintptr(size), uintptr(options), 0)
	if e1 != 0 {
		err = errnoErr(e1)
	}
	return
}

func libc_getattrlist_trampoline()

//go:linkname libc_getattrlist libc_getattrlist
//go:cgo_import_dynamic libc_getattrlist getattrlist "/usr/lib/libSystem.B.dylib"

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func pipe() (r int, w int, err error) {
	r0, r1, e1 := syscall_rawSyscall(funcPC(libc_pipe_trampoline), 0, 0, 0)
	r = int(r0)
@@ -1277,6 +1262,28 @@ func libc_ftruncate_trampoline()

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func Getcwd(buf []byte) (n int, err error) {
	var _p0 unsafe.Pointer
	if len(buf) > 0 {
		_p0 = unsafe.Pointer(&buf[0])
	} else {
		_p0 = unsafe.Pointer(&_zero)
	}
	r0, _, e1 := syscall_syscall(funcPC(libc_getcwd_trampoline), uintptr(_p0), uintptr(len(buf)), 0)
	n = int(r0)
	if e1 != 0 {
		err = errnoErr(e1)
	}
	return
}

func libc_getcwd_trampoline()

//go:linkname libc_getcwd libc_getcwd
//go:cgo_import_dynamic libc_getcwd getcwd "/usr/lib/libSystem.B.dylib"

// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT

func Getdtablesize() (size int) {
	r0, _, _ := syscall_syscall(funcPC(libc_getdtablesize_trampoline), 0, 0, 0)
	size = int(r0)
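Getdtablesize is the simplest wrapper in these hunks: no arguments, and the errno slot is discarded outright. Calling it is a one-liner (usage sketch; the function is exported on darwin and the BSDs):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Reports the size of the per-process descriptor table, which on
	// these platforms has historically tracked the RLIMIT_NOFILE limit.
	fmt.Println(unix.Getdtablesize())
}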
4  vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.s  generated vendored
@@ -60,8 +60,6 @@ TEXT ·libc_munlock_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_munlock(SB)
TEXT ·libc_munlockall_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_munlockall(SB)
TEXT ·libc_getattrlist_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getattrlist(SB)
TEXT ·libc_pipe_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_pipe(SB)
TEXT ·libc_getxattr_trampoline(SB),NOSPLIT,$0-0
@@ -146,6 +144,8 @@ TEXT ·libc_fsync_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_fsync(SB)
TEXT ·libc_ftruncate_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_ftruncate(SB)
TEXT ·libc_getcwd_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getcwd(SB)
TEXT ·libc_getdtablesize_trampoline(SB),NOSPLIT,$0-0
	JMP	libc_getdtablesize(SB)
TEXT ·libc_getegid_trampoline(SB),NOSPLIT,$0-0
4  vendor/golang.org/x/sys/unix/ztypes_darwin_386.go  generated vendored
@@ -145,6 +145,10 @@ type Dirent struct {
	_ [3]byte
}

const (
	PathMax = 0x400
)

type RawSockaddrInet4 struct {
	Len    uint8
	Family uint8
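PathMax (0x400, i.e. 1024 bytes) is the constant each darwin types file gains here, and it is exactly the buffer size the Getwd wrappers allocate. Checking and reusing it from user code (sketch; compiles where the unix package defines PathMax, e.g. on darwin):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	fmt.Printf("PathMax = %d bytes\n", unix.PathMax) // 1024 on darwin
	buf := make([]byte, unix.PathMax)                // the same sizing Getwd uses
	_ = buf
}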
4  vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go  generated vendored
@@ -151,6 +151,10 @@ type Dirent struct {
	_ [3]byte
}

const (
	PathMax = 0x400
)

type RawSockaddrInet4 struct {
	Len    uint8
	Family uint8
4  vendor/golang.org/x/sys/unix/ztypes_darwin_arm.go  generated vendored
@@ -146,6 +146,10 @@ type Dirent struct {
	_ [3]byte
}

const (
	PathMax = 0x400
)

type RawSockaddrInet4 struct {
	Len    uint8
	Family uint8
4  vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go  generated vendored
@@ -151,6 +151,10 @@ type Dirent struct {
	_ [3]byte
}

const (
	PathMax = 0x400
)

type RawSockaddrInet4 struct {
	Len    uint8
	Family uint8
10  vendor/modules.txt  vendored
@@ -19,7 +19,7 @@ github.com/VictoriaMetrics/metrics
# github.com/VictoriaMetrics/metricsql v0.6.0
github.com/VictoriaMetrics/metricsql
github.com/VictoriaMetrics/metricsql/binaryop
# github.com/aws/aws-sdk-go v1.34.29
# github.com/aws/aws-sdk-go v1.34.33
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/arn
github.com/aws/aws-sdk-go/aws/awserr
@@ -180,7 +180,7 @@ golang.org/x/lint/golint
# golang.org/x/mod v0.3.0
golang.org/x/mod/module
golang.org/x/mod/semver
# golang.org/x/net v0.0.0-20200904194848-62affa334b73
# golang.org/x/net v0.0.0-20200927032502-5d4f70055728
golang.org/x/net/context
golang.org/x/net/context/ctxhttp
golang.org/x/net/http/httpguts
@@ -197,7 +197,7 @@ golang.org/x/oauth2/jws
golang.org/x/oauth2/jwt
# golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208
golang.org/x/sync/errgroup
# golang.org/x/sys v0.0.0-20200922070232-aee5d888a860
# golang.org/x/sys v0.0.0-20200926100807-9d91bd62050c
golang.org/x/sys/internal/unsafeheader
golang.org/x/sys/unix
golang.org/x/sys/windows
@@ -206,7 +206,7 @@ golang.org/x/text/secure/bidirule
golang.org/x/text/transform
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
# golang.org/x/tools v0.0.0-20200923053713-ba800b16d873
# golang.org/x/tools v0.0.0-20200928182047-19e03678916f
golang.org/x/tools/cmd/goimports
golang.org/x/tools/go/ast/astutil
golang.org/x/tools/go/gcexportdata
@@ -247,7 +247,7 @@ google.golang.org/appengine/internal/modules
google.golang.org/appengine/internal/remote_api
google.golang.org/appengine/internal/urlfetch
google.golang.org/appengine/urlfetch
# google.golang.org/genproto v0.0.0-20200921165018-b9da36f5f452
# google.golang.org/genproto v0.0.0-20200925023002-c2d885f95484
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/genproto/googleapis/iam/v1
google.golang.org/genproto/googleapis/rpc/code
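The modules.txt hunks record five routine dependency bumps (aws-sdk-go, x/net, x/sys, x/tools, genproto). A vendored module tree like this one is typically refreshed with commands of this shape, shown here with two of the versions taken from the hunks above:

	go get github.com/aws/aws-sdk-go@v1.34.33
	go get golang.org/x/sys@v0.0.0-20200926100807-9d91bd62050c
	go mod vendor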