Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

Aliaksandr Valialkin 2022-03-03 19:31:14 +02:00
commit 90a1502335
No known key found for this signature in database
GPG key ID: A72BEC6CD3D0DED1
57 changed files with 7444 additions and 3679 deletions


@@ -102,7 +102,7 @@ Just download [VictoriaMetrics executable](https://github.com/VictoriaMetrics/Vi
The following command-line flags are used the most:

* `-storageDataPath` - VictoriaMetrics stores all the data in this directory. Default path is `victoria-metrics-data` in the current working directory.
-* `-retentionPeriod` - retention for stored data. Older data is automatically deleted. Default retention is 1 month. See [these docs](#retention) for more details.
+* `-retentionPeriod` - retention for stored data. Older data is automatically deleted. Default retention is 1 month. See [the Retention section](#retention) for more details.

Other flags have good enough default values, so set them only if you really need this. Pass `-help` to see [all the available flags with description and default values](#list-of-command-line-flags).
@@ -744,7 +744,7 @@ The delete API is intended mainly for the following cases:
* One-off deleting of accidentally written invalid (or undesired) time series.
* One-off deleting of user data due to [GDPR](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation).

-It isn't recommended using delete API for the following cases, since it brings non-zero overhead:
+Using the delete API is not recommended in the following cases, since it brings a non-zero overhead:

* Regular cleanups for unneeded data. Just prevent writing unneeded data into VictoriaMetrics.
This can be done with [relabeling](#relabeling).

@@ -753,7 +753,7 @@ It isn't recommended using delete API for the following cases, since it brings n
time series occupy disk space until the next merge operation, which can never occur when deleting too old data.
[Forced merge](#forced-merge) may be used for freeing up disk space occupied by old data.
-It is better using `-retentionPeriod` command-line flag for efficient pruning of old data.
+It's better to use the `-retentionPeriod` command-line flag for efficient pruning of old data.

## Forced merge
@@ -1147,7 +1147,7 @@ write data to the same VictoriaMetrics instance. These vmagent or Prometheus ins
VictoriaMetrics stores time series data in [MergeTree](https://en.wikipedia.org/wiki/Log-structured_merge-tree)-like
data structures. On insert, VictoriaMetrics accumulates up to 1s of data and dumps it on disk to
`<-storageDataPath>/data/small/YYYY_MM/` subdirectory forming a `part` with the following
-name pattern `rowsCount_blocksCount_minTimestamp_maxTimestamp`. Each part consists of two "columns":
+name pattern: `rowsCount_blocksCount_minTimestamp_maxTimestamp`. Each part consists of two "columns":
values and timestamps. These are sorted and compressed raw time series values. Additionally, part contains
index files for searching for specific series in the values and timestamps files.
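To make the part-naming pattern concrete, a hypothetical sketch follows; `partName` and the sample numbers are illustrative inventions, not code from the repository:

```go
package main

import "fmt"

// partName renders the documented pattern
// rowsCount_blocksCount_minTimestamp_maxTimestamp.
// Illustrative only; not the repository's actual implementation.
func partName(rowsCount, blocksCount uint64, minTimestamp, maxTimestamp int64) string {
	return fmt.Sprintf("%d_%d_%d_%d", rowsCount, blocksCount, minTimestamp, maxTimestamp)
}

func main() {
	// A hypothetical part with 1234 rows in 5 blocks covering about one hour
	// of millisecond timestamps.
	fmt.Println(partName(1234, 5, 1646300000000, 1646303600000))
	// Output: 1234_5_1646300000000_1646303600000
}
```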
@@ -1177,24 +1177,24 @@ See also [how to work with snapshots](#how-to-work-with-snapshots).
## Retention

-Retention is configured with `-retentionPeriod` command-line flag. For instance, `-retentionPeriod=3` means
-that the data will be stored for 3 months and then deleted.
+Retention is configured with the `-retentionPeriod` command-line flag, which takes a number followed by a time unit character - `h(ours)`, `d(ays)`, `w(eeks)`, `y(ears)`. If the time unit is not specified, a month is assumed. For instance, `-retentionPeriod=3` means that the data will be stored for 3 months and then deleted. The default retention period is one month.

Data is split in per-month partitions inside `<-storageDataPath>/data/{small,big}` folders.
-Data partitions outside the configured retention are deleted on the first day of new month.
+Data partitions outside the configured retention are deleted on the first day of the new month.
Each partition consists of one or more data parts with the following name pattern `rowsCount_blocksCount_minTimestamp_maxTimestamp`.
Data parts outside of the configured retention are eventually deleted during
[background merge](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).

-In order to keep data according to `-retentionPeriod` max disk space usage is going to be `-retentionPeriod` + 1 month.
+The maximum disk space usage for a given `-retentionPeriod` is going to be (`-retentionPeriod` + 1) months.
-For example if `-retentionPeriod` is set to 1, data for January is deleted on March 1st.
+For example, if `-retentionPeriod` is set to 1, data for January is deleted on March 1st.

+VictoriaMetrics supports retention smaller than 1 month. For example, `-retentionPeriod=5d` would set data retention for 5 days.
-Please note, time range covered by data part is not limited by retention period unit. Hence, data part may contain data
+Please note, the time range covered by data part is not limited by retention period unit. Hence, data part may contain data
for multiple days and will be deleted only when fully outside of the configured retention.

-It is safe to extend `-retentionPeriod` on existing data. If `-retentionPeriod` is set to lower
-value than before then data outside the configured period will be eventually deleted.
+It is safe to extend `-retentionPeriod` on existing data. If `-retentionPeriod` is set to a lower
+value than before, then data outside the configured period will be eventually deleted.

+VictoriaMetrics does not support indefinite retention, but you can specify an arbitrarily high duration, e.g. `-retentionPeriod=100y`.

## Multiple retentions
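A small sketch of the retention semantics described in the new text above; this is not the actual VictoriaMetrics flag parser, it only illustrates the documented rules (a bare number means months, `h`/`d`/`w`/`y` suffixes mean hours, days, weeks and years; months are approximated as 31 days here):

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseRetention mirrors the documented -retentionPeriod rules.
// Illustrative only; not the actual VictoriaMetrics implementation.
func parseRetention(s string) (time.Duration, error) {
	if s == "" {
		return 0, fmt.Errorf("empty retention value")
	}
	day := 24 * time.Hour
	units := map[byte]time.Duration{'h': time.Hour, 'd': day, 'w': 7 * day, 'y': 365 * day}
	if unit, ok := units[s[len(s)-1]]; ok {
		n, err := strconv.Atoi(s[:len(s)-1])
		if err != nil {
			return 0, err
		}
		return time.Duration(n) * unit, nil
	}
	// No unit character: the value is a number of months (approximated as 31 days).
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, err
	}
	return time.Duration(n) * 31 * day, nil
}

func main() {
	for _, s := range []string{"3", "5d", "100y"} {
		d, _ := parseRetention(s)
		fmt.Printf("%s -> %v\n", s, d)
	}
}
```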


@@ -264,7 +264,7 @@ Labels can be added to metrics by the following mechanisms:
## Relabeling

-`vmagent` and VictoriaMetrics support Prometheus-compatible relabeling.
+VictoriaMetrics components (including `vmagent`) support Prometheus-compatible relabeling.
They provide the following additional actions on top of actions from the [Prometheus relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):

* `replace_all`: replaces all of the occurences of `regex` in the values of `source_labels` with the `replacement` and stores the results in the `target_label`.

@@ -289,6 +289,21 @@ The `regex` value can be split into multiple lines for improved readability and
- "foo_.+"
```
+
+VictoriaMetrics components support an optional `if` filter, which can be used for conditional relabeling. The `if` filter may contain arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). For example, the following relabeling rule drops targets, which don't match `foo{bar="baz"}` series selector:
+
+```yaml
+- action: keep
+  if: 'foo{bar="baz"}'
+```
+
+This is equivalent to less clear traditional relabeling rule:
+
+```yaml
+- action: keep
+  source_labels: [__name__, bar]
+  regex: 'foo;baz'
+```

The relabeling can be defined in the following places:

* At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels. This relabeling can be debugged by passing `relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs target labels before and after the relabeling and then drops the logged target.


@@ -382,7 +382,7 @@ func getCommonLabelFilters(tss []*timeseries) []metricsql.LabelFilter {
			continue
		}
		values = getUniqueValues(values)
-		if len(values) > 10000 {
+		if len(values) > 1000 {
			// Skip the filter on the given tag, since it needs to enumerate too many unique values.
			// This may slow down the search for matching time series.
			continue
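The hunk above lowers the per-tag unique-value threshold from 10000 to 1000. A standalone sketch of the guard with simplified stand-in types; the map-based shape is illustrative, not the actual `getCommonLabelFilters` implementation:

```go
package main

import "fmt"

const maxUniqueTagValues = 1000 // the new threshold from the hunk above

// commonFilters keeps a tag filter only when its unique-value set is small
// enough to enumerate cheaply; otherwise the filter is skipped, since a huge
// value list may slow down the search for matching time series.
func commonFilters(tagValues map[string][]string) map[string][]string {
	kept := make(map[string][]string)
	for tag, values := range tagValues {
		if len(values) == 0 || len(values) > maxUniqueTagValues {
			continue // too many (or no) unique values; skip the filter on this tag
		}
		kept[tag] = values
	}
	return kept
}

func main() {
	fmt.Println(commonFilters(map[string][]string{"job": {"node", "vmagent"}}))
}
```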


@@ -76,7 +76,7 @@ var checkRollupResultCacheResetOnce sync.Once
var rollupResultResetMetricRowSample atomic.Value

var rollupResultCacheV = &rollupResultCache{
-	c: workingsetcache.New(1024*1024, time.Hour), // This is a cache for testing.
+	c: workingsetcache.New(1024 * 1024), // This is a cache for testing.
}

var rollupResultCachePath string

@@ -104,9 +104,9 @@ func InitRollupResultCache(cachePath string) {
	var c *workingsetcache.Cache
	if len(rollupResultCachePath) > 0 {
		logger.Infof("loading rollupResult cache from %q...", rollupResultCachePath)
-		c = workingsetcache.Load(rollupResultCachePath, cacheSize, time.Hour)
+		c = workingsetcache.Load(rollupResultCachePath, cacheSize)
	} else {
-		c = workingsetcache.New(cacheSize, time.Hour)
+		c = workingsetcache.New(cacheSize)
	}
	if *disableCache {
		c.Reset()
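Both call sites drop the `time.Hour` expiry argument, so cache lifetime is presumably managed inside the `workingsetcache` package now. A minimal sketch of the updated call pattern, using only the calls visible in this hunk; the import path and the size constant are assumptions:

```go
package main

import (
	// Assumed import path; only New, Load and Reset are taken from the hunk above.
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/workingsetcache"
)

const cacheSize = 64 * 1024 * 1024 // illustrative size in bytes

func initCache(path string, disable bool) *workingsetcache.Cache {
	var c *workingsetcache.Cache
	if len(path) > 0 {
		c = workingsetcache.Load(path, cacheSize) // restore cache contents from disk
	} else {
		c = workingsetcache.New(cacheSize) // start with an empty cache
	}
	if disable {
		c.Reset() // caching disabled: drop whatever Load brought in
	}
	return c
}

func main() { _ = initCache("", true) }
```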


@@ -1,12 +1,12 @@
{
  "files": {
    "main.css": "./static/css/main.098d452b.css",
-    "main.js": "./static/js/main.2a9382c2.js",
+    "main.js": "./static/js/main.a292ed17.js",
    "static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js",
    "index.html": "./index.html"
  },
  "entrypoints": [
    "static/css/main.098d452b.css",
-    "static/js/main.2a9382c2.js"
+    "static/js/main.a292ed17.js"
  ]
}


@@ -1 +1 @@
-<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script defer="defer" src="./static/js/main.2a9382c2.js"></script><link href="./static/css/main.098d452b.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
+<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script defer="defer" src="./static/js/main.a292ed17.js"></script><link href="./static/css/main.098d452b.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -6,7 +6,7 @@
 * @license MIT
 */
-/** @license MUI v5.4.2
+/** @license MUI v5.4.4
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.

File diff suppressed because it is too large


@@ -13,11 +13,11 @@
    "@testing-library/jest-dom": "^5.16.2",
    "@testing-library/react": "^12.1.3",
    "@testing-library/user-event": "^13.5.0",
-    "@types/jest": "^27.4.0",
+    "@types/jest": "^27.4.1",
    "@types/lodash.debounce": "^4.0.6",
    "@types/lodash.get": "^4.4.6",
    "@types/lodash.throttle": "^4.1.6",
-    "@types/node": "^17.0.19",
+    "@types/node": "^17.0.21",
    "@types/qs": "^6.9.7",
    "@types/react": "^17.0.39",
    "@types/react-dom": "^17.0.11",

@@ -63,7 +63,7 @@
    "@typescript-eslint/eslint-plugin": "^5.12.1",
    "@typescript-eslint/parser": "^5.12.1",
    "customize-cra": "^1.0.0",
-    "eslint-plugin-react": "^7.28.0",
+    "eslint-plugin-react": "^7.29.2",
    "react-app-rewired": "^2.2.1"
  }
}


@@ -15,6 +15,36 @@ The following tip changes can be tested by building VictoriaMetrics components f
## tip

+## [v1.74.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.74.0)
+
+Released at 03-03-2022
+
+* FEATURE: add support for conditional relabeling via `if` filter. The `if` filter can contain arbitrary [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). For example, the following rule drops targets matching `foo{bar="baz"}` series selector:
+  ```yml
+  - action: drop
+    if: 'foo{bar="baz"}'
+  ```
+  This rule is equivalent to less clear traditional one:
+  ```yml
+  - action: drop
+    source_labels: [__name__, bar]
+    regex: 'foo;baz'
+  ```
+  See [relabeling docs](https://docs.victoriametrics.com/vmagent.html#relabeling) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1998) for more details.
+* FEATURE: reduce memory usage for various caches under [high churn rate](https://docs.victoriametrics.com/FAQ.html#what-is-high-churn-rate).
+* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): re-use Kafka client when pushing data from [many tenants](https://docs.victoriametrics.com/vmagent.html#multitenancy) to Kafka. Previously a separate Kafka client was created per each tenant. This could lead to increased load on Kafka. See [how to push data from vmagent to Kafka](https://docs.victoriametrics.com/vmagent.html#writing-metrics-to-kafka).
+* FEATURE: improve performance when registering new time series. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2247). Thanks to @ahfuzhang .
+* BUGFIX: return the proper number of datapoints from `moving*()` functions such as `movingAverage()` in [Graphite Render API](https://docs.victoriametrics.com/#graphite-render-api-usage). Previously these functions could return too big number of samples if [maxDataPoints query arg](https://graphite.readthedocs.io/en/stable/render_api.html#maxdatapoints) is explicitly passed to `/render` API.
+* BUGFIX: properly handle [series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors) containing a filter for multiple metric names plus a negative filter. For example, `{__name__=~"foo|bar",job!="baz"}` . Previously VictoriaMetrics could return series with `foo` or `bar` names and with `job="baz"`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2238).
+* BUGFIX: [vmgateway](https://docs.victoriametrics.com/vmgateway.html): properly parse JWT tokens if they are encoded with [URL-safe base64 encoding](https://datatracker.ietf.org/doc/html/rfc4648#section-5).
+
## [v1.73.1](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.73.1)

Released at 22-02-2022

@@ -32,6 +62,7 @@ Released at 22-02-2022
* BUGFIX: vmalert: add support for `$externalLabels` and `$externalURL` template vars in the same way as Prometheus does. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2193).
* BUGFIX: vmalert: make sure notifiers are discovered during initialization if they are configured via `consul_sd_configs`. Previously they could be discovered in 30 seconds (the default value for `-promscrape.consulSDCheckInterval` command-line flag) after the initialization. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2202).
* BUGFIX: update default value for `-promscrape.fileSDCheckInterval`, so it matches default duration used by Prometheus for checking for updates in `file_sd_configs`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2187). Thanks to @corporate-gadfly for the fix.
+* BUGFIX: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): do not return partial responses from `vmselect` if at least a single `vmstorage` node was reachable and returned an app-level error. Such errors are usually related to cluster mis-configuration, so they must be returned to the caller instead of being masked by [partial responses](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#cluster-availability). Partial responses can be returned only if some of `vmstorage` nodes are unreachable during the query. This may help the following issues: [one](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1941), [two](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/678).

## [v1.73.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.73.0)
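The last BUGFIX entry in the tip section above concerns JWT segments encoded with URL-safe base64 (RFC 4648 section 5). A small sketch of the distinction between the standard and URL-safe alphabets; the payload bytes are arbitrary:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Bytes whose base64 encoding contains characters from the URL-safe
	// alphabet ('-' and '_') instead of '+' and '/'.
	payload := []byte{0xfb, 0xef, 0xbe}

	seg := base64.RawURLEncoding.EncodeToString(payload) // "----"
	if _, err := base64.StdEncoding.DecodeString(seg); err != nil {
		// The standard alphabet rejects '-', which is the class of failure
		// the vmgateway fix above addresses.
		fmt.Println("std decode failed:", err)
	}
	b, err := base64.RawURLEncoding.DecodeString(seg)
	fmt.Println(b, err) // [251 239 190] <nil>
}
```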


@@ -102,7 +102,7 @@ Just download [VictoriaMetrics executable](https://github.com/VictoriaMetrics/Vi
The following command-line flags are used the most:

* `-storageDataPath` - VictoriaMetrics stores all the data in this directory. Default path is `victoria-metrics-data` in the current working directory.
-* `-retentionPeriod` - retention for stored data. Older data is automatically deleted. Default retention is 1 month. See [these docs](#retention) for more details.
+* `-retentionPeriod` - retention for stored data. Older data is automatically deleted. Default retention is 1 month. See [the Retention section](#retention) for more details.

Other flags have good enough default values, so set them only if you really need this. Pass `-help` to see [all the available flags with description and default values](#list-of-command-line-flags).
@@ -744,7 +744,7 @@ The delete API is intended mainly for the following cases:
* One-off deleting of accidentally written invalid (or undesired) time series.
* One-off deleting of user data due to [GDPR](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation).

-It isn't recommended using delete API for the following cases, since it brings non-zero overhead:
+Using the delete API is not recommended in the following cases, since it brings a non-zero overhead:

* Regular cleanups for unneeded data. Just prevent writing unneeded data into VictoriaMetrics.
This can be done with [relabeling](#relabeling).

@@ -753,7 +753,7 @@ It isn't recommended using delete API for the following cases, since it brings n
time series occupy disk space until the next merge operation, which can never occur when deleting too old data.
[Forced merge](#forced-merge) may be used for freeing up disk space occupied by old data.
-It is better using `-retentionPeriod` command-line flag for efficient pruning of old data.
+It's better to use the `-retentionPeriod` command-line flag for efficient pruning of old data.

## Forced merge
@@ -1147,7 +1147,7 @@ write data to the same VictoriaMetrics instance. These vmagent or Prometheus ins
VictoriaMetrics stores time series data in [MergeTree](https://en.wikipedia.org/wiki/Log-structured_merge-tree)-like
data structures. On insert, VictoriaMetrics accumulates up to 1s of data and dumps it on disk to
`<-storageDataPath>/data/small/YYYY_MM/` subdirectory forming a `part` with the following
-name pattern `rowsCount_blocksCount_minTimestamp_maxTimestamp`. Each part consists of two "columns":
+name pattern: `rowsCount_blocksCount_minTimestamp_maxTimestamp`. Each part consists of two "columns":
values and timestamps. These are sorted and compressed raw time series values. Additionally, part contains
index files for searching for specific series in the values and timestamps files.
@@ -1177,24 +1177,24 @@ See also [how to work with snapshots](#how-to-work-with-snapshots).
## Retention

-Retention is configured with `-retentionPeriod` command-line flag. For instance, `-retentionPeriod=3` means
-that the data will be stored for 3 months and then deleted.
+Retention is configured with the `-retentionPeriod` command-line flag, which takes a number followed by a time unit character - `h(ours)`, `d(ays)`, `w(eeks)`, `y(ears)`. If the time unit is not specified, a month is assumed. For instance, `-retentionPeriod=3` means that the data will be stored for 3 months and then deleted. The default retention period is one month.

Data is split in per-month partitions inside `<-storageDataPath>/data/{small,big}` folders.
-Data partitions outside the configured retention are deleted on the first day of new month.
+Data partitions outside the configured retention are deleted on the first day of the new month.
Each partition consists of one or more data parts with the following name pattern `rowsCount_blocksCount_minTimestamp_maxTimestamp`.
Data parts outside of the configured retention are eventually deleted during
[background merge](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).

-In order to keep data according to `-retentionPeriod` max disk space usage is going to be `-retentionPeriod` + 1 month.
+The maximum disk space usage for a given `-retentionPeriod` is going to be (`-retentionPeriod` + 1) months.
-For example if `-retentionPeriod` is set to 1, data for January is deleted on March 1st.
+For example, if `-retentionPeriod` is set to 1, data for January is deleted on March 1st.

+VictoriaMetrics supports retention smaller than 1 month. For example, `-retentionPeriod=5d` would set data retention for 5 days.
-Please note, time range covered by data part is not limited by retention period unit. Hence, data part may contain data
+Please note, the time range covered by data part is not limited by retention period unit. Hence, data part may contain data
for multiple days and will be deleted only when fully outside of the configured retention.

-It is safe to extend `-retentionPeriod` on existing data. If `-retentionPeriod` is set to lower
-value than before then data outside the configured period will be eventually deleted.
+It is safe to extend `-retentionPeriod` on existing data. If `-retentionPeriod` is set to a lower
+value than before, then data outside the configured period will be eventually deleted.

+VictoriaMetrics does not support indefinite retention, but you can specify an arbitrarily high duration, e.g. `-retentionPeriod=100y`.

## Multiple retentions


@@ -106,7 +106,7 @@ Just download [VictoriaMetrics executable](https://github.com/VictoriaMetrics/Vi
The following command-line flags are used the most:

* `-storageDataPath` - VictoriaMetrics stores all the data in this directory. Default path is `victoria-metrics-data` in the current working directory.
-* `-retentionPeriod` - retention for stored data. Older data is automatically deleted. Default retention is 1 month. See [these docs](#retention) for more details.
+* `-retentionPeriod` - retention for stored data. Older data is automatically deleted. Default retention is 1 month. See [the Retention section](#retention) for more details.

Other flags have good enough default values, so set them only if you really need this. Pass `-help` to see [all the available flags with description and default values](#list-of-command-line-flags).
@@ -748,7 +748,7 @@ The delete API is intended mainly for the following cases:
* One-off deleting of accidentally written invalid (or undesired) time series.
* One-off deleting of user data due to [GDPR](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation).

-It isn't recommended using delete API for the following cases, since it brings non-zero overhead:
+Using the delete API is not recommended in the following cases, since it brings a non-zero overhead:

* Regular cleanups for unneeded data. Just prevent writing unneeded data into VictoriaMetrics.
This can be done with [relabeling](#relabeling).

@@ -757,7 +757,7 @@ It isn't recommended using delete API for the following cases, since it brings n
time series occupy disk space until the next merge operation, which can never occur when deleting too old data.
[Forced merge](#forced-merge) may be used for freeing up disk space occupied by old data.
-It is better using `-retentionPeriod` command-line flag for efficient pruning of old data.
+It's better to use the `-retentionPeriod` command-line flag for efficient pruning of old data.

## Forced merge
@@ -1151,7 +1151,7 @@ write data to the same VictoriaMetrics instance. These vmagent or Prometheus ins
VictoriaMetrics stores time series data in [MergeTree](https://en.wikipedia.org/wiki/Log-structured_merge-tree)-like
data structures. On insert, VictoriaMetrics accumulates up to 1s of data and dumps it on disk to
`<-storageDataPath>/data/small/YYYY_MM/` subdirectory forming a `part` with the following
-name pattern `rowsCount_blocksCount_minTimestamp_maxTimestamp`. Each part consists of two "columns":
+name pattern: `rowsCount_blocksCount_minTimestamp_maxTimestamp`. Each part consists of two "columns":
values and timestamps. These are sorted and compressed raw time series values. Additionally, part contains
index files for searching for specific series in the values and timestamps files.
@@ -1181,24 +1181,24 @@ See also [how to work with snapshots](#how-to-work-with-snapshots).
## Retention

-Retention is configured with `-retentionPeriod` command-line flag. For instance, `-retentionPeriod=3` means
-that the data will be stored for 3 months and then deleted.
+Retention is configured with the `-retentionPeriod` command-line flag, which takes a number followed by a time unit character - `h(ours)`, `d(ays)`, `w(eeks)`, `y(ears)`. If the time unit is not specified, a month is assumed. For instance, `-retentionPeriod=3` means that the data will be stored for 3 months and then deleted. The default retention period is one month.

Data is split in per-month partitions inside `<-storageDataPath>/data/{small,big}` folders.
-Data partitions outside the configured retention are deleted on the first day of new month.
+Data partitions outside the configured retention are deleted on the first day of the new month.
Each partition consists of one or more data parts with the following name pattern `rowsCount_blocksCount_minTimestamp_maxTimestamp`.
Data parts outside of the configured retention are eventually deleted during
[background merge](https://medium.com/@valyala/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282).

-In order to keep data according to `-retentionPeriod` max disk space usage is going to be `-retentionPeriod` + 1 month.
+The maximum disk space usage for a given `-retentionPeriod` is going to be (`-retentionPeriod` + 1) months.
-For example if `-retentionPeriod` is set to 1, data for January is deleted on March 1st.
+For example, if `-retentionPeriod` is set to 1, data for January is deleted on March 1st.

+VictoriaMetrics supports retention smaller than 1 month. For example, `-retentionPeriod=5d` would set data retention for 5 days.
-Please note, time range covered by data part is not limited by retention period unit. Hence, data part may contain data
+Please note, the time range covered by data part is not limited by retention period unit. Hence, data part may contain data
for multiple days and will be deleted only when fully outside of the configured retention.

-It is safe to extend `-retentionPeriod` on existing data. If `-retentionPeriod` is set to lower
-value than before then data outside the configured period will be eventually deleted.
+It is safe to extend `-retentionPeriod` on existing data. If `-retentionPeriod` is set to a lower
+value than before, then data outside the configured period will be eventually deleted.

+VictoriaMetrics does not support indefinite retention, but you can specify an arbitrarily high duration, e.g. `-retentionPeriod=100y`.

## Multiple retentions


@@ -22,7 +22,7 @@ With CRD (Custom Resource Definition) you can define application configuration a
## Use cases

-For kubernetes-cluster administrators, it simplifies installation, configuration, management for `VictoriaMetrics` application. And the main feature of operator - is ability to delegate applications monitoring configuration to the end-users.
+For kubernetes-cluster administrators, it simplifies installation, configuration and management for `VictoriaMetrics` application. The main feature of operator is its ability to delegate the configuration of applications monitoring to the end-users.

For applications developers, its great possibility for managing observability of applications. You can define metrics scraping and alerting configuration for your application and manage it with an application deployment process. Just define app_deployment.yaml, app_vmpodscrape.yaml and app_vmrule.yaml. That's it, you can apply it to a kubernetes cluster. Check [quick-start](/Operator/quick-start.html) for an example.


@@ -268,7 +268,7 @@ Labels can be added to metrics by the following mechanisms:
## Relabeling

-`vmagent` and VictoriaMetrics support Prometheus-compatible relabeling.
+VictoriaMetrics components (including `vmagent`) support Prometheus-compatible relabeling.
They provide the following additional actions on top of actions from the [Prometheus relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config):

* `replace_all`: replaces all of the occurences of `regex` in the values of `source_labels` with the `replacement` and stores the results in the `target_label`.

@@ -293,6 +293,21 @@ The `regex` value can be split into multiple lines for improved readability and
- "foo_.+"
```
+
+VictoriaMetrics components support an optional `if` filter, which can be used for conditional relabeling. The `if` filter may contain arbitrary [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors). For example, the following relabeling rule drops targets, which don't match `foo{bar="baz"}` series selector:
+
+```yaml
+- action: keep
+  if: 'foo{bar="baz"}'
+```
+
+This is equivalent to less clear traditional relabeling rule:
+
+```yaml
+- action: keep
+  source_labels: [__name__, bar]
+  regex: 'foo;baz'
+```

The relabeling can be defined in the following places:

* At the `scrape_config -> relabel_configs` section in `-promscrape.config` file. This relabeling is applied to target labels. This relabeling can be debugged by passing `relabel_debug: true` option to the corresponding `scrape_config` section. In this case `vmagent` logs target labels before and after the relabeling and then drops the logged target.

go.mod (14 changes)

@@ -11,7 +11,7 @@ require (
	github.com/VictoriaMetrics/fasthttp v1.1.0
	github.com/VictoriaMetrics/metrics v1.18.1
	github.com/VictoriaMetrics/metricsql v0.40.0
-	github.com/aws/aws-sdk-go v1.43.3
+	github.com/aws/aws-sdk-go v1.43.10
	github.com/cespare/xxhash/v2 v2.1.2
	github.com/cheggaaa/pb/v3 v3.0.8
	github.com/cpuguy83/go-md2man/v2 v2.0.1 // indirect

@@ -31,17 +31,17 @@ require (
	github.com/valyala/fasttemplate v1.2.1
	github.com/valyala/gozstd v1.16.0
	github.com/valyala/quicktemplate v1.7.0
-	golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd
+	golang.org/x/net v0.0.0-20220225172249-27dd8689420f
-	golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
+	golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b
-	golang.org/x/sys v0.0.0-20220222172238-00053529121e
+	golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9
	google.golang.org/api v0.70.0
	gopkg.in/yaml.v2 v2.4.0
)

require (
	cloud.google.com/go v0.100.2 // indirect
-	cloud.google.com/go/compute v1.4.0 // indirect
+	cloud.google.com/go/compute v1.5.0 // indirect
-	cloud.google.com/go/iam v0.2.0 // indirect
+	cloud.google.com/go/iam v0.3.0 // indirect
	github.com/VividCortex/ewma v1.2.0 // indirect
	github.com/beorn7/perks v1.0.1 // indirect
	github.com/go-kit/log v0.2.0 // indirect

@@ -68,7 +68,7 @@ require (
	golang.org/x/text v0.3.7 // indirect
	golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
	google.golang.org/appengine v1.6.7 // indirect
-	google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b // indirect
+	google.golang.org/genproto v0.0.0-20220302033224-9aa15565e42a // indirect
	google.golang.org/grpc v1.44.0 // indirect
	google.golang.org/protobuf v1.27.1 // indirect
	gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect

go.sum (28 changes)

@@ -43,13 +43,13 @@ cloud.google.com/go/bigtable v1.10.1/go.mod h1:cyHeKlx6dcZCO0oSQucYdauseD8kIENGu
cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow=
cloud.google.com/go/compute v1.2.0/go.mod h1:xlogom/6gr8RJGBe7nT2eGsQYAFUbbv8dbC29qE3Xmw=
cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM=
-cloud.google.com/go/compute v1.4.0 h1:tzSyCe254NKkL8zshJUSoVvI9mcgbFdSpCC44uUNjT0=
+cloud.google.com/go/compute v1.5.0 h1:b1zWmYuuHz7gO9kDcM/EpHGr06UgsYNRpNJzI2kFiLM=
-cloud.google.com/go/compute v1.4.0/go.mod h1:TcrKl8VipL9ZM0wEjdooJ1eet/6YsEV/E/larxxkAdg=
+cloud.google.com/go/compute v1.5.0/go.mod h1:9SMHyhJlzhlkJqrPAc839t2BZFTSk6Jdj6mkzQJeu0M=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/iam v0.1.1/go.mod h1:CKqrcnI/suGpybEHxZ7BMehL0oA4LpdyJdUlTl9jVMw=
-cloud.google.com/go/iam v0.2.0 h1:Ouq6qif4mZdXkb3SiFMpxvu0JQJB1Yid9TsZ23N6hg8=
+cloud.google.com/go/iam v0.3.0 h1:exkAomrVUuzx9kWFI1wm3KI0uoDeUFPB4kKGzx6x+Gc=
-cloud.google.com/go/iam v0.2.0/go.mod h1:BCK88+tmjAwnZYfOSizmKCTSFjJHCa18t3DpdGEY13Y=
+cloud.google.com/go/iam v0.3.0/go.mod h1:XzJPvDayI+9zsASAFO68Hk07u3z+f+JrT2xXNdp4bnY=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=

@@ -165,8 +165,8 @@ github.com/aws/aws-sdk-go v1.30.12/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZve
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.40.45/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
-github.com/aws/aws-sdk-go v1.43.3 h1:qvCkC4FviA9rR4UvRk4ldr6f3mIJE0VaI3KrsDx1gTk=
+github.com/aws/aws-sdk-go v1.43.10 h1:lFX6gzTBltYBnlJBjd2DWRCmqn2CbTcs6PW99/Dme7k=
-github.com/aws/aws-sdk-go v1.43.3/go.mod h1:OGr6lGMAKGlG9CVrYnWYDKIyb829c6EVBRjxqjmPepc=
+github.com/aws/aws-sdk-go v1.43.10/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/aws/aws-sdk-go-v2 v1.9.1/go.mod h1:cK/D0BBs0b/oWPIcX/Z/obahJK1TT7IPVjy53i/mX/4=
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.8.1/go.mod h1:CM+19rL1+4dFWnOQKwDc7H1KwXTz+h61oUSHyhV0b3o=

@@ -1176,9 +1176,9 @@ golang.org/x/net v0.0.0-20210510120150-4163338589ed/go.mod h1:9nx3DQGgdP8bBQD5qx
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210917221730-978cfadd31cf/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd h1:O7DYs+zxREGLKzKoMQrtrEacpb0ZVXA5rIwylE2Xchk=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
+golang.org/x/net v0.0.0-20220225172249-27dd8689420f h1:oA4XRj0qtSt8Yo1Zms0CUlsT3KG69V2UGQWPBxujDmc=
+golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=

@@ -1195,8 +1195,9 @@ golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 h1:RerP+noqYHUQ8CMRcPlC2nvTa4dcBIjegkuWdcUDuqg=
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b h1:clP8eMhB30EHdc0bd2Twtq6kgU7yl5ub2cQLSdrv1Dg=
+golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=

@@ -1311,8 +1312,8 @@ golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220204135822-1c1b9b1eba6a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220222172238-00053529121e h1:AGLQ2aegkB2Y9RY8YdQk+7MDCW9da7YmizIwNIt8NtQ=
+golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9 h1:nhht2DYV/Sn3qOayu8lM+cU1ii9sTLUeBQwQQfUHtrs=
-golang.org/x/sys v0.0.0-20220222172238-00053529121e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=

@@ -1556,8 +1557,9 @@ google.golang.org/genproto v0.0.0-20220207164111-0872dc986b00/go.mod h1:5CzLGKJ6
google.golang.org/genproto v0.0.0-20220211171837-173942840c17/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220216160803-4663080d8bc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
-google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b h1:wHqTlwZVR0x5EG2S6vKlCq63+Tl/vBoQELitHxqxDOo=
-google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
+google.golang.org/genproto v0.0.0-20220222213610-43724f9ea8cf/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
+google.golang.org/genproto v0.0.0-20220302033224-9aa15565e42a h1:uqouglH745GoGeZ1YFZbPBiu961tgi/9Qm5jaorajjQ=
+google.golang.org/genproto v0.0.0-20220302033224-9aa15565e42a/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=


@@ -98,20 +98,20 @@ func (bsr *blockStreamReader) String() string {
	return bsr.ph.String()
}

-// InitFromInmemoryPart initializes bsr from the given ip.
-func (bsr *blockStreamReader) InitFromInmemoryPart(ip *inmemoryPart) {
+// InitFromInmemoryPart initializes bsr from the given mp.
+func (bsr *blockStreamReader) InitFromInmemoryPart(mp *inmemoryPart) {
	bsr.reset()

	var err error
-	bsr.mrs, err = unmarshalMetaindexRows(bsr.mrs[:0], ip.metaindexData.NewReader())
+	bsr.mrs, err = unmarshalMetaindexRows(bsr.mrs[:0], mp.metaindexData.NewReader())
	if err != nil {
		logger.Panicf("BUG: cannot unmarshal metaindex rows from inmemory part: %s", err)
	}

-	bsr.ph.CopyFrom(&ip.ph)
-	bsr.indexReader = ip.indexData.NewReader()
-	bsr.itemsReader = ip.itemsData.NewReader()
-	bsr.lensReader = ip.lensData.NewReader()
+	bsr.ph.CopyFrom(&mp.ph)
+	bsr.indexReader = mp.indexData.NewReader()
+	bsr.itemsReader = mp.itemsData.NewReader()
+	bsr.lensReader = mp.lensData.NewReader()

	if bsr.ph.itemsCount <= 0 {
		logger.Panicf("BUG: source inmemoryPart must contain at least a single item")


@@ -63,17 +63,17 @@ func (bsw *blockStreamWriter) reset() {
	bsw.mrFirstItemCaught = false
}

-func (bsw *blockStreamWriter) InitFromInmemoryPart(ip *inmemoryPart) {
+func (bsw *blockStreamWriter) InitFromInmemoryPart(mp *inmemoryPart) {
	bsw.reset()

	// Use the minimum compression level for in-memory blocks,
	// since they are going to be re-compressed during the merge into file-based blocks.
	bsw.compressLevel = -5 // See https://github.com/facebook/zstd/releases/tag/v1.3.4

-	bsw.metaindexWriter = &ip.metaindexData
-	bsw.indexWriter = &ip.indexData
-	bsw.itemsWriter = &ip.itemsData
-	bsw.lensWriter = &ip.lensData
+	bsw.metaindexWriter = &mp.metaindexData
+	bsw.indexWriter = &mp.indexData
+	bsw.itemsWriter = &mp.itemsData
+	bsw.lensWriter = &mp.lensData
}

// InitFromFilePart initializes bsw from a file-based part on the given path.


@ -137,25 +137,6 @@ func (ib *inmemoryBlock) Add(x []byte) bool {
// It must fit CPU cache size, i.e. 64KB for the current CPUs.
const maxInmemoryBlockSize = 64 * 1024

-func (ib *inmemoryBlock) sort() {
-sort.Sort(ib)
-data := ib.data
-items := ib.items
-bb := bbPool.Get()
-b := bytesutil.ResizeNoCopyMayOverallocate(bb.B, len(data))
-b = b[:0]
-for i, it := range items {
-bLen := len(b)
-b = append(b, it.String(data)...)
-items[i] = Item{
-Start: uint32(bLen),
-End: uint32(len(b)),
-}
-}
-bb.B, ib.data = data, b
-bbPool.Put(bb)
-}
-
// storageBlock represents a block of data on the storage.
type storageBlock struct {
itemsData []byte
@ -195,7 +176,7 @@ func (ib *inmemoryBlock) isSorted() bool {
// - returns the marshal type used for the encoding.
func (ib *inmemoryBlock) MarshalUnsortedData(sb *storageBlock, firstItemDst, commonPrefixDst []byte, compressLevel int) ([]byte, []byte, uint32, marshalType) {
if !ib.isSorted() {
-ib.sort()
+sort.Sort(ib)
}
ib.updateCommonPrefix()
return ib.marshalData(sb, firstItemDst, commonPrefixDst, compressLevel)
@ -251,7 +232,7 @@ func (ib *inmemoryBlock) marshalData(sb *storageBlock, firstItemDst, commonPrefi
firstItemDst = append(firstItemDst, firstItem...)
commonPrefixDst = append(commonPrefixDst, ib.commonPrefix...)
-if len(ib.data)-len(ib.commonPrefix)*len(ib.items) < 64 || len(ib.items) < 2 {
+if len(data)-len(ib.commonPrefix)*len(ib.items) < 64 || len(ib.items) < 2 {
// Use plain encoding for small blocks, since it is cheaper.
ib.marshalDataPlain(sb)
return firstItemDst, commonPrefixDst, uint32(len(ib.items)), marshalTypePlain
@ -302,7 +283,7 @@ func (ib *inmemoryBlock) marshalData(sb *storageBlock, firstItemDst, commonPrefi
bbLens.B = bLens
bbPool.Put(bbLens)
-if float64(len(sb.itemsData)) > 0.9*float64(len(ib.data)-len(ib.commonPrefix)*len(ib.items)) {
+if float64(len(sb.itemsData)) > 0.9*float64(len(data)-len(ib.commonPrefix)*len(ib.items)) {
// Bad compression rate. It is cheaper to use plain encoding.
ib.marshalDataPlain(sb)
return firstItemDst, commonPrefixDst, uint32(len(ib.items)), marshalTypePlain
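Read together, the two fallbacks above mean a block is stored with plain encoding either up front (fewer than 64 compressible bytes after subtracting the common prefix, or fewer than 2 items) or after the fact, when zstd saves less than 10%. For example, 100 items totalling 10240 bytes with a 50-byte common prefix leave 10240-50*100 = 5240 compressible bytes; if the compressed items section exceeds 0.9*5240 ≈ 4716 bytes, the block falls back to plain encoding.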


@ -67,7 +67,7 @@ func TestInmemoryBlockSort(t *testing.T) {
}

// Sort ib.
-ib.sort()
+sort.Sort(&ib)
sort.Strings(items)

// Verify items are sorted.


@ -2,6 +2,7 @@ package mergeset
import (
"fmt"
+"sort"
"testing"

"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
@ -16,7 +17,7 @@ func BenchmarkInmemoryBlockMarshal(b *testing.B) {
b.Fatalf("cannot add more than %d items", i)
}
}
-ibSrc.sort()
+sort.Sort(&ibSrc)

b.ResetTimer()
b.SetBytes(itemsCount)


@ -1,25 +1,18 @@
package mergeset

import (
+"sync"
+
"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
-"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
-"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)

type inmemoryPart struct {
ph partHeader
-sb storageBlock
bh blockHeader
mr metaindexRow

-unpackedIndexBlockBuf []byte
-packedIndexBlockBuf []byte
-
-unpackedMetaindexBuf []byte
-packedMetaindexBuf []byte
-
metaindexData bytesutil.ByteBuffer
indexData bytesutil.ByteBuffer
itemsData bytesutil.ByteBuffer
@ -28,16 +21,9 @@ type inmemoryPart struct {
func (mp *inmemoryPart) Reset() {
mp.ph.Reset()
-mp.sb.Reset()
mp.bh.Reset()
mp.mr.Reset()

-mp.unpackedIndexBlockBuf = mp.unpackedIndexBlockBuf[:0]
-mp.packedIndexBlockBuf = mp.packedIndexBlockBuf[:0]
-
-mp.unpackedMetaindexBuf = mp.unpackedMetaindexBuf[:0]
-mp.packedMetaindexBuf = mp.packedMetaindexBuf[:0]
-
mp.metaindexData.Reset()
mp.indexData.Reset()
mp.itemsData.Reset()
@ -48,37 +34,47 @@ func (mp *inmemoryPart) Reset() {
func (mp *inmemoryPart) Init(ib *inmemoryBlock) {
mp.Reset()

+// Re-use mp.itemsData and mp.lensData in sb.
+// This eliminates copying itemsData and lensData from sb to mp later.
+// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2247
+sb := &storageBlock{}
+sb.itemsData = mp.itemsData.B[:0]
+sb.lensData = mp.lensData.B[:0]
+
// Use the minimum possible compressLevel for compressing inmemoryPart,
// since it will be merged into file part soon.
-compressLevel := 0
-mp.bh.firstItem, mp.bh.commonPrefix, mp.bh.itemsCount, mp.bh.marshalType = ib.MarshalUnsortedData(&mp.sb, mp.bh.firstItem[:0], mp.bh.commonPrefix[:0], compressLevel)
+// See https://github.com/facebook/zstd/releases/tag/v1.3.4 for details about negative compression level
+compressLevel := -5
+mp.bh.firstItem, mp.bh.commonPrefix, mp.bh.itemsCount, mp.bh.marshalType = ib.MarshalUnsortedData(sb, mp.bh.firstItem[:0], mp.bh.commonPrefix[:0], compressLevel)

mp.ph.itemsCount = uint64(len(ib.items))
mp.ph.blocksCount = 1
mp.ph.firstItem = append(mp.ph.firstItem[:0], ib.items[0].String(ib.data)...)
mp.ph.lastItem = append(mp.ph.lastItem[:0], ib.items[len(ib.items)-1].String(ib.data)...)

-fs.MustWriteData(&mp.itemsData, mp.sb.itemsData)
+mp.itemsData.B = sb.itemsData
mp.bh.itemsBlockOffset = 0
-mp.bh.itemsBlockSize = uint32(len(mp.sb.itemsData))
+mp.bh.itemsBlockSize = uint32(len(mp.itemsData.B))

-fs.MustWriteData(&mp.lensData, mp.sb.lensData)
+mp.lensData.B = sb.lensData
mp.bh.lensBlockOffset = 0
-mp.bh.lensBlockSize = uint32(len(mp.sb.lensData))
+mp.bh.lensBlockSize = uint32(len(mp.lensData.B))

-mp.unpackedIndexBlockBuf = mp.bh.Marshal(mp.unpackedIndexBlockBuf[:0])
-mp.packedIndexBlockBuf = encoding.CompressZSTDLevel(mp.packedIndexBlockBuf[:0], mp.unpackedIndexBlockBuf, 0)
-fs.MustWriteData(&mp.indexData, mp.packedIndexBlockBuf)
+bb := inmemoryPartBytePool.Get()
+bb.B = mp.bh.Marshal(bb.B[:0])
+mp.indexData.B = encoding.CompressZSTDLevel(mp.indexData.B[:0], bb.B, 0)

mp.mr.firstItem = append(mp.mr.firstItem[:0], mp.bh.firstItem...)
mp.mr.blockHeadersCount = 1
mp.mr.indexBlockOffset = 0
-mp.mr.indexBlockSize = uint32(len(mp.packedIndexBlockBuf))
-mp.unpackedMetaindexBuf = mp.mr.Marshal(mp.unpackedMetaindexBuf[:0])
-mp.packedMetaindexBuf = encoding.CompressZSTDLevel(mp.packedMetaindexBuf[:0], mp.unpackedMetaindexBuf, 0)
-fs.MustWriteData(&mp.metaindexData, mp.packedMetaindexBuf)
+mp.mr.indexBlockSize = uint32(len(mp.indexData.B))
+bb.B = mp.mr.Marshal(bb.B[:0])
+mp.metaindexData.B = encoding.CompressZSTDLevel(mp.metaindexData.B[:0], bb.B, 0)
+inmemoryPartBytePool.Put(bb)
}

+var inmemoryPartBytePool bytesutil.ByteBufferPool
+
// It is safe calling NewPart multiple times.
// It is unsafe re-using mp while the returned part is in use.
func (mp *inmemoryPart) NewPart() *part {
@ -96,25 +92,16 @@ func (mp *inmemoryPart) size() uint64 {
}

func getInmemoryPart() *inmemoryPart {
-select {
-case mp := <-mpPool:
-return mp
-default:
+v := inmemoryPartPool.Get()
+if v == nil {
return &inmemoryPart{}
}
+return v.(*inmemoryPart)
}

func putInmemoryPart(mp *inmemoryPart) {
mp.Reset()
-select {
-case mpPool <- mp:
-default:
-// Drop mp in order to reduce memory usage.
-}
+inmemoryPartPool.Put(mp)
}

-// Use chan instead of sync.Pool in order to reduce memory usage on systems with big number of CPU cores,
-// since sync.Pool maintains per-CPU pool of inmemoryPart objects.
-//
-// The inmemoryPart object size can exceed 64KB, so it is better to use chan instead of sync.Pool for reducing memory usage.
-var mpPool = make(chan *inmemoryPart, cgroup.AvailableCPUs())
+var inmemoryPartPool sync.Pool
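For reference, the chan-based pool removed above is replaced with a plain sync.Pool. A minimal standalone sketch of the same get/put pattern (the buf type and function names are illustrative, not from the repository):

```go
package main

import "sync"

type buf struct{ b []byte }

var bufPool sync.Pool

// get mirrors getInmemoryPart(): reuse a pooled entry,
// or allocate a fresh one when the pool is empty.
func get() *buf {
	v := bufPool.Get()
	if v == nil {
		return &buf{}
	}
	return v.(*buf)
}

// put mirrors putInmemoryPart(): reset the entry before pooling it.
func put(b *buf) {
	b.b = b.b[:0]
	bufPool.Put(b)
}

func main() {
	b := get()
	b.b = append(b.b, "some data"...)
	put(b)
}
```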


@ -227,7 +227,9 @@ func (pw *partWrapper) decRef() {
}
if pw.mp != nil {
-putInmemoryPart(pw.mp)
+// Do not return pw.mp to pool via putInmemoryPart(),
+// since pw.mp size may be too big compared to other entries stored in the pool.
+// This may result in increased memory usage because of high fragmentation.
pw.mp = nil
}
pw.p.MustClose()
@ -740,7 +742,10 @@ func (tb *Table) mergeInmemoryBlocks(ibs []*inmemoryBlock) *partWrapper {
// Prepare blockStreamWriter for destination part.
bsw := getBlockStreamWriter()
-mpDst := getInmemoryPart()
+// Do not obtain mpDst via getInmemoryPart(), since its size
+// may be too big compared to other entries in the pool.
+// This may result in increased memory usage because of high fragmentation.
+mpDst := &inmemoryPart{}
bsw.InitFromInmemoryPart(mpDst)

// Merge parts.


@ -22,6 +22,7 @@ type RelabelConfig struct {
Modulus uint64 `yaml:"modulus,omitempty"`
Replacement *string `yaml:"replacement,omitempty"`
Action string `yaml:"action,omitempty"`
+If *IfExpression `yaml:"if,omitempty"`
}

// MultiLineRegex contains a regex, which can be split into multiple lines.
@ -44,7 +45,7 @@ type MultiLineRegex struct {
func (mlr *MultiLineRegex) UnmarshalYAML(f func(interface{}) error) error {
var v interface{}
if err := f(&v); err != nil {
-return err
+return fmt.Errorf("cannot parse multiline regex: %w", err)
}
s, err := stringValue(v)
if err != nil {
@ -224,11 +225,11 @@ func parseRelabelConfig(rc *RelabelConfig) (*parsedRelabelConfig, error) {
return nil, fmt.Errorf("`source_labels` must contain at least two entries for `action=drop_if_equal`; got %q", sourceLabels)
}
case "keep":
-if len(sourceLabels) == 0 {
+if len(sourceLabels) == 0 && rc.If == nil {
return nil, fmt.Errorf("missing `source_labels` for `action=keep`")
}
case "drop":
-if len(sourceLabels) == 0 {
+if len(sourceLabels) == 0 && rc.If == nil {
return nil, fmt.Errorf("missing `source_labels` for `action=drop`")
}
case "hashmod":
@ -242,7 +243,7 @@ func parseRelabelConfig(rc *RelabelConfig) (*parsedRelabelConfig, error) {
return nil, fmt.Errorf("unexpected `modulus` for `action=hashmod`: %d; must be greater than 0", modulus)
}
case "keep_metrics":
-if rc.Regex == nil || rc.Regex.s == "" {
+if (rc.Regex == nil || rc.Regex.s == "") && rc.If == nil {
return nil, fmt.Errorf("`regex` must be non-empty for `action=keep_metrics`")
}
if len(sourceLabels) > 0 {
@ -251,7 +252,7 @@ func parseRelabelConfig(rc *RelabelConfig) (*parsedRelabelConfig, error) {
sourceLabels = []string{"__name__"}
action = "keep"
case "drop_metrics":
-if rc.Regex == nil || rc.Regex.s == "" {
+if (rc.Regex == nil || rc.Regex.s == "") && rc.If == nil {
return nil, fmt.Errorf("`regex` must be non-empty for `action=drop_metrics`")
}
if len(sourceLabels) > 0 {
@ -274,6 +275,7 @@ func parseRelabelConfig(rc *RelabelConfig) (*parsedRelabelConfig, error) {
Modulus: modulus,
Replacement: replacement,
Action: action,
+If: rc.If,

regexOriginal: regexOriginalCompiled,
hasCaptureGroupInTargetLabel: strings.Contains(targetLabel, "$"),


@ -0,0 +1,169 @@
package promrelabel
import (
"fmt"
"regexp"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/metricsql"
)
// IfExpression represents an `if` expression in RelabelConfig.
//
// The `if` expression can contain arbitrary PromQL-like label filters such as `metric_name{filters...}`
type IfExpression struct {
s string
lfs []*labelFilter
}
// UnmarshalYAML unmarshals ie from YAML passed to f.
func (ie *IfExpression) UnmarshalYAML(f func(interface{}) error) error {
var s string
if err := f(&s); err != nil {
return fmt.Errorf("cannot unmarshal `if` option: %w", err)
}
expr, err := metricsql.Parse(s)
if err != nil {
return fmt.Errorf("cannot parse `if` series selector: %w", err)
}
me, ok := expr.(*metricsql.MetricExpr)
if !ok {
return fmt.Errorf("expecting `if` series selector; got %q", expr.AppendString(nil))
}
lfs, err := metricExprToLabelFilters(me)
if err != nil {
return fmt.Errorf("cannot parse `if` filters: %w", err)
}
ie.s = s
ie.lfs = lfs
return nil
}
// MarshalYAML marshals ie to YAML.
func (ie *IfExpression) MarshalYAML() (interface{}, error) {
return ie.s, nil
}
// Match returns true if ie matches the given labels.
func (ie *IfExpression) Match(labels []prompbmarshal.Label) bool {
for _, lf := range ie.lfs {
if !lf.match(labels) {
return false
}
}
return true
}
func metricExprToLabelFilters(me *metricsql.MetricExpr) ([]*labelFilter, error) {
lfs := make([]*labelFilter, len(me.LabelFilters))
for i := range me.LabelFilters {
lf, err := newLabelFilter(&me.LabelFilters[i])
if err != nil {
return nil, fmt.Errorf("cannot parse %s: %w", me.AppendString(nil), err)
}
lfs[i] = lf
}
return lfs, nil
}
// labelFilter contains PromQL filter for `{label op "value"}`
type labelFilter struct {
label string
op string
value string
// re contains compiled regexp for `=~` and `!~` op.
re *regexp.Regexp
}
func newLabelFilter(mlf *metricsql.LabelFilter) (*labelFilter, error) {
lf := &labelFilter{
label: toCanonicalLabelName(mlf.Label),
op: getFilterOp(mlf),
value: mlf.Value,
}
if lf.op == "=~" || lf.op == "!~" {
// PromQL regexps are anchored by default.
// See https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors
reString := "^(?:" + lf.value + ")$"
re, err := regexp.Compile(reString)
if err != nil {
return nil, fmt.Errorf("cannot parse regexp for %s: %w", mlf.AppendString(nil), err)
}
lf.re = re
}
return lf, nil
}
func (lf *labelFilter) match(labels []prompbmarshal.Label) bool {
switch lf.op {
case "=":
return lf.equalValue(labels)
case "!=":
return !lf.equalValue(labels)
case "=~":
return lf.equalRegexp(labels)
case "!~":
return !lf.equalRegexp(labels)
default:
logger.Panicf("BUG: unexpected operation for label filter: %s", lf.op)
}
return false
}
func (lf *labelFilter) equalValue(labels []prompbmarshal.Label) bool {
labelNameMatches := 0
for _, label := range labels {
if toCanonicalLabelName(label.Name) != lf.label {
continue
}
labelNameMatches++
if label.Value == lf.value {
return true
}
}
if labelNameMatches == 0 {
// Special case for {non_existing_label=""}, which matches anything except non-empty non_existing_label
return lf.value == ""
}
return false
}
func (lf *labelFilter) equalRegexp(labels []prompbmarshal.Label) bool {
labelNameMatches := 0
for _, label := range labels {
if toCanonicalLabelName(label.Name) != lf.label {
continue
}
labelNameMatches++
if lf.re.MatchString(label.Value) {
return true
}
}
if labelNameMatches == 0 {
// Special case for {non_existing_label=~"something|"}, which matches empty non_existing_label
return lf.re.MatchString("")
}
return false
}
func toCanonicalLabelName(labelName string) string {
if labelName == "__name__" {
return ""
}
return labelName
}
func getFilterOp(mlf *metricsql.LabelFilter) string {
if mlf.IsNegative {
if mlf.IsRegexp {
return "!~"
}
return "!="
}
if mlf.IsRegexp {
return "=~"
}
return "="
}
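A hedged usage sketch for the new type, assuming only the exported surface shown above (YAML unmarshaling via gopkg.in/yaml.v2 plus Match); the metric name and label values are invented for illustration:

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/promrelabel"
	"gopkg.in/yaml.v2"
)

func main() {
	// Parse an `if` expression the same way RelabelConfig does.
	var ie promrelabel.IfExpression
	if err := yaml.UnmarshalStrict([]byte(`foo{bar=~"baz|qux"}`), &ie); err != nil {
		panic(err)
	}
	labels := []prompbmarshal.Label{
		{Name: "__name__", Value: "foo"},
		{Name: "bar", Value: "baz"},
	}
	fmt.Println(ie.Match(labels)) // true: the name is foo and bar matches baz|qux
}
```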


@ -0,0 +1,162 @@
package promrelabel
import (
"bytes"
"fmt"
"testing"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/prompbmarshal"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/prometheus"
"gopkg.in/yaml.v2"
)
func TestIfExpressionUnmarshalFailure(t *testing.T) {
f := func(s string) {
t.Helper()
var ie IfExpression
err := yaml.UnmarshalStrict([]byte(s), &ie)
if err == nil {
t.Fatalf("expecting non-nil error")
}
}
f(`{`)
f(`{x:y}`)
f(`[]`)
f(`"{"`)
f(`'{'`)
f(`foo{bar`)
f(`foo{bar}`)
f(`foo{bar=`)
f(`foo{bar="`)
f(`foo{bar='`)
f(`foo{bar=~"("}`)
f(`foo{bar!~"("}`)
f(`foo{bar==aaa}`)
f(`foo{bar=="b"}`)
f(`'foo+bar'`)
f(`'foo{bar=~"a[b"}'`)
}
func TestIfExpressionUnmarshalSuccess(t *testing.T) {
f := func(s string) {
t.Helper()
var ie IfExpression
if err := yaml.UnmarshalStrict([]byte(s), &ie); err != nil {
t.Fatalf("unexpected error during unmarshal: %s", err)
}
b, err := yaml.Marshal(&ie)
if err != nil {
t.Fatalf("unexpected error during marshal: %s", err)
}
b = bytes.TrimSpace(b)
if string(b) != s {
t.Fatalf("unexpected marshaled data;\ngot\n%s\nwant\n%s", b, s)
}
}
f(`'{}'`)
f(`foo`)
f(`foo{bar="baz"}`)
f(`'{a="b", c!="d", e=~"g", h!~"d"}'`)
f(`foo{bar="zs",a=~"b|c"}`)
}
func TestIfExpressionMatch(t *testing.T) {
f := func(ifExpr, metricWithLabels string) {
t.Helper()
var ie IfExpression
if err := yaml.UnmarshalStrict([]byte(ifExpr), &ie); err != nil {
t.Fatalf("unexpected error during unmarshal: %s", err)
}
labels, err := parseMetricWithLabels(metricWithLabels)
if err != nil {
t.Fatalf("cannot parse %s: %s", metricWithLabels, err)
}
if !ie.Match(labels) {
t.Fatalf("unexpected mismatch of ifExpr=%s for %s", ifExpr, metricWithLabels)
}
}
f(`foo`, `foo`)
f(`foo`, `foo{bar="baz",a="b"}`)
f(`foo{bar="a"}`, `foo{bar="a"}`)
f(`foo{bar="a"}`, `foo{x="y",bar="a",baz="b"}`)
f(`'{a=~"x|abc",y!="z"}'`, `m{x="aa",a="abc"}`)
f(`'{a=~"x|abc",y!="z"}'`, `m{x="aa",a="abc",y="qwe"}`)
f(`'{__name__="foo"}'`, `foo{bar="baz"}`)
f(`'{__name__=~"foo|bar"}'`, `bar`)
f(`'{__name__!=""}'`, `foo`)
f(`'{__name__!=""}'`, `bar{baz="aa",b="c"}`)
f(`'{__name__!~"a.+"}'`, `bar{baz="aa",b="c"}`)
f(`foo{a!~"a.+"}`, `foo{a="baa"}`)
f(`'{foo=""}'`, `bar`)
f(`'{foo!=""}'`, `aa{foo="b"}`)
f(`'{foo=~".*"}'`, `abc`)
f(`'{foo=~".*"}'`, `abc{foo="bar"}`)
f(`'{foo!~".+"}'`, `abc`)
f(`'{foo=~"bar|"}'`, `abc`)
f(`'{foo=~"bar|"}'`, `abc{foo="bar"}`)
f(`'{foo!~"bar|"}'`, `abc{foo="baz"}`)
}
func TestIfExpressionMismatch(t *testing.T) {
f := func(ifExpr, metricWithLabels string) {
t.Helper()
var ie IfExpression
if err := yaml.UnmarshalStrict([]byte(ifExpr), &ie); err != nil {
t.Fatalf("unexpected error during unmarshal: %s", err)
}
labels, err := parseMetricWithLabels(metricWithLabels)
if err != nil {
t.Fatalf("cannot parse %s: %s", metricWithLabels, err)
}
if ie.Match(labels) {
t.Fatalf("unexpected match of ifExpr=%s for %s", ifExpr, metricWithLabels)
}
}
f(`foo`, `bar`)
f(`foo`, `a{foo="bar"}`)
f(`foo{bar="a"}`, `foo`)
f(`foo{bar="a"}`, `foo{bar="b"}`)
f(`foo{bar="a"}`, `foo{baz="b",a="b"}`)
f(`'{a=~"x|abc",y!="z"}'`, `m{x="aa",a="xabc"}`)
f(`'{a=~"x|abc",y!="z"}'`, `m{x="aa",a="abc",y="z"}`)
f(`'{__name__!~".+"}'`, `foo`)
f(`'{a!~"a.+"}'`, `foo{a="abc"}`)
f(`'{foo=""}'`, `bar{foo="aa"}`)
f(`'{foo!=""}'`, `aa`)
f(`'{foo=~".+"}'`, `abc`)
f(`'{foo!~".+"}'`, `abc{foo="x"}`)
f(`'{foo=~"bar|"}'`, `abc{foo="baz"}`)
f(`'{foo!~"bar|"}'`, `abc`)
f(`'{foo!~"bar|"}'`, `abc{foo="bar"}`)
}
func parseMetricWithLabels(metricWithLabels string) ([]prompbmarshal.Label, error) {
// add a value to metricWithLabels, so it could be parsed by prometheus protocol parser.
s := metricWithLabels + " 123"
var rows prometheus.Rows
var err error
rows.UnmarshalWithErrLogger(s, func(s string) {
err = fmt.Errorf("error during metric parse: %s", s)
})
if err != nil {
return nil, err
}
if len(rows.Rows) != 1 {
return nil, fmt.Errorf("unexpected number of rows parsed; got %d; want 1", len(rows.Rows))
}
r := rows.Rows[0]
var lfs []prompbmarshal.Label
if r.Metric != "" {
lfs = append(lfs, prompbmarshal.Label{
Name: "__name__",
Value: r.Metric,
})
}
for _, tag := range r.Tags {
lfs = append(lfs, prompbmarshal.Label{
Name: tag.Key,
Value: tag.Value,
})
}
return lfs, nil
}


@ -23,6 +23,7 @@ type parsedRelabelConfig struct {
Modulus uint64
Replacement string
Action string
+If *IfExpression

regexOriginal *regexp.Regexp
hasCaptureGroupInTargetLabel bool
@ -137,8 +138,17 @@ func FinalizeLabels(dst, src []prompbmarshal.Label) []prompbmarshal.Label {
// See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset int) []prompbmarshal.Label {
src := labels[labelsOffset:]
+if prc.If != nil && !prc.If.Match(labels) {
+if prc.Action == "keep" {
+// Drop the target on `if` mismatch for `action: keep`
+return labels[:labelsOffset]
+}
+// Do not apply prc actions on `if` mismatch.
+return labels
+}
switch prc.Action {
case "replace":
+// Store `replacement` at `target_label` if the `regex` matches `source_labels` joined with `separator`
bb := relabelBufPool.Get()
bb.B = concatLabelValues(bb.B[:0], src, prc.SourceLabels, prc.Separator)
if prc.Regex == defaultRegexForRelabelConfig && !prc.hasCaptureGroupInTargetLabel {
@ -174,6 +184,8 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
relabelBufPool.Put(bb)
return setLabelValue(labels, labelsOffset, nameStr, valueStr)
case "replace_all":
+// Replace all the occurrences of `regex` at `source_labels` joined with `separator` with the `replacement`
+// and store the result at `target_label`
bb := relabelBufPool.Get()
bb.B = concatLabelValues(bb.B[:0], src, prc.SourceLabels, prc.Separator)
sourceStr := string(bb.B)
@ -208,6 +220,15 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
}
return labels
case "keep":
+// Keep the target if `source_labels` joined with `separator` match the `regex`.
+if prc.Regex == defaultRegexForRelabelConfig {
+// Fast path for the case with `if` and without explicitly set `regex`:
+//
+// - action: keep
+//   if: 'some{label=~"filters"}'
+//
+return labels
+}
bb := relabelBufPool.Get()
bb.B = concatLabelValues(bb.B[:0], src, prc.SourceLabels, prc.Separator)
keep := prc.matchString(bytesutil.ToUnsafeString(bb.B))
@ -217,6 +238,15 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
}
return labels
case "drop":
+// Drop the target if `source_labels` joined with `separator` match the `regex`.
+if prc.Regex == defaultRegexForRelabelConfig {
+// Fast path for the case with `if` and without explicitly set `regex`:
+//
+// - action: drop
+//   if: 'some{label=~"filters"}'
+//
+return labels[:labelsOffset]
+}
bb := relabelBufPool.Get()
bb.B = concatLabelValues(bb.B[:0], src, prc.SourceLabels, prc.Separator)
drop := prc.matchString(bytesutil.ToUnsafeString(bb.B))
@ -226,6 +256,7 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
}
return labels
case "hashmod":
+// Calculate the `modulus` from the hash of `source_labels` joined with `separator` and store it at `target_label`
bb := relabelBufPool.Get()
bb.B = concatLabelValues(bb.B[:0], src, prc.SourceLabels, prc.Separator)
h := xxhash.Sum64(bb.B) % prc.Modulus
@ -233,6 +264,7 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
relabelBufPool.Put(bb)
return setLabelValue(labels, labelsOffset, prc.TargetLabel, value)
case "labelmap":
+// Replace label names with the `replacement` if they match `regex`
for i := range src {
label := &src[i]
labelName, ok := prc.replaceFullString(label.Name, prc.Replacement, prc.hasCaptureGroupInReplacement)
@ -242,12 +274,14 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
}
return labels
case "labelmap_all":
+// Replace all the occurrences of `regex` at label names with `replacement`
for i := range src {
label := &src[i]
label.Name, _ = prc.replaceStringSubmatches(label.Name, prc.Replacement, prc.hasCaptureGroupInReplacement)
}
return labels
case "labeldrop":
+// Drop labels with names matching the `regex`
dst := labels[:labelsOffset]
for i := range src {
label := &src[i]
@ -257,6 +291,7 @@ func (prc *parsedRelabelConfig) apply(labels []prompbmarshal.Label, labelsOffset
}
return dst
case "labelkeep":
+// Keep labels with names matching the `regex`
dst := labels[:labelsOffset]
for i := range src {
label := &src[i]


@ -134,6 +134,25 @@ func TestApplyRelabelConfigs(t *testing.T) {
source_labels: ["foo"] source_labels: ["foo"]
target_label: "bar" target_label: "bar"
regex: ".+" regex: ".+"
`, []prompbmarshal.Label{
{
Name: "xxx",
Value: "yyy",
},
}, false, []prompbmarshal.Label{
{
Name: "xxx",
Value: "yyy",
},
})
})
t.Run("replace-if-miss", func(t *testing.T) {
f(`
- action: replace
if: '{foo="bar"}'
source_labels: ["xxx", "foo"]
target_label: "bar"
replacement: "a-$1-b"
`, []prompbmarshal.Label{
{
Name: "xxx",
@ -152,6 +171,29 @@ func TestApplyRelabelConfigs(t *testing.T) {
source_labels: ["xxx", "foo"] source_labels: ["xxx", "foo"]
target_label: "bar" target_label: "bar"
replacement: "a-$1-b" replacement: "a-$1-b"
`, []prompbmarshal.Label{
{
Name: "xxx",
Value: "yyy",
},
}, false, []prompbmarshal.Label{
{
Name: "bar",
Value: "a-yyy;-b",
},
{
Name: "xxx",
Value: "yyy",
},
})
})
t.Run("replace-if-hit", func(t *testing.T) {
f(`
- action: replace
if: '{xxx=~".y."}'
source_labels: ["xxx", "foo"]
target_label: "bar"
replacement: "a-$1-b"
`, []prompbmarshal.Label{
{
Name: "xxx",
@ -333,6 +375,26 @@ func TestApplyRelabelConfigs(t *testing.T) {
},
})
})
t.Run("replace_all-if-miss", func(t *testing.T) {
f(`
- action: replace_all
if: 'foo'
source_labels: ["xxx"]
target_label: "xxx"
regex: "-"
replacement: "."
`, []prompbmarshal.Label{
{
Name: "xxx",
Value: "a-b-c",
},
}, false, []prompbmarshal.Label{
{
Name: "xxx",
Value: "a-b-c",
},
})
})
t.Run("replace_all-hit", func(t *testing.T) { t.Run("replace_all-hit", func(t *testing.T) {
f(` f(`
- action: replace_all - action: replace_all
@ -340,6 +402,26 @@ func TestApplyRelabelConfigs(t *testing.T) {
target_label: "xxx" target_label: "xxx"
regex: "-" regex: "-"
replacement: "." replacement: "."
`, []prompbmarshal.Label{
{
Name: "xxx",
Value: "a-b-c",
},
}, false, []prompbmarshal.Label{
{
Name: "xxx",
Value: "a.b.c",
},
})
})
t.Run("replace_all-if-hit", func(t *testing.T) {
f(`
- action: replace_all
if: '{non_existing_label=~".*"}'
source_labels: ["xxx"]
target_label: "xxx"
regex: "-"
replacement: "."
`, []prompbmarshal.Label{
{
Name: "xxx",
@ -530,6 +612,33 @@ func TestApplyRelabelConfigs(t *testing.T) {
},
}, true, []prompbmarshal.Label{})
})
t.Run("keep-if-miss", func(t *testing.T) {
f(`
- action: keep
if: '{foo="bar"}'
`, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
}, false, []prompbmarshal.Label{})
})
t.Run("keep-if-hit", func(t *testing.T) {
f(`
- action: keep
if: '{foo="yyy"}'
`, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
}, false, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
})
})
t.Run("keep-hit", func(t *testing.T) { t.Run("keep-hit", func(t *testing.T) {
f(` f(`
- action: keep - action: keep
@ -577,6 +686,33 @@ func TestApplyRelabelConfigs(t *testing.T) {
},
}, true, []prompbmarshal.Label{})
})
t.Run("keep_metrics-if-miss", func(t *testing.T) {
f(`
- action: keep_metrics
if: 'bar'
`, []prompbmarshal.Label{
{
Name: "__name__",
Value: "foo",
},
}, true, []prompbmarshal.Label{})
})
t.Run("keep_metrics-if-hit", func(t *testing.T) {
f(`
- action: keep_metrics
if: 'foo'
`, []prompbmarshal.Label{
{
Name: "__name__",
Value: "foo",
},
}, true, []prompbmarshal.Label{
{
Name: "__name__",
Value: "foo",
},
})
})
t.Run("keep_metrics-hit", func(t *testing.T) { t.Run("keep_metrics-hit", func(t *testing.T) {
f(` f(`
- action: keep_metrics - action: keep_metrics
@ -617,6 +753,33 @@ func TestApplyRelabelConfigs(t *testing.T) {
},
})
})
t.Run("drop-if-miss", func(t *testing.T) {
f(`
- action: drop
if: '{foo="bar"}'
`, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
}, true, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
})
})
t.Run("drop-if-hit", func(t *testing.T) {
f(`
- action: drop
if: '{foo="yyy"}'
`, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
}, true, []prompbmarshal.Label{})
})
t.Run("drop-hit", func(t *testing.T) { t.Run("drop-hit", func(t *testing.T) {
f(` f(`
- action: drop - action: drop
@ -659,6 +822,33 @@ func TestApplyRelabelConfigs(t *testing.T) {
},
})
})
t.Run("drop_metrics-if-miss", func(t *testing.T) {
f(`
- action: drop_metrics
if: bar
`, []prompbmarshal.Label{
{
Name: "__name__",
Value: "foo",
},
}, true, []prompbmarshal.Label{
{
Name: "__name__",
Value: "foo",
},
})
})
t.Run("drop_metrics-if-hit", func(t *testing.T) {
f(`
- action: drop_metrics
if: foo
`, []prompbmarshal.Label{
{
Name: "__name__",
Value: "foo",
},
}, true, []prompbmarshal.Label{})
})
t.Run("drop_metrics-hit", func(t *testing.T) { t.Run("drop_metrics-hit", func(t *testing.T) {
f(` f(`
- action: drop_metrics - action: drop_metrics
@ -694,6 +884,48 @@ func TestApplyRelabelConfigs(t *testing.T) {
},
})
})
t.Run("hashmod-if-miss", func(t *testing.T) {
f(`
- action: hashmod
if: '{foo="bar"}'
source_labels: [foo]
target_label: aaa
modulus: 123
`, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
}, true, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
})
})
t.Run("hashmod-if-hit", func(t *testing.T) {
f(`
- action: hashmod
if: '{foo="yyy"}'
source_labels: [foo]
target_label: aaa
modulus: 123
`, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
}, true, []prompbmarshal.Label{
{
Name: "aaa",
Value: "73",
},
{
Name: "foo",
Value: "yyy",
},
})
})
t.Run("hashmod-hit", func(t *testing.T) { t.Run("hashmod-hit", func(t *testing.T) {
f(` f(`
- action: hashmod - action: hashmod
@ -716,6 +948,62 @@ func TestApplyRelabelConfigs(t *testing.T) {
},
})
})
t.Run("labelmap-copy-label-if-miss", func(t *testing.T) {
f(`
- action: labelmap
if: '{foo="yyy",foobar="aab"}'
regex: "foo"
replacement: "bar"
`, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
{
Name: "foobar",
Value: "aaa",
},
}, true, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
{
Name: "foobar",
Value: "aaa",
},
})
})
t.Run("labelmap-copy-label-if-hit", func(t *testing.T) {
f(`
- action: labelmap
if: '{foo="yyy",foobar="aaa"}'
regex: "foo"
replacement: "bar"
`, []prompbmarshal.Label{
{
Name: "foo",
Value: "yyy",
},
{
Name: "foobar",
Value: "aaa",
},
}, true, []prompbmarshal.Label{
{
Name: "bar",
Value: "yyy",
},
{
Name: "foo",
Value: "yyy",
},
{
Name: "foobar",
Value: "aaa",
},
})
})
t.Run("labelmap-copy-label", func(t *testing.T) { t.Run("labelmap-copy-label", func(t *testing.T) {
f(` f(`
- action: labelmap - action: labelmap
@ -830,6 +1118,58 @@ func TestApplyRelabelConfigs(t *testing.T) {
},
})
})
t.Run("labelmap_all-if-miss", func(t *testing.T) {
f(`
- action: labelmap_all
if: foobar
regex: "\\."
replacement: "-"
`, []prompbmarshal.Label{
{
Name: "foo.bar.baz",
Value: "yyy",
},
{
Name: "foobar",
Value: "aaa",
},
}, true, []prompbmarshal.Label{
{
Name: "foo.bar.baz",
Value: "yyy",
},
{
Name: "foobar",
Value: "aaa",
},
})
})
t.Run("labelmap_all-if-hit", func(t *testing.T) {
f(`
- action: labelmap_all
if: '{foo.bar.baz="yyy"}'
regex: "\\."
replacement: "-"
`, []prompbmarshal.Label{
{
Name: "foo.bar.baz",
Value: "yyy",
},
{
Name: "foobar",
Value: "aaa",
},
}, true, []prompbmarshal.Label{
{
Name: "foo-bar-baz",
Value: "yyy",
},
{
Name: "foobar",
Value: "aaa",
},
})
})
t.Run("labelmap_all", func(t *testing.T) { t.Run("labelmap_all", func(t *testing.T) {
f(` f(`
- action: labelmap_all - action: labelmap_all
@ -895,6 +1235,66 @@ func TestApplyRelabelConfigs(t *testing.T) {
Value: "bbb", Value: "bbb",
}, },
}) })
// if-miss
f(`
- action: labeldrop
if: foo
regex: dropme
`, []prompbmarshal.Label{
{
Name: "xxx",
Value: "yyy",
},
{
Name: "dropme",
Value: "aaa",
},
{
Name: "foo",
Value: "bar",
},
}, false, []prompbmarshal.Label{
{
Name: "dropme",
Value: "aaa",
},
{
Name: "foo",
Value: "bar",
},
{
Name: "xxx",
Value: "yyy",
},
})
// if-hit
f(`
- action: labeldrop
if: '{xxx="yyy"}'
regex: dropme
`, []prompbmarshal.Label{
{
Name: "xxx",
Value: "yyy",
},
{
Name: "dropme",
Value: "aaa",
},
{
Name: "foo",
Value: "bar",
},
}, false, []prompbmarshal.Label{
{
Name: "foo",
Value: "bar",
},
{
Name: "xxx",
Value: "yyy",
},
})
f(`
- action: labeldrop
regex: dropme
@ -1059,6 +1459,62 @@ func TestApplyRelabelConfigs(t *testing.T) {
Value: "aaa", Value: "aaa",
}, },
}) })
// if-miss
f(`
- action: labelkeep
if: '{aaaa="awefx"}'
regex: keepme
`, []prompbmarshal.Label{
{
Name: "keepme",
Value: "aaa",
},
{
Name: "aaaa",
Value: "awef",
},
{
Name: "keepme-aaa",
Value: "234",
},
}, false, []prompbmarshal.Label{
{
Name: "aaaa",
Value: "awef",
},
{
Name: "keepme",
Value: "aaa",
},
{
Name: "keepme-aaa",
Value: "234",
},
})
// if-hit
f(`
- action: labelkeep
if: '{aaaa="awef"}'
regex: keepme
`, []prompbmarshal.Label{
{
Name: "keepme",
Value: "aaa",
},
{
Name: "aaaa",
Value: "awef",
},
{
Name: "keepme-aaa",
Value: "234",
},
}, false, []prompbmarshal.Label{
{
Name: "keepme",
Value: "aaa",
},
})
f(`
- action: labelkeep
regex: keepme


@ -142,9 +142,9 @@ func openIndexDB(path string, s *Storage, rotationTimestamp uint64) (*indexDB, e
tb: tb,
name: name,

-tagFiltersCache: workingsetcache.New(mem/32, time.Hour),
+tagFiltersCache: workingsetcache.New(mem / 32),
s: s,
-loopsPerDateTagFilterCache: workingsetcache.New(mem/128, time.Hour),
+loopsPerDateTagFilterCache: workingsetcache.New(mem / 128),
}
return db, nil
}
@ -608,6 +608,11 @@ func (db *indexDB) getOrCreateTSID(dst *TSID, metricName []byte, mn *MetricName)
func generateTSID(dst *TSID, mn *MetricName) {
dst.MetricGroupID = xxhash.Sum64(mn.MetricGroup)
+// Assume that the job-like metric is put at mn.Tags[0], while instance-like metric is put at mn.Tags[1]
+// This assumption is true because mn.Tags must be sorted with mn.sortTags() before calling generateTSID() function.
+// This allows grouping data blocks for the same (job, instance) close to each other on disk.
+// This reduces disk seeks and disk read IO when data blocks are read from disk for the same job and/or instance.
+// For example, data blocks for time series matching `process_resident_memory_bytes{job="vmstorage"}` are physically adjacent on disk.
if len(mn.Tags) > 0 {
dst.JobID = uint32(xxhash.Sum64(mn.Tags[0].Value))
}
@ -1868,7 +1873,12 @@ func (is *indexSearch) containsTimeRange(tr TimeRange) (bool, error) {
ts := &is.ts
kb := &is.kb

-// Verify whether the maximum date in `ts` covers tr.MinTimestamp.
+// Verify whether tr.MinTimestamp is included into `ts` or is smaller than the minimum date stored in `ts`.
+// Do not check whether tr.MaxTimestamp is included into `ts` or is bigger than the max date stored in `ts` for performance reasons.
+// This means that containsTimeRange() can return true if `tr` is located below the min date stored in `ts`.
+// This is OK, since this case isn't encountered too much in practice.
+// The main practical case allows skipping searching in prev indexdb (`ts`) when `tr`
+// is located above the max date stored there.
minDate := uint64(tr.MinTimestamp) / msecPerDay
kb.B = is.marshalCommonPrefix(kb.B[:0], nsPrefixDateToMetricID)
prefix := kb.B
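A practical consequence of this asymmetry: when querying only recent data after an indexdb rotation, tr.MinTimestamp lies above the max date stored in the previous indexdb, so containsTimeRange() reports false for it and the search skips the prev indexdb entirely; the rare opposite case (tr entirely below the min date) merely costs a harmless extra search.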


@ -1851,9 +1851,9 @@ func newTestStorage() *Storage {
s := &Storage{ s := &Storage{
cachePath: "test-storage-cache", cachePath: "test-storage-cache",
metricIDCache: workingsetcache.New(1234, time.Hour), metricIDCache: workingsetcache.New(1234),
metricNameCache: workingsetcache.New(1234, time.Hour), metricNameCache: workingsetcache.New(1234),
tsidCache: workingsetcache.New(1234, time.Hour), tsidCache: workingsetcache.New(1234),
} }
s.setDeletedMetricIDs(&uint64set.Set{}) s.setDeletedMetricIDs(&uint64set.Set{})
return s return s


@ -614,6 +614,12 @@ func unmarshalBytesFast(src []byte) ([]byte, []byte, error) {
// sortTags sorts tags in mn to canonical form needed for storing in the index.
//
+// The sortTags tries moving job-like tag to mn.Tags[0], while instance-like tag to mn.Tags[1].
+// See commonTagKeys list for job-like and instance-like tags.
+// This guarantees that indexdb entries for the same (job, instance) are located
+// close to each other on disk. This reduces disk seeks and disk read IO when metrics
+// for a particular job and/or instance are read from the disk.
+//
// The function also de-duplicates tags with identical keys in mn. The last tag value
// for duplicate tags wins.
//
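For example, for `http_requests_total{instance="host-1:9100",job="node_exporter"}` sortTags places `job` at mn.Tags[0] and `instance` at mn.Tags[1], so generateTSID() above can derive the job- and instance-based TSID fields from fixed tag positions, and series for the same (job, instance) pair end up clustered on disk.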


@ -1255,6 +1255,13 @@ func (pt *partition) mergeParts(pws []*partWrapper, stopCh <-chan struct{}) erro
func getCompressLevelForRowsCount(rowsCount, blocksCount uint64) int {
avgRowsPerBlock := rowsCount / blocksCount
+// See https://github.com/facebook/zstd/releases/tag/v1.3.4 about negative compression levels.
+if avgRowsPerBlock <= 10 {
+return -5
+}
+if avgRowsPerBlock <= 50 {
+return -2
+}
if avgRowsPerBlock <= 200 {
return -1
}
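Worked through: merging 1000 rows spread over 250 blocks gives avgRowsPerBlock = 4, which maps to zstd level -5; 30 rows per block maps to -2 and 100 rows per block to -1, trading compression ratio for speed on blocks with few rows.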


@ -993,7 +993,7 @@ func (s *Storage) mustLoadCache(info, name string, sizeBytes int) *workingsetcac
path := s.cachePath + "/" + name
logger.Infof("loading %s cache from %q...", info, path)
startTime := time.Now()
-c := workingsetcache.Load(path, sizeBytes, time.Hour)
+c := workingsetcache.Load(path, sizeBytes)
var cs fastcache.Stats
c.UpdateStats(&cs)
logger.Infof("loaded %s cache from %q in %.3f seconds; entriesCount: %d; sizeBytes: %d",


@ -49,7 +49,10 @@ func convertToCompositeTagFilters(tfs *TagFilters) []*TagFilters {
hasPositiveFilter = true
}
}
-if len(names) == 0 {
+// If tfs have no filters on __name__ or have no non-negative filters,
+// then it is impossible to construct composite tag filter.
+// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2238
+if len(names) == 0 || !hasPositiveFilter {
atomic.AddUint64(&compositeFilterMissingConversions, 1)
return []*TagFilters{tfs}
}
@ -61,7 +64,7 @@ func convertToCompositeTagFilters(tfs *TagFilters) []*TagFilters {
tfsNew := make([]tagFilter, 0, len(tfs.tfs))
for _, tf := range tfs.tfs {
if len(tf.key) == 0 {
-if !hasPositiveFilter || tf.isNegative {
+if tf.isNegative {
// Negative filters on metric name cannot be used for building composite filter, so leave them as is.
tfsNew = append(tfsNew, tf)
continue
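The test update below illustrates the fix: a filter set such as `{__name__=~"bar|foo", foo!="abc"}` contains no positive non-name filter, so the conversion is now skipped up front and the negative filter keeps its plain `foo` key instead of being expanded into per-name composite keys like `\xfe\x03barfoo`.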


@ -185,7 +185,7 @@ func TestConvertToCompositeTagFilters(t *testing.T) {
IsRegexp: false,
},
{
-Key: []byte("\xfe\x03barfoo"),
+Key: []byte("foo"),
Value: []byte("abc"),
IsNegative: true,
IsRegexp: false,
@ -588,21 +588,7 @@ func TestConvertToCompositeTagFilters(t *testing.T) {
IsRegexp: true,
},
{
-Key: []byte("\xfe\x03barfoo"),
-Value: []byte("abc"),
-IsNegative: true,
-IsRegexp: false,
-},
-},
-{
-{
-Key: nil,
-Value: []byte("bar|foo"),
-IsNegative: false,
-IsRegexp: true,
-},
-{
-Key: []byte("\xfe\x03foofoo"),
+Key: []byte("foo"),
Value: []byte("abc"),
IsNegative: true,
IsRegexp: false,


@ -17,6 +17,8 @@ const (
whole = 2
)

+const defaultExpireDuration = 20 * time.Minute
+
// Cache is a cache for working set entries.
//
// The cache evicts inactive entries after the given expireDuration.
@ -48,10 +50,18 @@ type Cache struct {
}

// Load loads the cache from filePath and limits its size to maxBytes
+// and evicts inactive entries in 20 minutes.
+//
+// Stop must be called on the returned cache when it is no longer needed.
+func Load(filePath string, maxBytes int) *Cache {
+return LoadWithExpire(filePath, maxBytes, defaultExpireDuration)
+}
+
+// LoadWithExpire loads the cache from filePath and limits its size to maxBytes
// and evicts inactive entries after expireDuration.
//
// Stop must be called on the returned cache when it is no longer needed.
-func Load(filePath string, maxBytes int, expireDuration time.Duration) *Cache {
+func LoadWithExpire(filePath string, maxBytes int, expireDuration time.Duration) *Cache {
curr := fastcache.LoadFromFileOrNew(filePath, maxBytes)
var cs fastcache.Stats
curr.UpdateStats(&cs)
@ -60,8 +70,10 @@ func Load(filePath string, maxBytes int, expireDuration time.Duration) *Cache {
// The cache couldn't be loaded with maxBytes size.
// This may mean that the cache is split into curr and prev caches.
// Try loading it again with maxBytes / 2 size.
-curr := fastcache.LoadFromFileOrNew(filePath, maxBytes/2)
-prev := fastcache.New(maxBytes / 2)
+// Put the loaded cache into `prev` instead of `curr`
+// in order to limit the growth of the cache for the current period of time.
+prev := fastcache.LoadFromFileOrNew(filePath, maxBytes/2)
+curr := fastcache.New(maxBytes / 2)
c := newCacheInternal(curr, prev, split, maxBytes)
c.runWatchers(expireDuration)
return c
@ -74,11 +86,18 @@ func Load(filePath string, maxBytes int, expireDuration time.Duration) *Cache {
return newCacheInternal(curr, prev, whole, maxBytes)
}

-// New creates new cache with the given maxBytes capacity and the given expireDuration
+// New creates new cache with the given maxBytes capacity.
+//
+// Stop must be called on the returned cache when it is no longer needed.
+func New(maxBytes int) *Cache {
+return NewWithExpire(maxBytes, defaultExpireDuration)
+}
+
+// NewWithExpire creates new cache with the given maxBytes capacity and the given expireDuration
// for inactive entries.
//
// Stop must be called on the returned cache when it is no longer needed.
-func New(maxBytes int, expireDuration time.Duration) *Cache {
+func NewWithExpire(maxBytes int, expireDuration time.Duration) *Cache {
curr := fastcache.New(maxBytes / 2)
prev := fastcache.New(1024)
c := newCacheInternal(curr, prev, split, maxBytes)
@ -153,6 +172,9 @@ func (c *Cache) cacheSizeWatcher() {
return
case <-t.C:
}
+if c.loadMode() != split {
+continue
+}
var cs fastcache.Stats
curr := c.curr.Load().(*fastcache.Cache)
curr.UpdateStats(&cs)
@ -169,10 +191,10 @@ func (c *Cache) cacheSizeWatcher() {
// Do this in the following steps:
// 1) switch to mode=switching
// 2) move curr cache to prev
-// 3) create curr with the double size
-// 4) wait until curr size exceeds maxBytesSize, i.e. it is populated with new data
+// 3) create curr cache with doubled size
+// 4) wait until curr cache size exceeds maxBytesSize, i.e. it is populated with new data
// 5) switch to mode=whole
-// 6) drop prev
+// 6) drop prev cache
c.mu.Lock()
c.setMode(switching)
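A small sketch of the resulting API split, assuming the import path github.com/VictoriaMetrics/VictoriaMetrics/lib/workingsetcache; the sizes are arbitrary:

```go
package main

import (
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/workingsetcache"
)

func main() {
	// New uses the default 20-minute expiry for inactive entries.
	c := workingsetcache.New(64 * 1024 * 1024)
	defer c.Stop()

	// NewWithExpire keeps the previous explicit-expiry behavior.
	cc := workingsetcache.NewWithExpire(64*1024*1024, 10*time.Minute)
	defer cc.Stop()
}
```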


@ -1,5 +1,12 @@
# Changes

+## [0.3.0](https://github.com/googleapis/google-cloud-go/compare/iam/v0.2.0...iam/v0.3.0) (2022-02-23)
+
+### Features
+
+* **iam:** set versionClient to module version ([55f0d92](https://github.com/googleapis/google-cloud-go/commit/55f0d92bf112f14b024b4ab0076c9875a17423c9))
+
## [0.2.0](https://github.com/googleapis/google-cloud-go/compare/iam/v0.1.1...iam/v0.2.0) (2022-02-14)


@ -170,6 +170,9 @@ type Config struct {
//
// For example S3's X-Amz-Meta prefixed header will be unmarshaled to lower case
// Metadata member's map keys. The value of the header in the map is unaffected.
+//
+// The AWS SDK for Go v2 uses lower case header maps by default. The v1
+// SDK provides this opt-in for this option, for backwards compatibility.
LowerCaseHeaderMaps *bool

// Set this to `true` to disable the EC2Metadata client from overriding the


@ -814,6 +814,61 @@ var awsPartition = partition{
}: endpoint{},
},
},
"amplifyuibuilder": service{
Endpoints: serviceEndpoints{
endpointKey{
Region: "ap-northeast-1",
}: endpoint{},
endpointKey{
Region: "ap-northeast-2",
}: endpoint{},
endpointKey{
Region: "ap-south-1",
}: endpoint{},
endpointKey{
Region: "ap-southeast-1",
}: endpoint{},
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
endpointKey{
Region: "ca-central-1",
}: endpoint{},
endpointKey{
Region: "eu-central-1",
}: endpoint{},
endpointKey{
Region: "eu-north-1",
}: endpoint{},
endpointKey{
Region: "eu-west-1",
}: endpoint{},
endpointKey{
Region: "eu-west-2",
}: endpoint{},
endpointKey{
Region: "eu-west-3",
}: endpoint{},
endpointKey{
Region: "me-south-1",
}: endpoint{},
endpointKey{
Region: "sa-east-1",
}: endpoint{},
endpointKey{
Region: "us-east-1",
}: endpoint{},
endpointKey{
Region: "us-east-2",
}: endpoint{},
endpointKey{
Region: "us-west-1",
}: endpoint{},
endpointKey{
Region: "us-west-2",
}: endpoint{},
},
},
"api.detective": service{ "api.detective": service{
Defaults: endpointDefaults{ Defaults: endpointDefaults{
defaultKey{}: endpoint{ defaultKey{}: endpoint{
@ -1669,6 +1724,147 @@ var awsPartition = partition{
},
},
},
"api.tunneling.iot": service{
Defaults: endpointDefaults{
defaultKey{}: endpoint{},
defaultKey{
Variant: fipsVariant,
}: endpoint{
Hostname: "api.tunneling.iot-fips.{region}.{dnsSuffix}",
},
},
Endpoints: serviceEndpoints{
endpointKey{
Region: "ap-east-1",
}: endpoint{},
endpointKey{
Region: "ap-northeast-1",
}: endpoint{},
endpointKey{
Region: "ap-northeast-2",
}: endpoint{},
endpointKey{
Region: "ap-south-1",
}: endpoint{},
endpointKey{
Region: "ap-southeast-1",
}: endpoint{},
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
endpointKey{
Region: "ca-central-1",
}: endpoint{},
endpointKey{
Region: "ca-central-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "api.tunneling.iot-fips.ca-central-1.amazonaws.com",
},
endpointKey{
Region: "eu-central-1",
}: endpoint{},
endpointKey{
Region: "eu-north-1",
}: endpoint{},
endpointKey{
Region: "eu-west-1",
}: endpoint{},
endpointKey{
Region: "eu-west-2",
}: endpoint{},
endpointKey{
Region: "eu-west-3",
}: endpoint{},
endpointKey{
Region: "fips-ca-central-1",
}: endpoint{
Hostname: "api.tunneling.iot-fips.ca-central-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "ca-central-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-east-1",
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-east-2",
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-east-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-east-2",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-west-1",
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-west-2",
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-west-2.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-west-2",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "me-south-1",
}: endpoint{},
endpointKey{
Region: "sa-east-1",
}: endpoint{},
endpointKey{
Region: "us-east-1",
}: endpoint{},
endpointKey{
Region: "us-east-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-east-1.amazonaws.com",
},
endpointKey{
Region: "us-east-2",
}: endpoint{},
endpointKey{
Region: "us-east-2",
Variant: fipsVariant,
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-east-2.amazonaws.com",
},
endpointKey{
Region: "us-west-1",
}: endpoint{},
endpointKey{
Region: "us-west-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-west-1.amazonaws.com",
},
endpointKey{
Region: "us-west-2",
}: endpoint{},
endpointKey{
Region: "us-west-2",
Variant: fipsVariant,
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-west-2.amazonaws.com",
},
},
},
"apigateway": service{ "apigateway": service{
Endpoints: serviceEndpoints{ Endpoints: serviceEndpoints{
endpointKey{ endpointKey{
@ -2810,6 +3006,9 @@ var awsPartition = partition{
},
"braket": service{
Endpoints: serviceEndpoints{
endpointKey{
Region: "eu-west-2",
}: endpoint{},
endpointKey{
Region: "us-east-1",
}: endpoint{},
@ -13574,6 +13773,9 @@ var awsPartition = partition{
endpointKey{
Region: "ap-southeast-2",
}: endpoint{},
endpointKey{
Region: "ap-southeast-3",
}: endpoint{},
endpointKey{
Region: "ca-central-1",
}: endpoint{},
@ -21963,6 +22165,16 @@ var awscnPartition = partition{
}: endpoint{},
},
},
"api.tunneling.iot": service{
Endpoints: serviceEndpoints{
endpointKey{
Region: "cn-north-1",
}: endpoint{},
endpointKey{
Region: "cn-northwest-1",
}: endpoint{},
},
},
"apigateway": service{ "apigateway": service{
Endpoints: serviceEndpoints{ Endpoints: serviceEndpoints{
endpointKey{ endpointKey{
@ -23471,6 +23683,14 @@ var awsusgovPartition = partition{
},
},
"acm": service{
Defaults: endpointDefaults{
defaultKey{}: endpoint{},
defaultKey{
Variant: fipsVariant,
}: endpoint{
Hostname: "acm.{region}.{dnsSuffix}",
},
},
Endpoints: serviceEndpoints{
endpointKey{
Region: "us-gov-east-1",
@ -23761,6 +23981,54 @@ var awsusgovPartition = partition{
},
},
},
"api.tunneling.iot": service{
Defaults: endpointDefaults{
defaultKey{}: endpoint{},
defaultKey{
Variant: fipsVariant,
}: endpoint{
Hostname: "api.tunneling.iot-fips.{region}.{dnsSuffix}",
},
},
Endpoints: serviceEndpoints{
endpointKey{
Region: "fips-us-gov-east-1",
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-gov-east-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-east-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "fips-us-gov-west-1",
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-gov-west-1.amazonaws.com",
CredentialScope: credentialScope{
Region: "us-gov-west-1",
},
Deprecated: boxedTrue,
},
endpointKey{
Region: "us-gov-east-1",
}: endpoint{},
endpointKey{
Region: "us-gov-east-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-gov-east-1.amazonaws.com",
},
endpointKey{
Region: "us-gov-west-1",
}: endpoint{},
endpointKey{
Region: "us-gov-west-1",
Variant: fipsVariant,
}: endpoint{
Hostname: "api.tunneling.iot-fips.us-gov-west-1.amazonaws.com",
},
},
},
"apigateway": service{ "apigateway": service{
Endpoints: serviceEndpoints{ Endpoints: serviceEndpoints{
endpointKey{ endpointKey{
@ -24697,6 +24965,14 @@ var awsusgovPartition = partition{
}, },
}, },
"ec2": service{ "ec2": service{
Defaults: endpointDefaults{
defaultKey{}: endpoint{},
defaultKey{
Variant: fipsVariant,
}: endpoint{
Hostname: "ec2.{region}.{dnsSuffix}",
},
},
			Endpoints: serviceEndpoints{
				endpointKey{
					Region: "us-gov-east-1",

View file

@@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"

// SDKVersion is the version of this SDK
-const SDKVersion = "1.43.3"
+const SDKVersion = "1.43.10"

View file

@@ -0,0 +1,10 @@
//go:build go1.18
// +build go1.18

package eventstreamapi
import "github.com/aws/aws-sdk-go/aws/request"
// This is a no-op for Go 1.18 and above.
func ApplyHTTPTransportFixes(r *request.Request) {
}

View file

@@ -0,0 +1,19 @@
//go:build !go1.18
// +build !go1.18

package eventstreamapi
import "github.com/aws/aws-sdk-go/aws/request"
// ApplyHTTPTransportFixes applies fixes to the HTTP request for proper event
// stream functionality. The Go 1.15 through 1.17 HTTP client could hang forever
// when an HTTP/2 connection failed with a non-200 status code and an error.
// Setting Expect: 100-Continue allows the HTTP client to gracefully handle the
// non-200 status code and close the connection.
//
// This is a no-op for Go 1.18 and above.
func ApplyHTTPTransportFixes(r *request.Request) {
r.Handlers.Sign.PushBack(func(r *request.Request) {
r.HTTPRequest.Header.Set("Expect", "100-Continue")
})
}
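For orientation, a sketch of how this helper attaches to an event-stream request. The SDK's generated operation code is what normally calls it, so the explicit call below is purely illustrative, and the bucket, key, and query are made-up placeholders:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/private/protocol/eventstream/eventstreamapi"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	// SelectObjectContent responds with an event stream.
	req, _ := svc.SelectObjectContentRequest(&s3.SelectObjectContentInput{
		Bucket:         aws.String("example-bucket"), // illustrative
		Key:            aws.String("data.csv"),
		Expression:     aws.String("SELECT * FROM S3Object"),
		ExpressionType: aws.String(s3.ExpressionTypeSql),
	})

	// On Go 1.15 through 1.17 this pushes a Sign handler that sets
	// "Expect: 100-Continue"; on Go 1.18+ it is a no-op (see above).
	eventstreamapi.ApplyHTTPTransportFixes(req)

	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
}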

File diff suppressed because it is too large

View file

@@ -244,6 +244,10 @@ type S3API interface {
	GetObjectAclWithContext(aws.Context, *s3.GetObjectAclInput, ...request.Option) (*s3.GetObjectAclOutput, error)
	GetObjectAclRequest(*s3.GetObjectAclInput) (*request.Request, *s3.GetObjectAclOutput)

	GetObjectAttributes(*s3.GetObjectAttributesInput) (*s3.GetObjectAttributesOutput, error)
	GetObjectAttributesWithContext(aws.Context, *s3.GetObjectAttributesInput, ...request.Option) (*s3.GetObjectAttributesOutput, error)
	GetObjectAttributesRequest(*s3.GetObjectAttributesInput) (*request.Request, *s3.GetObjectAttributesOutput)

	GetObjectLegalHold(*s3.GetObjectLegalHoldInput) (*s3.GetObjectLegalHoldOutput, error)
	GetObjectLegalHoldWithContext(aws.Context, *s3.GetObjectLegalHoldInput, ...request.Option) (*s3.GetObjectLegalHoldOutput, error)
	GetObjectLegalHoldRequest(*s3.GetObjectLegalHoldInput) (*request.Request, *s3.GetObjectLegalHoldOutput)

View file

@@ -124,6 +124,14 @@ func WithUploaderRequestOptions(opts ...request.Option) func(*Uploader) {

// The Uploader structure that calls Upload(). It is safe to call Upload()
// on this structure for multiple objects and across concurrent goroutines.
// Mutating the Uploader's properties is not safe to be done concurrently.
//
// The ContentMD5 member for pre-computed MD5 checksums will be ignored for
// multipart uploads. For objects uploaded in a single part, the ContentMD5
// will be used.
//
// The Checksum members for pre-computed checksums will be ignored for
// multipart uploads. For objects uploaded in a single part, the checksum
// member will be included in the request.
type Uploader struct {
	// The buffer size (in bytes) to use when buffering data into chunks and
	// sending them as parts to S3. The minimum allowed part size is 5MB, and
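Concretely, the single-part case looks like this; a minimal sketch assuming the default part size, so the tiny body stays single-part (bucket and key names are illustrative):

package main

import (
	"bytes"
	"crypto/md5"
	"encoding/base64"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession())
	uploader := s3manager.NewUploader(sess)

	body := []byte("hello world")
	sum := md5.Sum(body)

	// ContentMD5 is honored here because the body is far below the part
	// size, so the uploader issues a single PutObject. For a larger,
	// multipart upload the same field would be ignored.
	_, err := uploader.Upload(&s3manager.UploadInput{
		Bucket:     aws.String("example-bucket"),
		Key:        aws.String("hello.txt"),
		Body:       bytes.NewReader(body),
		ContentMD5: aws.String(base64.StdEncoding.EncodeToString(sum[:])),
	})
	if err != nil {
		log.Fatal(err)
	}
}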

View file

@@ -11,6 +11,14 @@ import (

// to an object in an Amazon S3 bucket. This type is similar to the s3
// package's PutObjectInput with the exception that the Body member is an
// io.Reader instead of an io.ReadSeeker.
//
// The ContentMD5 member for pre-computed MD5 checksums will be ignored for
// multipart uploads. For objects uploaded in a single part, the ContentMD5
// will be used.
//
// The Checksum members for pre-computed checksums will be ignored for
// multipart uploads. For objects uploaded in a single part, the checksum
// member will be included in the request.
type UploadInput struct {
	_ struct{} `locationName:"PutObjectRequest" type:"structure" payload:"Body"`
@@ -35,9 +43,9 @@ type UploadInput struct {
	// When using this action with Amazon S3 on Outposts, you must direct requests
	// to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form
	// AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When
-	// using this action using S3 on Outposts through the Amazon Web Services SDKs,
+	// using this action with S3 on Outposts through the Amazon Web Services SDKs,
	// you provide the Outposts bucket ARN in place of the bucket name. For more
-	// information about S3 on Outposts ARNs, see Using S3 on Outposts (https://docs.aws.amazon.com/AmazonS3/latest/userguide/S3onOutposts.html)
+	// information about S3 on Outposts ARNs, see Using Amazon S3 on Outposts (https://docs.aws.amazon.com/AmazonS3/latest/userguide/S3onOutposts.html)
	// in the Amazon S3 User Guide.
	//
	// Bucket is a required field
@@ -57,6 +65,51 @@ type UploadInput struct {
	// (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9).
	CacheControl *string `location:"header" locationName:"Cache-Control" type:"string"`
// Indicates the algorithm used to create the checksum for the object when using
// the SDK. This header will not provide any additional functionality if not
// using the SDK. When sending this header, there must be a corresponding x-amz-checksum
// or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with
// the HTTP status code 400 Bad Request. For more information, see Checking
// object integrity (https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html)
// in the Amazon S3 User Guide.
//
// If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm
// parameter.
//
	// The AWS SDK for Go v1 does not support automatically computing the request
	// payload checksum. This feature is available in the AWS SDK for Go v2. If
	// a value is specified for this parameter, the matching algorithm's checksum
	// member must be populated with the algorithm's checksum of the request payload.
ChecksumAlgorithm *string `location:"header" locationName:"x-amz-sdk-checksum-algorithm" type:"string" enum:"ChecksumAlgorithm"`
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies
// the base64-encoded, 32-bit CRC32 checksum of the object. For more information,
// see Checking object integrity (https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html)
// in the Amazon S3 User Guide.
ChecksumCRC32 *string `location:"header" locationName:"x-amz-checksum-crc32" type:"string"`
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies
// the base64-encoded, 32-bit CRC32C checksum of the object. For more information,
// see Checking object integrity (https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html)
// in the Amazon S3 User Guide.
ChecksumCRC32C *string `location:"header" locationName:"x-amz-checksum-crc32c" type:"string"`
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies
// the base64-encoded, 160-bit SHA-1 digest of the object. For more information,
// see Checking object integrity (https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html)
// in the Amazon S3 User Guide.
ChecksumSHA1 *string `location:"header" locationName:"x-amz-checksum-sha1" type:"string"`
// This header can be used as a data integrity check to verify that the data
// received is the same data that was originally sent. This header specifies
// the base64-encoded, 256-bit SHA-256 digest of the object. For more information,
// see Checking object integrity (https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html)
// in the Amazon S3 User Guide.
ChecksumSHA256 *string `location:"header" locationName:"x-amz-checksum-sha256" type:"string"`
	// Specifies presentational information for the object. For more information,
	// see http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1 (http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.5.1).
	ContentDisposition *string `location:"header" locationName:"Content-Disposition" type:"string"`
@@ -76,6 +129,9 @@ type UploadInput struct {
	// it is optional, we recommend using the Content-MD5 mechanism as an end-to-end
	// integrity check. For more information about REST request authentication,
	// see REST Authentication (https://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html).
//
	// If the ContentMD5 is provided for a multipart upload, it will be ignored.
	// For objects uploaded in a single part, the ContentMD5 will be used.
	ContentMD5 *string `location:"header" locationName:"Content-MD5" type:"string"`

	// A standard MIME type describing the format of the contents. For more information,
@@ -83,8 +139,8 @@ type UploadInput struct {
	ContentType *string `location:"header" locationName:"Content-Type" type:"string"`

	// The account ID of the expected bucket owner. If the bucket is owned by a
-	// different account, the request will fail with an HTTP 403 (Access Denied)
-	// error.
+	// different account, the request fails with the HTTP status code 403 Forbidden
+	// (access denied).
	ExpectedBucketOwner *string `location:"header" locationName:"x-amz-expected-bucket-owner" type:"string"`

	// The date and time at which the object is no longer cacheable. For more information,
@@ -132,8 +188,8 @@ type UploadInput struct {
	// Confirms that the requester knows that they will be charged for the request.
	// Bucket owners need not specify this parameter in their requests. For information
-	// about downloading objects from requester pays buckets, see Downloading Objects
-	// in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
+	// about downloading objects from Requester Pays buckets, see Downloading Objects
+	// in Requester Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
	// in the Amazon S3 User Guide.
	RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"`
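Putting the new checksum members together: since the v1 SDK will not compute these digests, the caller pre-computes one and sets the matching member; a sketch under the same single-part assumption (bucket and key names are illustrative):

package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/base64"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession())
	uploader := s3manager.NewUploader(sess)

	body := []byte("payload")
	sum := sha256.Sum256(body)

	_, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String("example-bucket"),
		Key:    aws.String("data.bin"),
		Body:   bytes.NewReader(body),
		// The algorithm names which x-amz-checksum-* header must be present;
		// ChecksumSHA256 carries the pre-computed digest itself.
		ChecksumAlgorithm: aws.String("SHA256"),
		ChecksumSHA256:    aws.String(base64.StdEncoding.EncodeToString(sum[:])),
	})
	if err != nil {
		log.Fatal(err)
	}
}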

vendor/golang.org/x/net/http2/go118.go generated vendored Normal file
View file

@@ -0,0 +1,17 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build go1.18
// +build go1.18

package http2
import (
"crypto/tls"
"net"
)
func tlsUnderlyingConn(tc *tls.Conn) net.Conn {
return tc.NetConn()
}

vendor/golang.org/x/net/http2/not_go118.go generated vendored Normal file
View file

@@ -0,0 +1,17 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !go1.18
// +build !go1.18

package http2
import (
"crypto/tls"
"net"
)
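// tls.Conn only gained the NetConn accessor in Go 1.18, so on earlier
// toolchains there is no portable way to reach the wrapped net.Conn;
// returning nil makes the caller's forced close a no-op on those versions.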
func tlsUnderlyingConn(tc *tls.Conn) net.Conn {
return nil
}

View file

@@ -735,7 +735,6 @@ func (cc *ClientConn) healthCheck() {
	err := cc.Ping(ctx)
	if err != nil {
		cc.closeForLostPing()
-		cc.t.connPool().MarkDead(cc)
		return
	}
}
@@ -907,6 +906,24 @@ func (cc *ClientConn) onIdleTimeout() {
	cc.closeIfIdle()
}
func (cc *ClientConn) closeConn() error {
t := time.AfterFunc(250*time.Millisecond, cc.forceCloseConn)
defer t.Stop()
return cc.tconn.Close()
}
// A tls.Conn.Close can hang for a long time if the peer is unresponsive.
// Try to shut it down more aggressively.
func (cc *ClientConn) forceCloseConn() {
tc, ok := cc.tconn.(*tls.Conn)
if !ok {
return
}
if nc := tlsUnderlyingConn(tc); nc != nil {
nc.Close()
}
}
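The essence of the two functions above is a timer-bounded close; a standalone distillation with illustrative names, independent of this package:

package main

import (
	"fmt"
	"io"
	"time"
)

// closeWithDeadline calls c.Close but schedules force to run if Close has
// not returned within d; the timer is stopped as soon as Close returns.
func closeWithDeadline(c io.Closer, d time.Duration, force func()) error {
	t := time.AfterFunc(d, force)
	defer t.Stop()
	return c.Close()
}

type slowCloser struct{}

func (slowCloser) Close() error { time.Sleep(time.Second); return nil }

func main() {
	err := closeWithDeadline(slowCloser{}, 250*time.Millisecond, func() {
		fmt.Println("deadline hit: force-closing the underlying connection")
	})
	fmt.Println("Close returned:", err)
}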
func (cc *ClientConn) closeIfIdle() {
	cc.mu.Lock()
	if len(cc.streams) > 0 || cc.streamsReserved > 0 {
@@ -921,7 +938,7 @@ func (cc *ClientConn) closeIfIdle() {
	if VerboseLogs {
		cc.vlogf("http2: Transport closing idle conn %p (forSingleUse=%v, maxStream=%v)", cc, cc.singleUse, nextID-2)
	}
-	cc.tconn.Close()
+	cc.closeConn()
}

func (cc *ClientConn) isDoNotReuseAndIdle() bool {
@@ -938,7 +955,7 @@ func (cc *ClientConn) Shutdown(ctx context.Context) error {
		return err
	}
	// Wait for all in-flight streams to complete or connection to close
-	done := make(chan error, 1)
+	done := make(chan struct{})
	cancelled := false // guarded by cc.mu
	go func() {
		cc.mu.Lock()
@@ -946,7 +963,7 @@ func (cc *ClientConn) Shutdown(ctx context.Context) error {
		for {
			if len(cc.streams) == 0 || cc.closed {
				cc.closed = true
-				done <- cc.tconn.Close()
+				close(done)
				break
			}
			if cancelled {
@@ -957,8 +974,8 @@ func (cc *ClientConn) Shutdown(ctx context.Context) error {
	}()
	shutdownEnterWaitStateHook()
	select {
-	case err := <-done:
-		return err
+	case <-done:
+		return cc.closeConn()
	case <-ctx.Done():
		cc.mu.Lock()
		// Free the goroutine above
@@ -1001,9 +1018,9 @@ func (cc *ClientConn) closeForError(err error) error {
	for _, cs := range cc.streams {
		cs.abortStreamLocked(err)
	}
-	defer cc.cond.Broadcast()
-	defer cc.mu.Unlock()
-	return cc.tconn.Close()
+	cc.cond.Broadcast()
+	cc.mu.Unlock()
+	return cc.closeConn()
}

// Close closes the client connection immediately.
@@ -1978,7 +1995,7 @@ func (cc *ClientConn) forgetStreamID(id uint32) {
			cc.vlogf("http2: Transport closing idle conn %p (forSingleUse=%v, maxStream=%v)", cc, cc.singleUse, cc.nextStreamID-2)
		}
		cc.closed = true
-		defer cc.tconn.Close()
+		defer cc.closeConn()
	}
	cc.mu.Unlock()
@@ -2025,8 +2042,8 @@ func isEOFOrNetReadError(err error) bool {

func (rl *clientConnReadLoop) cleanup() {
	cc := rl.cc
-	defer cc.tconn.Close()
-	defer cc.t.connPool().MarkDead(cc)
+	cc.t.connPool().MarkDead(cc)
+	defer cc.closeConn()
	defer close(cc.readerDone)
	if cc.idleTimer != nil {

View file

@@ -194,3 +194,26 @@ func ioctlIfreqData(fd int, req uint, value *ifreqData) error {
	// identical so pass *IfreqData directly.
	return ioctlPtr(fd, req, unsafe.Pointer(value))
}
// IoctlKCMClone attaches a new file descriptor to a multiplexor by cloning an
// existing KCM socket, returning a structure containing the file descriptor of
// the new socket.
func IoctlKCMClone(fd int) (*KCMClone, error) {
var info KCMClone
if err := ioctlPtr(fd, SIOCKCMCLONE, unsafe.Pointer(&info)); err != nil {
return nil, err
}
return &info, nil
}
// IoctlKCMAttach attaches a TCP socket and associated BPF program file
// descriptor to a multiplexor.
func IoctlKCMAttach(fd int, info KCMAttach) error {
return ioctlPtr(fd, SIOCKCMATTACH, unsafe.Pointer(&info))
}
// IoctlKCMUnattach unattaches a TCP socket file descriptor from a multiplexor.
func IoctlKCMUnattach(fd int, info KCMUnattach) error {
return ioctlPtr(fd, SIOCKCMUNATTACH, unsafe.Pointer(&info))
}
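A sketch of how the three wrappers compose on Linux; tcpFD and bpfFD are placeholders for a connected TCP socket and a loaded BPF program that delimits messages (both assumed to exist already):

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Create the KCM multiplexor socket.
	kcm, err := unix.Socket(unix.AF_KCM, unix.SOCK_DGRAM, unix.KCMPROTO_CONNECTED)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(kcm)

	var tcpFD, bpfFD int // placeholders; see lead-in above

	// Feed the TCP connection into the multiplexor, using the BPF program
	// to find message boundaries.
	if err := unix.IoctlKCMAttach(kcm, unix.KCMAttach{Fd: int32(tcpFD), Bpf_fd: int32(bpfFD)}); err != nil {
		log.Fatal(err)
	}

	// Clone a second descriptor backed by the same multiplexor, e.g. for
	// another worker goroutine or process.
	clone, err := unix.IoctlKCMClone(kcm)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(int(clone.Fd))
}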

View file

@@ -205,6 +205,7 @@ struct ltchars {
#include <linux/bpf.h>
#include <linux/can.h>
#include <linux/can/error.h>
#include <linux/can/netlink.h>
#include <linux/can/raw.h>
#include <linux/capability.h>
#include <linux/cryptouser.h>
@@ -231,6 +232,7 @@ struct ltchars {
#include <linux/if_packet.h>
#include <linux/if_xdp.h>
#include <linux/input.h>
#include <linux/kcm.h>
#include <linux/kexec.h>
#include <linux/keyctl.h>
#include <linux/landlock.h>
@@ -503,6 +505,7 @@ ccflags="$@"
		$2 ~ /^O?XTABS$/ ||
		$2 ~ /^TC[IO](ON|OFF)$/ ||
		$2 ~ /^IN_/ ||
		$2 ~ /^KCM/ ||
		$2 ~ /^LANDLOCK_/ ||
		$2 ~ /^LOCK_(SH|EX|NB|UN)$/ ||
		$2 ~ /^LO_(KEY|NAME)_SIZE$/ ||

View file

@@ -260,6 +260,17 @@ const (
	BUS_USB     = 0x3
	BUS_VIRTUAL = 0x6
	CAN_BCM     = 0x2
CAN_CTRLMODE_3_SAMPLES = 0x4
CAN_CTRLMODE_BERR_REPORTING = 0x10
CAN_CTRLMODE_CC_LEN8_DLC = 0x100
CAN_CTRLMODE_FD = 0x20
CAN_CTRLMODE_FD_NON_ISO = 0x80
CAN_CTRLMODE_LISTENONLY = 0x2
CAN_CTRLMODE_LOOPBACK = 0x1
CAN_CTRLMODE_ONE_SHOT = 0x8
CAN_CTRLMODE_PRESUME_ACK = 0x40
CAN_CTRLMODE_TDC_AUTO = 0x200
CAN_CTRLMODE_TDC_MANUAL = 0x400
	CAN_EFF_FLAG    = 0x80000000
	CAN_EFF_ID_BITS = 0x1d
	CAN_EFF_MASK    = 0x1fffffff
@@ -337,6 +348,7 @@ const (
	CAN_RTR_FLAG    = 0x40000000
	CAN_SFF_ID_BITS = 0xb
	CAN_SFF_MASK    = 0x7ff
	CAN_TERMINATION_DISABLED = 0x0
	CAN_TP16          = 0x3
	CAN_TP20          = 0x4
	CAP_AUDIT_CONTROL = 0x1e
@@ -1274,6 +1286,8 @@ const (
	IUTF8              = 0x4000
	IXANY              = 0x800
	JFFS2_SUPER_MAGIC  = 0x72b6
	KCMPROTO_CONNECTED = 0x0
	KCM_RECV_DISABLE   = 0x1
	KEXEC_ARCH_386     = 0x30000
	KEXEC_ARCH_68K     = 0x40000
	KEXEC_ARCH_AARCH64 = 0xb70000
@@ -2446,6 +2460,9 @@ const (
	SIOCGSTAMPNS     = 0x8907
	SIOCGSTAMPNS_OLD = 0x8907
	SIOCGSTAMP_OLD   = 0x8906
	SIOCKCMATTACH    = 0x89e0
	SIOCKCMCLONE     = 0x89e2
	SIOCKCMUNATTACH  = 0x89e1
	SIOCOUTQNSD      = 0x894b
	SIOCPROTOPRIVATE = 0x89e0
	SIOCRTMSG        = 0x890d

View file

@@ -3771,6 +3771,8 @@ const (
	ETHTOOL_A_TUNNEL_INFO_MAX = 0x2
)

const SPEED_UNKNOWN = -0x1

type EthtoolDrvinfo struct {
	Cmd    uint32
	Driver [32]byte
@@ -4070,3 +4072,91 @@ const (
	NL_POLICY_TYPE_ATTR_MASK = 0xc
	NL_POLICY_TYPE_ATTR_MAX  = 0xc
)
type CANBitTiming struct {
Bitrate uint32
Sample_point uint32
Tq uint32
Prop_seg uint32
Phase_seg1 uint32
Phase_seg2 uint32
Sjw uint32
Brp uint32
}
type CANBitTimingConst struct {
Name [16]uint8
Tseg1_min uint32
Tseg1_max uint32
Tseg2_min uint32
Tseg2_max uint32
Sjw_max uint32
Brp_min uint32
Brp_max uint32
Brp_inc uint32
}
type CANClock struct {
Freq uint32
}
type CANBusErrorCounters struct {
Txerr uint16
Rxerr uint16
}
type CANCtrlMode struct {
Mask uint32
Flags uint32
}
type CANDeviceStats struct {
Bus_error uint32
Error_warning uint32
Error_passive uint32
Bus_off uint32
Arbitration_lost uint32
Restarts uint32
}
const (
CAN_STATE_ERROR_ACTIVE = 0x0
CAN_STATE_ERROR_WARNING = 0x1
CAN_STATE_ERROR_PASSIVE = 0x2
CAN_STATE_BUS_OFF = 0x3
CAN_STATE_STOPPED = 0x4
CAN_STATE_SLEEPING = 0x5
CAN_STATE_MAX = 0x6
)
const (
IFLA_CAN_UNSPEC = 0x0
IFLA_CAN_BITTIMING = 0x1
IFLA_CAN_BITTIMING_CONST = 0x2
IFLA_CAN_CLOCK = 0x3
IFLA_CAN_STATE = 0x4
IFLA_CAN_CTRLMODE = 0x5
IFLA_CAN_RESTART_MS = 0x6
IFLA_CAN_RESTART = 0x7
IFLA_CAN_BERR_COUNTER = 0x8
IFLA_CAN_DATA_BITTIMING = 0x9
IFLA_CAN_DATA_BITTIMING_CONST = 0xa
IFLA_CAN_TERMINATION = 0xb
IFLA_CAN_TERMINATION_CONST = 0xc
IFLA_CAN_BITRATE_CONST = 0xd
IFLA_CAN_DATA_BITRATE_CONST = 0xe
IFLA_CAN_BITRATE_MAX = 0xf
)
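For the shape of the control-mode plumbing, a small illustration: Mask selects which mode bits to change and Flags gives their new values, and the struct would travel in an IFLA_CAN_CTRLMODE rtnetlink attribute (the netlink round-trip itself is omitted here):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Enable CAN FD mode and explicitly clear listen-only mode.
	mode := unix.CANCtrlMode{
		Mask:  unix.CAN_CTRLMODE_FD | unix.CAN_CTRLMODE_LISTENONLY,
		Flags: unix.CAN_CTRLMODE_FD,
	}
	fmt.Printf("IFLA_CAN_CTRLMODE (0x%x) payload: %+v\n", unix.IFLA_CAN_CTRLMODE, mode)
}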
type KCMAttach struct {
Fd int32
Bpf_fd int32
}
type KCMUnattach struct {
Fd int32
}
type KCMClone struct {
Fd int32
}

View file

@@ -210,8 +210,8 @@ type PtraceFpregs struct {
}

type PtracePer struct {
-	_ [0]uint64
-	_ [32]byte
+	Control_regs [3]uint64
+	_ [8]byte
	Starting_addr uint64
	Ending_addr   uint64
	Perc_atmid    uint16

vendor/modules.txt vendored
View file

@@ -5,10 +5,10 @@ cloud.google.com/go/internal
cloud.google.com/go/internal/optional
cloud.google.com/go/internal/trace
cloud.google.com/go/internal/version
-# cloud.google.com/go/compute v1.4.0
+# cloud.google.com/go/compute v1.5.0
## explicit; go 1.15
cloud.google.com/go/compute/metadata
-# cloud.google.com/go/iam v0.2.0
+# cloud.google.com/go/iam v0.3.0
## explicit; go 1.15
cloud.google.com/go/iam
# cloud.google.com/go/storage v1.21.0
@@ -34,7 +34,7 @@ github.com/VictoriaMetrics/metricsql/binaryop
# github.com/VividCortex/ewma v1.2.0
## explicit; go 1.12
github.com/VividCortex/ewma
-# github.com/aws/aws-sdk-go v1.43.3
+# github.com/aws/aws-sdk-go v1.43.10
## explicit; go 1.11
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/arn
@@ -265,7 +265,7 @@ go.opencensus.io/trace/tracestate
go.uber.org/atomic
# go.uber.org/goleak v1.1.11-0.20210813005559-691160354723
## explicit; go 1.13
-# golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd
+# golang.org/x/net v0.0.0-20220225172249-27dd8689420f
## explicit; go 1.17
golang.org/x/net/context
golang.org/x/net/context/ctxhttp
@@ -277,7 +277,7 @@ golang.org/x/net/internal/socks
golang.org/x/net/internal/timeseries
golang.org/x/net/proxy
golang.org/x/net/trace
-# golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
+# golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b
## explicit; go 1.11
golang.org/x/oauth2
golang.org/x/oauth2/authhandler
@@ -290,7 +290,7 @@ golang.org/x/oauth2/jwt
# golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
## explicit
golang.org/x/sync/errgroup
-# golang.org/x/sys v0.0.0-20220222172238-00053529121e
+# golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9
## explicit; go 1.17
golang.org/x/sys/internal/unsafeheader
golang.org/x/sys/unix
@@ -338,7 +338,7 @@ google.golang.org/appengine/internal/socket
google.golang.org/appengine/internal/urlfetch
google.golang.org/appengine/socket
google.golang.org/appengine/urlfetch
-# google.golang.org/genproto v0.0.0-20220222154240-daf995802d7b
+# google.golang.org/genproto v0.0.0-20220302033224-9aa15565e42a
## explicit; go 1.15
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/genproto/googleapis/iam/v1