mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git
synced 2025-01-10 15:14:09 +00:00

Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files

This commit is contained in commit 1f0432b5c1

208 changed files with 5413 additions and 2389 deletions

README.md: 83 changed lines
@@ -275,7 +275,7 @@ It also provides the following features:

 - [query tracer](#query-tracing)
 - [top queries explorer](#top-queries)

-Graphs in vmui support scrolling and zooming:
+Graphs in `vmui` support scrolling and zooming:

 * Select the needed time range on the graph in order to zoom in on the selected time range. Hold `ctrl` (or `cmd` on MacOS) and scroll down in order to zoom out.
 * Hold `ctrl` (or `cmd` on MacOS) and scroll up in order to zoom in on the area under the cursor.
@@ -293,6 +293,8 @@ VMUI allows investigating correlations between multiple queries on the same graph:

 enter an additional query in the newly appeared input field and press `Enter`.
 Results for all the queries are displayed simultaneously on the same graph.
 Graphs for a particular query can be temporarily hidden by clicking the `eye` icon on the right side of the input field.
+When the `eye` icon is clicked while holding the `ctrl` key, query results for all the queries except the current one become hidden.

 See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
@@ -1363,18 +1365,50 @@ It is recommended passing different `-promscrape.cluster.name` values to HA pair

 ## Storage

-VictoriaMetrics stores time series data in [MergeTree](https://en.wikipedia.org/wiki/Log-structured_merge-tree)-like
-data structures. On insert, VictoriaMetrics accumulates up to 1s of data and dumps it on disk to
-the `<-storageDataPath>/data/small/YYYY_MM/` subdirectory, forming a `part` with the following
-name pattern: `rowsCount_blocksCount_minTimestamp_maxTimestamp`. Each part consists of two "columns":
-values and timestamps. These are sorted and compressed raw time series values. Additionally, the part contains
-index files for searching for specific series in the values and timestamps files.
-
-`Parts` are periodically merged into bigger parts. The resulting `part` is constructed
-under the `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` subdirectory.
-When the resulting `part` is complete, it is atomically moved from `tmp`
-to its own subdirectory, while the source parts are atomically removed. The end result is that the source
-parts are substituted by a single resulting bigger `part` in the `<-storageDataPath>/data/{small,big}/YYYY_MM/` directory.
+VictoriaMetrics buffers the ingested data in memory for up to a second. Then the buffered data is written to in-memory `parts`,
+which can be searched during queries. The in-memory `parts` are periodically persisted to disk, so they can survive an unclean shutdown
+such as an out-of-memory crash, hardware power loss or a `SIGKILL` signal. The interval for flushing the in-memory data to disk
+can be configured with the `-inmemoryDataFlushInterval` command-line flag (note that too short a flush interval may significantly increase disk IO).
+
+In-memory parts are persisted to disk into `part` directories under the `<-storageDataPath>/data/small/YYYY_MM/` folder,
+where `YYYY_MM` is the month partition for the stored data. For example, `2022_11` is the partition for `parts`
+with [raw samples](https://docs.victoriametrics.com/keyConcepts.html#raw-samples) from `November 2022`.
+
+The `part` directory has the following name pattern: `rowsCount_blocksCount_minTimestamp_maxTimestamp`, where:
+
+- `rowsCount` - the number of [raw samples](https://docs.victoriametrics.com/keyConcepts.html#raw-samples) stored in the part
+- `blocksCount` - the number of blocks stored in the part (see details about blocks below)
+- `minTimestamp` and `maxTimestamp` - the minimum and maximum timestamps across the raw samples stored in the part
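As a quick illustration of this naming scheme, the standalone Go sketch below decodes a part directory name. The helper and the example name are hypothetical, and treating the timestamps as unix milliseconds is an assumption layered on top of the docs above:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// decodePartName splits a part directory name of the form
// rowsCount_blocksCount_minTimestamp_maxTimestamp.
// Millisecond timestamps are an assumption, not stated by the docs above.
func decodePartName(name string) (string, error) {
	fields := strings.Split(name, "_")
	if len(fields) != 4 {
		return "", fmt.Errorf("unexpected part name %q", name)
	}
	rows, err := strconv.ParseInt(fields[0], 10, 64)
	if err != nil {
		return "", err
	}
	blocks, err := strconv.ParseInt(fields[1], 10, 64)
	if err != nil {
		return "", err
	}
	minTS, err := strconv.ParseInt(fields[2], 10, 64)
	if err != nil {
		return "", err
	}
	maxTS, err := strconv.ParseInt(fields[3], 10, 64)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("rows=%d blocks=%d min=%s max=%s",
		rows, blocks, time.UnixMilli(minTS).UTC(), time.UnixMilli(maxTS).UTC()), nil
}

func main() {
	// Hypothetical example name; real names live under <-storageDataPath>/data/small/YYYY_MM/.
	s, err := decodePartName("8640_2_1667260800000_1667264400000")
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```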
Each `part` consists of `blocks` sorted by internal time series id (aka `TSID`).
Each `block` contains up to 8K [raw samples](https://docs.victoriametrics.com/keyConcepts.html#raw-samples),
which belong to a single [time series](https://docs.victoriametrics.com/keyConcepts.html#time-series).
Raw samples in each block are sorted by `timestamp`. Blocks for the same time series are sorted
by the `timestamp` of the first sample. Timestamps and values for all the blocks
are stored in [compressed form](https://faun.pub/victoriametrics-achieving-better-compression-for-time-series-data-than-gorilla-317bc1f95932)
in separate files under the `part` directory - `timestamps.bin` and `values.bin`.

The `part` directory also contains `index.bin` and `metaindex.bin` files - these contain the index
for fast lookups of blocks that belong to the given `TSID` and cover the given time range.

`Parts` are periodically merged into bigger parts in the background. The background merge provides the following benefits:

* keeping the number of data files under control, so they don't exceed limits on open files
* improved data compression, since bigger parts are usually compressed better than smaller parts
* improved query speed, since queries over a smaller number of parts are executed faster
* various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
  and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge

Newly added `parts` either successfully appear in the storage or fail to appear.
Newly added `parts` are created in a temporary directory under the `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` folder.
When the newly added `part` is fully written and [fsynced](https://man7.org/linux/man-pages/man2/fsync.2.html)
to the temporary directory, it is atomically moved to the storage directory.
Thanks to this algorithm, the storage never contains partially created parts, even if a hardware power-off
occurs in the middle of writing the `part` to disk - such incompletely written `parts`
are automatically deleted on the next VictoriaMetrics start.
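The crash-safety argument above boils down to the classic write-to-temp, fsync, rename pattern. Here is a minimal Go sketch of that pattern, assuming for simplicity that a part is a single file (all names and paths here are illustrative, not VictoriaMetrics source code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writePartAtomically sketches the create-in-tmp, fsync, rename sequence
// described above. It models a part as one file for brevity; real parts are
// directories holding timestamps.bin, values.bin, index.bin and metaindex.bin.
func writePartAtomically(partitionDir, partName string, data []byte) error {
	tmpPath := filepath.Join(partitionDir, "tmp", partName)
	if err := os.MkdirAll(filepath.Dir(tmpPath), 0o755); err != nil {
		return err
	}
	f, err := os.Create(tmpPath)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close()
		return err
	}
	// fsync makes sure the bytes reach the disk before the rename publishes them.
	if err := f.Sync(); err != nil {
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	// rename(2) is atomic: a crash leaves either no part or a complete part,
	// never a partially written one. (A production-grade version would also
	// fsync the parent directories.)
	return os.Rename(tmpPath, filepath.Join(partitionDir, partName))
}

func main() {
	if err := writePartAtomically("data/small/2022_11", "1_1_0_0", []byte("demo")); err != nil {
		fmt.Println("error:", err)
	}
}
```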
The same applies to the merge process — `parts` are either fully merged into a new `part` or fail to merge,
leaving the source `parts` untouched.

VictoriaMetrics doesn't merge parts if their summary size exceeds free disk space.
This prevents potential out-of-disk-space errors during merge.
@@ -1383,24 +1417,10 @@ This increases overhead during data querying, since VictoriaMetrics needs to read from a

 bigger number of parts per each request. That's why it is recommended to have at least 20%
 of free disk space under the directory pointed to by the `-storageDataPath` command-line flag.

-Information about merging process is available in [single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
-and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
+Information about the merging process is available in [the dashboard for single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
+and [the dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176).
 See more details in [monitoring docs](#monitoring).
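The free-space headroom can also be watched directly via the `vm_free_disk_space_bytes` gauge exposed on the `/metrics` page. A quick spot-check against a single-node instance (assuming the default port 8428):

```console
# Reports the amount of free disk space at -storageDataPath, in bytes.
curl -s http://localhost:8428/metrics | grep vm_free_disk_space_bytes
```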
-The `merge` process improves compression rate and keeps the number of `parts` on disk relatively low.
-Benefits of doing the merge process are the following:
-
-* it improves query performance, since a lower number of `parts` are inspected with each query
-* it reduces the number of data files, since each `part` contains a fixed number of files
-* various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
-  and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge.
-
-Newly added `parts` either appear in the storage or fail to appear.
-Storage never contains partially created parts. The same applies to the merge process — `parts` are either fully
-merged into a new `part` or fail to merge. MergeTree doesn't contain partially merged `parts`.
-`Part` contents in MergeTree never change. Parts are immutable. They may only be deleted after the merge
-to a bigger `part` or when the `part` contents go outside the configured `-retentionPeriod`.

 See [this article](https://valyala.medium.com/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282) for more details.

 See also [how to work with snapshots](#how-to-work-with-snapshots).
@@ -1723,10 +1743,11 @@ and [cardinality explorer docs](#cardinality-explorer):

 * VictoriaMetrics buffers incoming data in memory for up to a few seconds before flushing it to persistent storage.
   This may lead to the following "issues":
-  * Data becomes available for querying in a few seconds after inserting. It is possible to flush in-memory buffers to persistent storage
+  * Data becomes available for querying in a few seconds after inserting. It is possible to flush in-memory buffers to searchable parts
     by requesting the `/internal/force_flush` http handler, as shown below. This handler is mostly needed for testing and debugging purposes.
   * The last few seconds of inserted data may be lost on unclean shutdown (i.e. OOM, `kill -9` or hardware reset).
-    See [this article for technical details](https://valyala.medium.com/wal-usage-looks-broken-in-modern-time-series-databases-b62a627ab704).
+    The `-inmemoryDataFlushInterval` command-line flag allows controlling the frequency of in-memory data flushes to persistent storage.
+    See [storage docs](#storage) and [this article](https://valyala.medium.com/wal-usage-looks-broken-in-modern-time-series-databases-b62a627ab704) for more details.

* If VictoriaMetrics works slowly and eats more than a CPU core per 100K ingested data points per second,
  then it is likely you have too many [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series) for the current amount of RAM.
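For example, during tests the flush can be triggered by hand (assuming the default single-node port 8428):

```console
# Make freshly ingested samples searchable immediately (testing/debugging only).
curl http://localhost:8428/internal/force_flush
```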
@@ -2133,6 +2154,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-line flags:

     Uses '{measurement}' instead of '{measurement}{separator}{field_name}' for the metric name if the InfluxDB line contains only a single field
   -influxTrimTimestamp duration
     Trim timestamps for InfluxDB line protocol data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
+  -inmemoryDataFlushInterval duration
+    The interval for guaranteed saving of in-memory data to disk. The saved data survives unclean shutdown such as OOM crash, hardware reset, SIGKILL, etc. Bigger intervals may help increase the lifetime of flash storage with limited write cycles (e.g. Raspberry PI). Smaller intervals increase disk IO load. Minimum supported value is 1s (default 5s)
   -insert.maxQueueDuration duration
     The maximum duration for waiting in the queue for insert requests due to -maxConcurrentInserts (default 1m0s)
   -logNewSeries
@@ -29,6 +29,10 @@ var (

 		"equal to -dedup.minScrapeInterval > 0. See https://docs.victoriametrics.com/#deduplication and https://docs.victoriametrics.com/#downsampling")
 	dryRun = flag.Bool("dryRun", false, "Whether to check only -promscrape.config and then exit. "+
 		"Unknown config entries aren't allowed in -promscrape.config by default. This can be changed with -promscrape.config.strictParse=false command-line flag")
+	inmemoryDataFlushInterval = flag.Duration("inmemoryDataFlushInterval", 5*time.Second, "The interval for guaranteed saving of in-memory data to disk. "+
+		"The saved data survives unclean shutdown such as OOM crash, hardware reset, SIGKILL, etc. "+
+		"Bigger intervals may help increasing lifetime of flash storage with limited write cycles (e.g. Raspberry PI). "+
+		"Smaller intervals increase disk IO load. Minimum supported value is 1s")
 	downsamplingPeriods = flagutil.NewArrayString("downsampling.period", "Comma-separated downsampling periods in the format 'offset:period'. For example, '30d:10m' instructs "+
 		"to leave a single sample per 10 minutes for samples older than 30 days. See https://docs.victoriametrics.com/#downsampling for details")
 )
@@ -65,6 +69,7 @@ func main() {

 	if err != nil {
 		logger.Fatalf("cannot parse -downsampling.period: %s", err)
 	}
+	storage.SetDataFlushInterval(*inmemoryDataFlushInterval)
 	vmstorage.Init(promql.ResetRollupResultCacheIfNeeded)
 	vmselect.Init()
 	vminsert.Init()
@@ -56,6 +56,12 @@ func insertRows(at *auth.Token, series []parser.Series, extraLabels []prompbmarshal.Label) error {

 			Name:  "host",
 			Value: ss.Host,
 		})
+		if ss.Device != "" {
+			labels = append(labels, prompbmarshal.Label{
+				Name:  "device",
+				Value: ss.Device,
+			})
+		}
 		for _, tag := range ss.Tags {
 			name, value := parser.SplitTag(tag)
 			if name == "host" {
@@ -251,7 +251,7 @@ func Stop() {

 // Push sends wr to remote storage systems set via `-remoteWrite.url`.
 //
 // If at is nil, then the data is pushed to the configured `-remoteWrite.url`.
-// If at isn't nil, the the data is pushed to the configured `-remoteWrite.multitenantURL`.
+// If at isn't nil, the data is pushed to the configured `-remoteWrite.multitenantURL`.
 //
 // Note that wr may be modified by Push due to relabeling and rounding.
 func Push(at *auth.Token, wr *prompbmarshal.WriteRequest) {
@@ -54,6 +54,19 @@ func (m *Metric) SetLabel(key, value string) {

 	m.AddLabel(key, value)
 }

+// SetLabels sets the given map as Metric labels
+func (m *Metric) SetLabels(ls map[string]string) {
+	var i int
+	m.Labels = make([]Label, len(ls))
+	for k, v := range ls {
+		m.Labels[i] = Label{
+			Name:  k,
+			Value: v,
+		}
+		i++
+	}
+}
+
 // AddLabel appends the given label to the label set
 func (m *Metric) AddLabel(key, value string) {
 	m.Labels = append(m.Labels, Label{Name: key, Value: value})
@@ -32,19 +32,17 @@ type promInstant struct {

 }

 func (r promInstant) metrics() ([]Metric, error) {
-	var result []Metric
+	result := make([]Metric, len(r.Result))
 	for i, res := range r.Result {
 		f, err := strconv.ParseFloat(res.TV[1].(string), 64)
 		if err != nil {
 			return nil, fmt.Errorf("metric %v, unable to parse float64 from %s: %w", res, res.TV[1], err)
 		}
 		var m Metric
-		for k, v := range r.Result[i].Labels {
-			m.AddLabel(k, v)
-		}
+		m.SetLabels(res.Labels)
 		m.Timestamps = append(m.Timestamps, int64(res.TV[0].(float64)))
 		m.Values = append(m.Values, f)
-		result = append(result, m)
+		result[i] = m
 	}
 	return result, nil
 }
app/vmalert/datasource/vm_prom_api_test.go (new file, 20 lines)

@@ -0,0 +1,20 @@
package datasource

import (
	"encoding/json"
	"testing"
)

func BenchmarkMetrics(b *testing.B) {
	payload := []byte(`[{"metric":{"__name__":"vm_rows"},"value":[1583786142,"13763"]},{"metric":{"__name__":"vm_requests", "foo":"bar", "baz": "qux"},"value":[1583786140,"2000"]}]`)

	var pi promInstant
	if err := json.Unmarshal(payload, &pi.Result); err != nil {
		b.Fatalf(err.Error())
	}
	b.Run("Instant", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_, _ = pi.metrics()
		}
	})
}
@@ -7,6 +7,7 @@ import (

 	"net/http/httptest"
 	"net/url"
 	"reflect"
+	"sort"
 	"strconv"
 	"strings"
 	"testing"

@@ -74,7 +75,7 @@ func TestVMInstantQuery(t *testing.T) {

 		case 5:
 			w.Write([]byte(`{"status":"success","data":{"resultType":"matrix"}}`))
 		case 6:
-			w.Write([]byte(`{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"vm_rows"},"value":[1583786142,"13763"]},{"metric":{"__name__":"vm_requests"},"value":[1583786140,"2000"]}]}}`))
+			w.Write([]byte(`{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"vm_rows","foo":"bar"},"value":[1583786142,"13763"]},{"metric":{"__name__":"vm_requests","foo":"baz"},"value":[1583786140,"2000"]}]}}`))
 		case 7:
 			w.Write([]byte(`{"status":"success","data":{"resultType":"scalar","result":[1583786142, "1"]}}`))
 		}
@@ -115,19 +116,17 @@ func TestVMInstantQuery(t *testing.T) {

 	}
 	expected := []Metric{
 		{
-			Labels:     []Label{{Value: "vm_rows", Name: "__name__"}},
+			Labels:     []Label{{Value: "vm_rows", Name: "__name__"}, {Value: "bar", Name: "foo"}},
 			Timestamps: []int64{1583786142},
 			Values:     []float64{13763},
 		},
 		{
-			Labels:     []Label{{Value: "vm_requests", Name: "__name__"}},
+			Labels:     []Label{{Value: "vm_requests", Name: "__name__"}, {Value: "baz", Name: "foo"}},
 			Timestamps: []int64{1583786140},
 			Values:     []float64{2000},
 		},
 	}
-	if !reflect.DeepEqual(m, expected) {
-		t.Fatalf("unexpected metric %+v want %+v", m, expected)
-	}
+	metricsEqual(t, m, expected)

 	m, req, err := pq.Query(ctx, query, ts) // 7 - scalar
 	if err != nil {
@@ -158,13 +157,36 @@ func TestVMInstantQuery(t *testing.T) {

 	if len(m) != 1 {
 		t.Fatalf("expected 1 metric got %d in %+v", len(m), m)
 	}
-	exp := Metric{
-		Labels:     []Label{{Value: "constantLine(10)", Name: "name"}},
-		Timestamps: []int64{1611758403},
-		Values:     []float64{10},
+	exp := []Metric{
+		{
+			Labels:     []Label{{Value: "constantLine(10)", Name: "name"}},
+			Timestamps: []int64{1611758403},
+			Values:     []float64{10},
+		},
 	}
-	if !reflect.DeepEqual(m[0], exp) {
-		t.Fatalf("unexpected metric %+v want %+v", m[0], expected)
-	}
+	metricsEqual(t, m, exp)
 }

+func metricsEqual(t *testing.T, gotM, expectedM []Metric) {
+	for i, exp := range expectedM {
+		got := gotM[i]
+		gotTS, expTS := got.Timestamps, exp.Timestamps
+		if !reflect.DeepEqual(gotTS, expTS) {
+			t.Fatalf("unexpected timestamps %+v want %+v", gotTS, expTS)
+		}
+		gotV, expV := got.Values, exp.Values
+		if !reflect.DeepEqual(gotV, expV) {
+			t.Fatalf("unexpected values %+v want %+v", gotV, expV)
+		}
+		sort.Slice(got.Labels, func(i, j int) bool {
+			return got.Labels[i].Name < got.Labels[j].Name
+		})
+		sort.Slice(exp.Labels, func(i, j int) bool {
+			return exp.Labels[i].Name < exp.Labels[j].Name
+		})
+		if !reflect.DeepEqual(exp.Labels, got.Labels) {
+			t.Fatalf("unexpected labels %+v want %+v", got.Labels, exp.Labels)
+		}
+	}
+}
@@ -833,6 +833,80 @@ Total: 16 B ↗ Speed: 186.32 KiB p/s

2022/08/30 19:48:24 Total time: 12.680582ms
```

#### Cluster-to-cluster migration mode

Cluster-to-cluster migration mode helps migrate all tenants' data in a single `vmctl` run.

Cluster-to-cluster mode uses the `/admin/tenants` endpoint (available starting from [v1.84.0](https://docs.victoriametrics.com/CHANGELOG.html#v1840)) to discover the list of tenants in the source cluster, as illustrated below.
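The discovery step can be reproduced by hand. A hypothetical request to the source cluster's vmselect follows; the response is abridged to the `data` field, which is the part `vmctl` actually decodes:

```console
curl http://vmselect:8481/admin/tenants
{"data":["123:1","12:1","1:0"]}
```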
To use this mode, set the `--vm-intercluster` flag to `true`, the `--vm-native-src-addr` flag to `http://vmselect:8481/` and the `--vm-native-dst-addr` value to `http://vminsert:8480/`:

```console
./bin/vmctl vm-native --vm-intercluster=true --vm-native-src-addr=http://localhost:8481/ --vm-native-dst-addr=http://172.17.0.3:8480/
VictoriaMetrics Native import mode
2022/12/05 21:20:06 Discovered tenants: [123:1 12812919:1 1289198:1 1289:1283 12:1 1:0 1:1 1:1231231 1:1271727 1:12819 1:281 812891298:1]
2022/12/05 21:20:06 Initing export pipe from "http://localhost:8481/select/123:1/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/123:1/prometheus/api/v1/import/native":
Total: 61.13 MiB ↖ Speed: 2.05 MiB p/s
Total: 61.13 MiB ↗ Speed: 2.30 MiB p/s
2022/12/05 21:20:33 Initing export pipe from "http://localhost:8481/select/12812919:1/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/12812919:1/prometheus/api/v1/import/native":
Total: 43.14 MiB ↘ Speed: 1.86 MiB p/s
Total: 43.14 MiB ↙ Speed: 2.36 MiB p/s
2022/12/05 21:20:51 Initing export pipe from "http://localhost:8481/select/1289198:1/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1289198:1/prometheus/api/v1/import/native":
Total: 16.64 MiB ↗ Speed: 2.66 MiB p/s
Total: 16.64 MiB ↘ Speed: 2.19 MiB p/s
2022/12/05 21:20:59 Initing export pipe from "http://localhost:8481/select/1289:1283/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1289:1283/prometheus/api/v1/import/native":
Total: 43.33 MiB ↙ Speed: 1.94 MiB p/s
Total: 43.33 MiB ↖ Speed: 2.35 MiB p/s
2022/12/05 21:21:18 Initing export pipe from "http://localhost:8481/select/12:1/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/12:1/prometheus/api/v1/import/native":
Total: 63.78 MiB ↙ Speed: 1.96 MiB p/s
Total: 63.78 MiB ↖ Speed: 2.28 MiB p/s
2022/12/05 21:21:46 Initing export pipe from "http://localhost:8481/select/1:0/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:0/prometheus/api/v1/import/native":
2022/12/05 21:21:46 Import finished!
Total: 330 B ↗ Speed: 3.53 MiB p/s
2022/12/05 21:21:46 Initing export pipe from "http://localhost:8481/select/1:1/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:1/prometheus/api/v1/import/native":
Total: 63.81 MiB ↙ Speed: 1.96 MiB p/s
Total: 63.81 MiB ↖ Speed: 2.28 MiB p/s
2022/12/05 21:22:14 Initing export pipe from "http://localhost:8481/select/1:1231231/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:1231231/prometheus/api/v1/import/native":
Total: 63.84 MiB ↙ Speed: 1.93 MiB p/s
Total: 63.84 MiB ↖ Speed: 2.29 MiB p/s
2022/12/05 21:22:42 Initing export pipe from "http://localhost:8481/select/1:1271727/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:1271727/prometheus/api/v1/import/native":
Total: 54.37 MiB ↘ Speed: 1.90 MiB p/s
Total: 54.37 MiB ↙ Speed: 2.37 MiB p/s
2022/12/05 21:23:05 Initing export pipe from "http://localhost:8481/select/1:12819/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:12819/prometheus/api/v1/import/native":
Total: 17.01 MiB ↙ Speed: 1.75 MiB p/s
Total: 17.01 MiB ↖ Speed: 2.15 MiB p/s
2022/12/05 21:23:13 Initing export pipe from "http://localhost:8481/select/1:281/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:281/prometheus/api/v1/import/native":
Total: 63.89 MiB ↘ Speed: 1.90 MiB p/s
Total: 63.89 MiB ↙ Speed: 2.29 MiB p/s
2022/12/05 21:23:42 Initing export pipe from "http://localhost:8481/select/812891298:1/prometheus/api/v1/export/native" with filters:
	filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/812891298:1/prometheus/api/v1/import/native":
Total: 63.84 MiB ↖ Speed: 1.99 MiB p/s
Total: 63.84 MiB ↗ Speed: 2.26 MiB p/s
2022/12/05 21:24:10 Total time: 4m4.1466565s
```

## Verifying exported blocks from VictoriaMetrics
@@ -44,6 +44,8 @@ const (

 	// also used in vm-native
 	vmExtraLabel = "vm-extra-label"
 	vmRateLimit  = "vm-rate-limit"
+
+	vmInterCluster = "vm-intercluster"
 )

 var (

@@ -398,6 +400,12 @@ var (

 			Usage: "Optional data transfer rate limit in bytes per second.\n" +
 				"By default the rate limit is disabled. It can be useful for limiting load on source or destination databases.",
 		},
+		&cli.BoolFlag{
+			Name: vmInterCluster,
+			Usage: "Enables cluster-to-cluster migration mode with automatic tenants data migration.\n" +
+				fmt.Sprintf(" In this mode --%s flag format is: 'http://vmselect:8481/'. --%s flag format is: http://vminsert:8480/. \n", vmNativeSrcAddr, vmNativeDstAddr) +
+				" TenantID will be appended automatically after discovering tenants from src.",
+		},
 	}
 )
@@ -200,7 +200,8 @@ func main() {

 	}

 	p := vmNativeProcessor{
-		rateLimit: c.Int64(vmRateLimit),
+		rateLimit:    c.Int64(vmRateLimit),
+		interCluster: c.Bool(vmInterCluster),
 		filter: filter{
 			match:     c.String(vmNativeFilterMatch),
 			timeStart: c.String(vmNativeFilterTimeStart),
@@ -2,6 +2,7 @@ package main

 import (
 	"context"
+	"encoding/json"
 	"fmt"
 	"io"
 	"log"

@@ -19,8 +20,9 @@ type vmNativeProcessor struct {

 	filter    filter
 	rateLimit int64

-	dst *vmNativeClient
-	src *vmNativeClient
+	dst          *vmNativeClient
+	src          *vmNativeClient
+	interCluster bool
 }

 type vmNativeClient struct {

@@ -49,15 +51,16 @@ func (f filter) String() string {

 }

 const (
-	nativeExportAddr = "api/v1/export/native"
-	nativeImportAddr = "api/v1/import/native"
+	nativeExportAddr  = "api/v1/export/native"
+	nativeImportAddr  = "api/v1/import/native"
+	nativeTenantsAddr = "admin/tenants"

 	nativeBarTpl = `Total: {{counters . }} {{ cycle . "↖" "↗" "↘" "↙" }} Speed: {{speed . }} {{string . "suffix"}}`
 )

 func (p *vmNativeProcessor) run(ctx context.Context) error {
 	if p.filter.chunk == "" {
-		return p.runSingle(ctx, p.filter)
+		return p.runWithFilter(ctx, p.filter)
 	}

 	startOfRange, err := time.Parse(time.RFC3339, p.filter.timeStart)

@@ -89,7 +92,7 @@ func (p *vmNativeProcessor) run(ctx context.Context) error {

 			timeStart: formattedStartTime,
 			timeEnd:   formattedEndTime,
 		}
-		err := p.runSingle(ctx, f)
+		err := p.runWithFilter(ctx, f)

 		if err != nil {
 			log.Printf("processing failed for range %d/%d: %s - %s \n", rangeIdx+1, len(ranges), formattedStartTime, formattedEndTime)

@@ -99,25 +102,52 @@ func (p *vmNativeProcessor) run(ctx context.Context) error {

 	return nil
 }

-func (p *vmNativeProcessor) runSingle(ctx context.Context, f filter) error {
-	pr, pw := io.Pipe()
-
-	log.Printf("Initing export pipe from %q with filters: %s\n", p.src.addr, f)
-	exportReader, err := p.exportPipe(ctx, f)
-	if err != nil {
-		return fmt.Errorf("failed to init export pipe: %s", err)
-	}
-
-	nativeImportAddr, err := vm.AddExtraLabelsToImportPath(nativeImportAddr, p.dst.extraLabels)
-	if err != nil {
-		return err
-	}
-
+func (p *vmNativeProcessor) runWithFilter(ctx context.Context, f filter) error {
+	nativeImportAddr, err := vm.AddExtraLabelsToImportPath(nativeImportAddr, p.dst.extraLabels)
+	if err != nil {
+		return fmt.Errorf("failed to add labels to import path: %s", err)
+	}
+
+	if !p.interCluster {
+		srcURL := fmt.Sprintf("%s/%s", p.src.addr, nativeExportAddr)
+		dstURL := fmt.Sprintf("%s/%s", p.dst.addr, nativeImportAddr)
+
+		return p.runSingle(ctx, f, srcURL, dstURL)
+	}
+
+	tenants, err := p.getSourceTenants(ctx, f)
+	if err != nil {
+		return fmt.Errorf("failed to get source tenants: %s", err)
+	}
+
+	log.Printf("Discovered tenants: %v", tenants)
+	for _, tenant := range tenants {
+		// src and dst expected formats: http://vminsert:8480/ and http://vmselect:8481/
+		srcURL := fmt.Sprintf("%s/select/%s/prometheus/%s", p.src.addr, tenant, nativeExportAddr)
+		dstURL := fmt.Sprintf("%s/insert/%s/prometheus/%s", p.dst.addr, tenant, nativeImportAddr)
+
+		if err := p.runSingle(ctx, f, srcURL, dstURL); err != nil {
+			return fmt.Errorf("failed to migrate data for tenant %q: %s", tenant, err)
+		}
+	}
+
+	return nil
+}
+
+func (p *vmNativeProcessor) runSingle(ctx context.Context, f filter, srcURL, dstURL string) error {
+	log.Printf("Initing export pipe from %q with filters: %s\n", srcURL, f)
+
+	exportReader, err := p.exportPipe(ctx, srcURL, f)
+	if err != nil {
+		return fmt.Errorf("failed to init export pipe: %s", err)
+	}
+
+	pr, pw := io.Pipe()
 	sync := make(chan struct{})
 	go func() {
 		defer func() { close(sync) }()
-		u := fmt.Sprintf("%s/%s", p.dst.addr, nativeImportAddr)
-		req, err := http.NewRequestWithContext(ctx, "POST", u, pr)
+		req, err := http.NewRequestWithContext(ctx, "POST", dstURL, pr)
 		if err != nil {
 			log.Fatalf("cannot create import request to %q: %s", p.dst.addr, err)
 		}

@@ -130,7 +160,7 @@ func (p *vmNativeProcessor) runSingle(ctx context.Context, f filter, srcURL, dstURL string) error {

 		}
 	}()

-	fmt.Printf("Initing import process to %q:\n", p.dst.addr)
+	fmt.Printf("Initing import process to %q:\n", dstURL)
 	pool := pb.NewPool()
 	bar := pb.ProgressBarTemplate(nativeBarTpl).New(0)
 	pool.Add(bar)

@@ -166,9 +196,43 @@ func (p *vmNativeProcessor) runSingle(ctx context.Context, f filter, srcURL, dstURL string) error {

 	return nil
 }

-func (p *vmNativeProcessor) exportPipe(ctx context.Context, f filter) (io.ReadCloser, error) {
-	u := fmt.Sprintf("%s/%s", p.src.addr, nativeExportAddr)
+func (p *vmNativeProcessor) getSourceTenants(ctx context.Context, f filter) ([]string, error) {
+	u := fmt.Sprintf("%s/%s", p.src.addr, nativeTenantsAddr)
+	req, err := http.NewRequestWithContext(ctx, "GET", u, nil)
+	if err != nil {
+		return nil, fmt.Errorf("cannot create request to %q: %s", u, err)
+	}
+
+	params := req.URL.Query()
+	if f.timeStart != "" {
+		params.Set("start", f.timeStart)
+	}
+	if f.timeEnd != "" {
+		params.Set("end", f.timeEnd)
+	}
+	req.URL.RawQuery = params.Encode()
+
+	resp, err := p.src.do(req, http.StatusOK)
+	if err != nil {
+		return nil, fmt.Errorf("tenants request failed: %s", err)
+	}
+
+	var r struct {
+		Tenants []string `json:"data"`
+	}
+	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
+		return nil, fmt.Errorf("cannot decode tenants response: %s", err)
+	}
+
+	if err := resp.Body.Close(); err != nil {
+		return nil, fmt.Errorf("cannot close tenants response body: %s", err)
+	}
+
+	return r.Tenants, nil
+}
+
+func (p *vmNativeProcessor) exportPipe(ctx context.Context, url string, f filter) (io.ReadCloser, error) {
+	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
 	if err != nil {
 		return nil, fmt.Errorf("cannot create request to %q: %s", p.src.addr, err)
 	}
@@ -55,6 +55,9 @@ func insertRows(series []parser.Series, extraLabels []prompbmarshal.Label) error {

 		ctx.Labels = ctx.Labels[:0]
 		ctx.AddLabel("", ss.Metric)
 		ctx.AddLabel("host", ss.Host)
+		if ss.Device != "" {
+			ctx.AddLabel("device", ss.Device)
+		}
 		for _, tag := range ss.Tags {
 			name, value := parser.SplitTag(tag)
 			if name == "host" {
@@ -6385,6 +6385,17 @@ func TestExecSuccess(t *testing.T) {

 		resultExpected := []netstorage.Result{r1, r2}
 		f(q, resultExpected)
 	})
+	t.Run(`range_trim_spikes()`, func(t *testing.T) {
+		t.Parallel()
+		q := `range_trim_spikes(0.2, time())`
+		r := netstorage.Result{
+			MetricName: metricNameExpected,
+			Values:     []float64{nan, 1200, 1400, 1600, 1800, nan},
+			Timestamps: timestampsExpected,
+		}
+		resultExpected := []netstorage.Result{r}
+		f(q, resultExpected)
+	})
 	t.Run(`range_quantile(0.5)`, func(t *testing.T) {
 		t.Parallel()
 		q := `range_quantile(0.5, time())`

@@ -8189,6 +8200,7 @@ func TestExecError(t *testing.T) {

 	f(`step(1)`)
 	f(`running_sum(1, 2)`)
 	f(`range_sum(1, 2)`)
+	f(`range_trim_spikes()`)
 	f(`range_first(1, 2)`)
 	f(`range_last(1, 2)`)
 	f(`range_linear_regression(1, 2)`)
@@ -96,6 +96,7 @@ var transformFuncs = map[string]transformFunc{

 	"range_stddev":      transformRangeStddev,
 	"range_stdvar":      transformRangeStdvar,
 	"range_sum":         newTransformFuncRange(runningSum),
+	"range_trim_spikes": transformRangeTrimSpikes,
 	"remove_resets":     transformRemoveResets,
 	"round":             transformRound,
 	"running_avg":       newTransformFuncRunning(runningAvg),

@@ -1274,6 +1275,54 @@ func transformRangeNormalize(tfa *transformFuncArg) ([]*timeseries, error) {

 	return rvs, nil
 }

+func transformRangeTrimSpikes(tfa *transformFuncArg) ([]*timeseries, error) {
+	args := tfa.args
+	if err := expectTransformArgsNum(args, 2); err != nil {
+		return nil, err
+	}
+	phis, err := getScalar(args[0], 0)
+	if err != nil {
+		return nil, err
+	}
+	phi := float64(0)
+	if len(phis) > 0 {
+		phi = phis[0]
+	}
+	// Trim 100% * (phi / 2) samples with the lowest / highest values per each time series
+	phi /= 2
+	phiUpper := 1 - phi
+	phiLower := phi
+	rvs := args[1]
+	a := getFloat64s()
+	values := a.A[:0]
+	for _, ts := range rvs {
+		values := values[:0]
+		originValues := ts.Values
+		for _, v := range originValues {
+			if math.IsNaN(v) {
+				continue
+			}
+			values = append(values, v)
+		}
+		sort.Float64s(values)
+		vMax := quantileSorted(phiUpper, values)
+		vMin := quantileSorted(phiLower, values)
+		for i, v := range originValues {
+			if math.IsNaN(v) {
+				continue
+			}
+			if v > vMax {
+				originValues[i] = nan
+			} else if v < vMin {
+				originValues[i] = nan
+			}
+		}
+	}
+	a.A = values
+	putFloat64s(a)
+	return rvs, nil
+}
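// Editor's illustration (not part of the diff): with phi=0.2,
// range_trim_spikes trims 10% of the lowest and 10% of the highest
// non-NaN values of each series. For the six points 1000..2000 produced
// by time() in the test above, quantileSorted yields vMin of roughly 1100
// and vMax of roughly 1900, so 1000 and 2000 become NaN, matching the
// expected {NaN, 1200, 1400, 1600, 1800, NaN} in TestExecSuccess.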
 func transformRangeLinearRegression(tfa *transformFuncArg) ([]*timeseries, error) {
 	args := tfa.args
 	if err := expectTransformArgsNum(args, 1); err != nil {
@@ -1,12 +1,12 @@

 {
   "files": {
-    "main.css": "./static/css/main.0937c83d.css",
-    "main.js": "./static/js/main.e18cda26.js",
+    "main.css": "./static/css/main.89abca0f.css",
+    "main.js": "./static/js/main.c552245f.js",
     "static/js/27.c1ccfd29.chunk.js": "./static/js/27.c1ccfd29.chunk.js",
     "index.html": "./index.html"
   },
   "entrypoints": [
-    "static/css/main.0937c83d.css",
-    "static/js/main.e18cda26.js"
+    "static/css/main.89abca0f.css",
+    "static/js/main.c552245f.js"
   ]
 }
@@ -1 +1 @@

-<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="preconnect" href="https://fonts.googleapis.com"><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin><link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono&family=Lato:wght@300;400;700&display=swap" rel="stylesheet"><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.e18cda26.js"></script><link href="./static/css/main.0937c83d.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
+<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="preconnect" href="https://fonts.googleapis.com"><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin><link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono&family=Lato:wght@300;400;700&display=swap" rel="stylesheet"><script src="./dashboards/index.js" type="module"></script><script defer="defer" src="./static/js/main.c552245f.js"></script><link href="./static/css/main.89abca0f.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>

File diff suppressed because one or more lines are too long

app/vmselect/vmui/static/css/main.89abca0f.css (new file, 1 line): diff suppressed because one or more lines are too long
app/vmselect/vmui/static/js/main.c552245f.js (new file, 2 lines): diff suppressed because one or more lines are too long
|
@ -101,7 +101,7 @@ func InitWithoutMetrics(resetCacheIfNeeded func(mrs []storage.MetricRow)) {
|
|||
storage.SetLogNewSeries(*logNewSeries)
|
||||
storage.SetFinalMergeDelay(*finalMergeDelay)
|
||||
storage.SetBigMergeWorkersCount(*bigMergeConcurrency)
|
||||
storage.SetSmallMergeWorkersCount(*smallMergeConcurrency)
|
||||
storage.SetMergeWorkersCount(*smallMergeConcurrency)
|
||||
storage.SetRetentionTimezoneOffset(*retentionTimezoneOffset)
|
||||
storage.SetFreeDiskSpaceLimit(minFreeDiskSpaceBytes.N)
|
||||
storage.SetTSIDCacheSize(cacheSizeStorageTSID.N)
|
||||
|
@@ -457,56 +457,80 @@ func registerStorageMetrics(strg *storage.Storage) {

 		return 0
 	})

-	metrics.NewGauge(`vm_active_merges{type="storage/big"}`, func() float64 {
-		return float64(tm().ActiveBigMerges)
+	metrics.NewGauge(`vm_active_merges{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().ActiveInmemoryMerges)
 	})
 	metrics.NewGauge(`vm_active_merges{type="storage/small"}`, func() float64 {
 		return float64(tm().ActiveSmallMerges)
 	})
-	metrics.NewGauge(`vm_active_merges{type="indexdb"}`, func() float64 {
-		return float64(idbm().ActiveMerges)
+	metrics.NewGauge(`vm_active_merges{type="storage/big"}`, func() float64 {
+		return float64(tm().ActiveBigMerges)
 	})
+	metrics.NewGauge(`vm_active_merges{type="indexdb/inmemory"}`, func() float64 {
+		return float64(idbm().ActiveInmemoryMerges)
+	})
+	metrics.NewGauge(`vm_active_merges{type="indexdb/file"}`, func() float64 {
+		return float64(idbm().ActiveFileMerges)
+	})

-	metrics.NewGauge(`vm_merges_total{type="storage/big"}`, func() float64 {
-		return float64(tm().BigMergesCount)
+	metrics.NewGauge(`vm_merges_total{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().InmemoryMergesCount)
 	})
 	metrics.NewGauge(`vm_merges_total{type="storage/small"}`, func() float64 {
 		return float64(tm().SmallMergesCount)
 	})
-	metrics.NewGauge(`vm_merges_total{type="indexdb"}`, func() float64 {
-		return float64(idbm().MergesCount)
+	metrics.NewGauge(`vm_merges_total{type="storage/big"}`, func() float64 {
+		return float64(tm().BigMergesCount)
 	})
+	metrics.NewGauge(`vm_merges_total{type="indexdb/inmemory"}`, func() float64 {
+		return float64(idbm().InmemoryMergesCount)
+	})
+	metrics.NewGauge(`vm_merges_total{type="indexdb/file"}`, func() float64 {
+		return float64(idbm().FileMergesCount)
+	})

-	metrics.NewGauge(`vm_rows_merged_total{type="storage/big"}`, func() float64 {
-		return float64(tm().BigRowsMerged)
+	metrics.NewGauge(`vm_rows_merged_total{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().InmemoryRowsMerged)
 	})
 	metrics.NewGauge(`vm_rows_merged_total{type="storage/small"}`, func() float64 {
 		return float64(tm().SmallRowsMerged)
 	})
-	metrics.NewGauge(`vm_rows_merged_total{type="indexdb"}`, func() float64 {
-		return float64(idbm().ItemsMerged)
+	metrics.NewGauge(`vm_rows_merged_total{type="storage/big"}`, func() float64 {
+		return float64(tm().BigRowsMerged)
 	})
+	metrics.NewGauge(`vm_rows_merged_total{type="indexdb/inmemory"}`, func() float64 {
+		return float64(idbm().InmemoryItemsMerged)
+	})
+	metrics.NewGauge(`vm_rows_merged_total{type="indexdb/file"}`, func() float64 {
+		return float64(idbm().FileItemsMerged)
+	})

-	metrics.NewGauge(`vm_rows_deleted_total{type="storage/big"}`, func() float64 {
-		return float64(tm().BigRowsDeleted)
+	metrics.NewGauge(`vm_rows_deleted_total{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().InmemoryRowsDeleted)
 	})
 	metrics.NewGauge(`vm_rows_deleted_total{type="storage/small"}`, func() float64 {
 		return float64(tm().SmallRowsDeleted)
 	})
+	metrics.NewGauge(`vm_rows_deleted_total{type="storage/big"}`, func() float64 {
+		return float64(tm().BigRowsDeleted)
+	})

-	metrics.NewGauge(`vm_references{type="storage/big", name="parts"}`, func() float64 {
-		return float64(tm().BigPartsRefCount)
-	})
-	metrics.NewGauge(`vm_references{type="storage/small", name="parts"}`, func() float64 {
-		return float64(tm().SmallPartsRefCount)
-	})
-	metrics.NewGauge(`vm_references{type="storage", name="partitions"}`, func() float64 {
-		return float64(tm().PartitionsRefCount)
-	})
-	metrics.NewGauge(`vm_references{type="indexdb", name="objects"}`, func() float64 {
-		return float64(idbm().IndexDBRefCount)
-	})
-	metrics.NewGauge(`vm_references{type="indexdb", name="parts"}`, func() float64 {
-		return float64(idbm().PartsRefCount)
-	})
+	metrics.NewGauge(`vm_part_references{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().InmemoryPartsRefCount)
+	})
+	metrics.NewGauge(`vm_part_references{type="storage/small"}`, func() float64 {
+		return float64(tm().SmallPartsRefCount)
+	})
+	metrics.NewGauge(`vm_part_references{type="storage/big"}`, func() float64 {
+		return float64(tm().BigPartsRefCount)
+	})
+	metrics.NewGauge(`vm_partition_references{type="storage"}`, func() float64 {
+		return float64(tm().PartitionsRefCount)
+	})
+	metrics.NewGauge(`vm_object_references{type="indexdb"}`, func() float64 {
+		return float64(idbm().IndexDBRefCount)
+	})
+	metrics.NewGauge(`vm_part_references{type="indexdb"}`, func() float64 {
+		return float64(idbm().PartsRefCount)
+	})
@@ -535,11 +559,11 @@ func registerStorageMetrics(strg *storage.Storage) {

 		return float64(idbm().CompositeFilterMissingConversions)
 	})

-	metrics.NewGauge(`vm_assisted_merges_total{type="storage/small"}`, func() float64 {
-		return float64(tm().SmallAssistedMerges)
+	metrics.NewGauge(`vm_assisted_merges_total{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().InmemoryAssistedMerges)
 	})
-	metrics.NewGauge(`vm_assisted_merges_total{type="indexdb"}`, func() float64 {
-		return float64(idbm().AssistedMerges)
+	metrics.NewGauge(`vm_assisted_merges_total{type="indexdb/inmemory"}`, func() float64 {
+		return float64(idbm().AssistedInmemoryMerges)
 	})

 	metrics.NewGauge(`vm_indexdb_items_added_total`, func() float64 {
@@ -550,11 +574,8 @@ func registerStorageMetrics(strg *storage.Storage) {

 	})

 	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/686
-	metrics.NewGauge(`vm_merge_need_free_disk_space{type="storage/small"}`, func() float64 {
-		return float64(tm().SmallMergeNeedFreeDiskSpace)
-	})
-	metrics.NewGauge(`vm_merge_need_free_disk_space{type="storage/big"}`, func() float64 {
-		return float64(tm().BigMergeNeedFreeDiskSpace)
+	metrics.NewGauge(`vm_merge_need_free_disk_space`, func() float64 {
+		return float64(tm().MergeNeedFreeDiskSpace)
 	})

 	metrics.NewGauge(`vm_pending_rows{type="storage"}`, func() float64 {
@@ -564,34 +585,52 @@ func registerStorageMetrics(strg *storage.Storage) {

 		return float64(idbm().PendingItems)
 	})

-	metrics.NewGauge(`vm_parts{type="storage/big"}`, func() float64 {
-		return float64(tm().BigPartsCount)
+	metrics.NewGauge(`vm_parts{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().InmemoryPartsCount)
 	})
 	metrics.NewGauge(`vm_parts{type="storage/small"}`, func() float64 {
 		return float64(tm().SmallPartsCount)
 	})
-	metrics.NewGauge(`vm_parts{type="indexdb"}`, func() float64 {
-		return float64(idbm().PartsCount)
+	metrics.NewGauge(`vm_parts{type="storage/big"}`, func() float64 {
+		return float64(tm().BigPartsCount)
 	})
+	metrics.NewGauge(`vm_parts{type="indexdb/inmemory"}`, func() float64 {
+		return float64(idbm().InmemoryPartsCount)
+	})
+	metrics.NewGauge(`vm_parts{type="indexdb/file"}`, func() float64 {
+		return float64(idbm().FilePartsCount)
+	})

-	metrics.NewGauge(`vm_blocks{type="storage/big"}`, func() float64 {
-		return float64(tm().BigBlocksCount)
+	metrics.NewGauge(`vm_blocks{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().InmemoryBlocksCount)
 	})
 	metrics.NewGauge(`vm_blocks{type="storage/small"}`, func() float64 {
 		return float64(tm().SmallBlocksCount)
 	})
-	metrics.NewGauge(`vm_blocks{type="indexdb"}`, func() float64 {
-		return float64(idbm().BlocksCount)
+	metrics.NewGauge(`vm_blocks{type="storage/big"}`, func() float64 {
+		return float64(tm().BigBlocksCount)
 	})
+	metrics.NewGauge(`vm_blocks{type="indexdb/inmemory"}`, func() float64 {
+		return float64(idbm().InmemoryBlocksCount)
+	})
+	metrics.NewGauge(`vm_blocks{type="indexdb/file"}`, func() float64 {
+		return float64(idbm().FileBlocksCount)
+	})

-	metrics.NewGauge(`vm_data_size_bytes{type="storage/big"}`, func() float64 {
-		return float64(tm().BigSizeBytes)
+	metrics.NewGauge(`vm_data_size_bytes{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().InmemorySizeBytes)
 	})
 	metrics.NewGauge(`vm_data_size_bytes{type="storage/small"}`, func() float64 {
 		return float64(tm().SmallSizeBytes)
 	})
-	metrics.NewGauge(`vm_data_size_bytes{type="indexdb"}`, func() float64 {
-		return float64(idbm().SizeBytes)
+	metrics.NewGauge(`vm_data_size_bytes{type="storage/big"}`, func() float64 {
+		return float64(tm().BigSizeBytes)
 	})
+	metrics.NewGauge(`vm_data_size_bytes{type="indexdb/inmemory"}`, func() float64 {
+		return float64(idbm().InmemorySizeBytes)
+	})
+	metrics.NewGauge(`vm_data_size_bytes{type="indexdb/file"}`, func() float64 {
+		return float64(idbm().FileSizeBytes)
+	})

 	metrics.NewGauge(`vm_rows_added_to_storage_total`, func() float64 {
@@ -669,14 +708,20 @@ func registerStorageMetrics(strg *storage.Storage) {

 		return float64(m().TimestampsBytesSaved)
 	})

-	metrics.NewGauge(`vm_rows{type="storage/big"}`, func() float64 {
-		return float64(tm().BigRowsCount)
+	metrics.NewGauge(`vm_rows{type="storage/inmemory"}`, func() float64 {
+		return float64(tm().InmemoryRowsCount)
 	})
 	metrics.NewGauge(`vm_rows{type="storage/small"}`, func() float64 {
 		return float64(tm().SmallRowsCount)
 	})
-	metrics.NewGauge(`vm_rows{type="indexdb"}`, func() float64 {
-		return float64(idbm().ItemsCount)
+	metrics.NewGauge(`vm_rows{type="storage/big"}`, func() float64 {
+		return float64(tm().BigRowsCount)
 	})
+	metrics.NewGauge(`vm_rows{type="indexdb/inmemory"}`, func() float64 {
+		return float64(idbm().InmemoryItemsCount)
+	})
+	metrics.NewGauge(`vm_rows{type="indexdb/file"}`, func() float64 {
+		return float64(idbm().FileItemsCount)
+	})

 	metrics.NewGauge(`vm_date_range_search_calls_total`, func() float64 {
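Note that the per-type breakdown now distinguishes `inmemory`, `small` and `big` parts for storage (and `inmemory`/`file` for indexdb), so dashboards summing over the old label values should be extended accordingly. A quick spot-check of the renamed series on a single-node instance (default port 8428 assumed):

```console
curl -s http://localhost:8428/metrics | grep 'vm_data_size_bytes{type="storage/'
```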
@@ -49,7 +49,7 @@ const ChartTooltip: FC<ChartTooltipProps> = ({

   const value = useMemo(() => get(u, ["data", seriesIdx, dataIdx], 0), [u, seriesIdx, dataIdx]);
   const valueFormat = useMemo(() => formatPrettyNumber(value), [value]);
   const dataTime = useMemo(() => u.data[0][dataIdx], [u, dataIdx]);
-  const date = useMemo(() => dayjs(new Date(dataTime * 1000)).format(DATE_FULL_TIMEZONE_FORMAT), [dataTime]);
+  const date = useMemo(() => dayjs(dataTime * 1000).tz().format(DATE_FULL_TIMEZONE_FORMAT), [dataTime]);

   const color = useMemo(() => getColorLine(series[seriesIdx]?.label || ""), [series, seriesIdx]);
@@ -76,5 +76,6 @@ $chart-tooltip-y: -1 * ($padding-small + $chart-tooltip-half-icon);

   &-info {
     display: grid;
     grid-gap: 4px;
+    word-break: break-all;
   }
 }
@@ -11,7 +11,7 @@ import { defaultOptions } from "../../../utils/uplot/helpers";

 import { dragChart } from "../../../utils/uplot/events";
 import { getAxes, getMinMaxBuffer } from "../../../utils/uplot/axes";
 import { MetricResult } from "../../../api/types";
-import { limitsDurations } from "../../../utils/time";
+import { dateFromSeconds, formatDateForNativeInput, limitsDurations } from "../../../utils/time";
 import throttle from "lodash.throttle";
 import useResize from "../../../hooks/useResize";
 import { TimeParams } from "../../../types";

@@ -20,6 +20,7 @@ import "uplot/dist/uPlot.min.css";

 import "./style.scss";
 import classNames from "classnames";
 import ChartTooltip, { ChartTooltipProps } from "../ChartTooltip/ChartTooltip";
+import dayjs from "dayjs";

 export interface LineChartProps {
   metrics: MetricResult[];

@@ -57,7 +58,10 @@ const LineChart: FC<LineChartProps> = ({

   const tooltipId = useMemo(() => `${tooltipIdx.seriesIdx}_${tooltipIdx.dataIdx}`, [tooltipIdx]);

   const setScale = ({ min, max }: { min: number, max: number }): void => {
-    setPeriod({ from: new Date(min * 1000), to: new Date(max * 1000) });
+    setPeriod({
+      from: dayjs(min * 1000).toDate(),
+      to: dayjs(max * 1000).toDate()
+    });
   };
   const throttledSetScale = useCallback(throttle(setScale, 500), []);
   const setPlotScale = ({ u, min, max }: { u: uPlot, min: number, max: number }) => {

@@ -163,6 +167,7 @@ const LineChart: FC<LineChartProps> = ({

   const options: uPlotOptions = {
     ...defaultOptions,
+    tzDate: ts => dayjs(formatDateForNativeInput(dateFromSeconds(ts))).local().toDate(),
     series,
     axes: getAxes( [{}, { scale: "1" }], unit),
     scales: { ...getScales() },
@@ -15,7 +15,7 @@ const CardinalityDatePicker: FC = () => {

   const { date } = useCardinalityState();
   const cardinalityDispatch = useCardinalityDispatch();

-  const dateFormatted = useMemo(() => dayjs(date).format(DATE_FORMAT), [date]);
+  const dateFormatted = useMemo(() => dayjs.tz(date).format(DATE_FORMAT), [date]);

   const handleChangeDate = (val: string) => {
     cardinalityDispatch({ type: "SET_DATE", payload: val });
@@ -11,6 +11,8 @@ import { SeriesLimits } from "../../../types";

 import { useCustomPanelDispatch, useCustomPanelState } from "../../../state/customPanel/CustomPanelStateContext";
 import { getAppModeEnable } from "../../../utils/app-mode";
 import classNames from "classnames";
+import Timezones from "./Timezones/Timezones";
+import { useTimeDispatch, useTimeState } from "../../../state/time/TimeStateContext";

 const title = "Settings";

@@ -18,13 +20,16 @@ const GlobalSettings: FC = () => {

   const appModeEnable = getAppModeEnable();
   const { serverUrl: stateServerUrl } = useAppState();
+  const { timezone: stateTimezone } = useTimeState();
   const { seriesLimits } = useCustomPanelState();

   const dispatch = useAppDispatch();
+  const timeDispatch = useTimeDispatch();
   const customPanelDispatch = useCustomPanelDispatch();

   const [serverUrl, setServerUrl] = useState(stateServerUrl);
   const [limits, setLimits] = useState<SeriesLimits>(seriesLimits);
+  const [timezone, setTimezone] = useState(stateTimezone);

   const [open, setOpen] = useState(false);
   const handleOpen = () => setOpen(true);

@@ -32,6 +37,7 @@ const GlobalSettings: FC = () => {

   const handlerApply = () => {
     dispatch({ type: "SET_SERVER", payload: serverUrl });
+    timeDispatch({ type: "SET_TIMEZONE", payload: timezone });
     customPanelDispatch({ type: "SET_SERIES_LIMITS", payload: limits });
     handleClose();
   };

@@ -70,6 +76,12 @@ const GlobalSettings: FC = () => {

         onEnter={handlerApply}
       />
     </div>
+    <div className="vm-server-configurator__input">
+      <Timezones
+        timezoneState={timezone}
+        onChange={setTimezone}
+      />
+    </div>
     <div className="vm-server-configurator__footer">
       <Button
         variant="outlined"
@@ -46,7 +46,7 @@ const LimitsConfigurator: FC<ServerConfiguratorProps> = ({ limits, onChange , on

   return (
     <div className="vm-limits-configurator">
-      <div className="vm-limits-configurator-title">
+      <div className="vm-server-configurator__title">
         Series limits by tabs
         <Tooltip title="To disable limits set to 0">
           <Button
@@ -3,13 +3,6 @@

 .vm-limits-configurator {

-  &-title {
-    display: flex;
-    align-items: center;
-    justify-content: flex-start;
-    font-size: $font-size;
-    font-weight: bold;
-    margin-bottom: $padding-global;
-
   &__reset {
     display: flex;
     align-items: center;
|
@ -22,14 +22,18 @@ const ServerConfigurator: FC<ServerConfiguratorProps> = ({ serverUrl, onChange ,
|
|||
};
|
||||
|
||||
return (
|
||||
<TextField
|
||||
autofocus
|
||||
label="Server URL"
|
||||
value={serverUrl}
|
||||
error={error}
|
||||
onChange={onChangeServer}
|
||||
onEnter={onEnter}
|
||||
/>
|
||||
<div>
|
||||
<div className="vm-server-configurator__title">
|
||||
Server URL
|
||||
</div>
|
||||
<TextField
|
||||
autofocus
|
||||
value={serverUrl}
|
||||
error={error}
|
||||
onChange={onChangeServer}
|
||||
onEnter={onEnter}
|
||||
/>
|
||||
</div>
|
||||
);
|
||||
};
|
||||
|
||||
|
|
|
@ -0,0 +1,143 @@
|
|||
import React, { FC, useMemo, useRef, useState } from "preact/compat";
|
||||
import { getTimezoneList, getUTCByTimezone } from "../../../../utils/time";
|
||||
import { ArrowDropDownIcon } from "../../../Main/Icons";
|
||||
import classNames from "classnames";
|
||||
import Popper from "../../../Main/Popper/Popper";
|
||||
import Accordion from "../../../Main/Accordion/Accordion";
|
||||
import dayjs from "dayjs";
|
||||
import TextField from "../../../Main/TextField/TextField";
|
||||
import { Timezone } from "../../../../types";
|
||||
import "./style.scss";
|
||||
|
||||
interface TimezonesProps {
|
||||
timezoneState: string
|
||||
onChange: (val: string) => void
|
||||
}
|
||||
|
||||
const Timezones: FC<TimezonesProps> = ({ timezoneState, onChange }) => {
|
||||
|
||||
const timezones = getTimezoneList();
|
||||
|
||||
const [openList, setOpenList] = useState(false);
|
||||
const [search, setSearch] = useState("");
|
||||
const targetRef = useRef<HTMLDivElement>(null);
|
||||
|
||||
const searchTimezones = useMemo(() => {
|
||||
if (!search) return timezones;
|
||||
try {
|
||||
return getTimezoneList(search);
|
||||
} catch (e) {
|
||||
return {};
|
||||
}
|
||||
}, [search, timezones]);
|
||||
|
||||
const timezonesGroups = useMemo(() => Object.keys(searchTimezones), [searchTimezones]);
|
||||
|
||||
const localTimezone = useMemo(() => ({
|
||||
region: dayjs.tz.guess(),
|
||||
utc: getUTCByTimezone(dayjs.tz.guess())
|
||||
}), []);
|
||||
|
||||
const activeTimezone = useMemo(() => ({
|
||||
region: timezoneState,
|
||||
utc: getUTCByTimezone(timezoneState)
|
||||
}), [timezoneState]);
|
||||
|
||||
const toggleOpenList = () => {
|
||||
setOpenList(prev => !prev);
|
||||
};
|
||||
|
||||
const handleCloseList = () => {
|
||||
setOpenList(false);
|
||||
};
|
||||
|
||||
const handleChangeSearch = (val: string) => {
|
||||
setSearch(val);
|
||||
};
|
||||
|
||||
const handleSetTimezone = (val: Timezone) => {
|
||||
onChange(val.region);
|
||||
setSearch("");
|
||||
handleCloseList();
|
||||
};
|
||||
|
||||
const createHandlerSetTimezone = (val: Timezone) => () => {
|
||||
handleSetTimezone(val);
|
||||
};
|
||||
|
||||
return (
|
||||
<div className="vm-timezones">
|
||||
<div className="vm-server-configurator__title">
|
||||
Time zone
|
||||
</div>
|
||||
<div
|
||||
className="vm-timezones-item vm-timezones-item_selected"
|
||||
onClick={toggleOpenList}
|
||||
ref={targetRef}
|
||||
>
|
||||
<div className="vm-timezones-item__title">{activeTimezone.region}</div>
|
||||
<div className="vm-timezones-item__utc">{activeTimezone.utc}</div>
|
||||
<div
|
||||
className={classNames({
|
||||
"vm-timezones-item__icon": true,
|
||||
"vm-timezones-item__icon_open": openList
|
||||
})}
|
||||
>
|
||||
<ArrowDropDownIcon/>
|
||||
</div>
|
||||
</div>
|
||||
<Popper
|
||||
open={openList}
|
||||
buttonRef={targetRef}
|
||||
placement="bottom-left"
|
||||
onClose={handleCloseList}
|
||||
>
|
||||
<div className="vm-timezones-list">
|
||||
<div className="vm-timezones-list-header">
|
||||
<div className="vm-timezones-list-header__search">
|
||||
<TextField
|
||||
autofocus
|
||||
label="Search"
|
||||
value={search}
|
||||
onChange={handleChangeSearch}
|
||||
/>
|
||||
</div>
|
||||
<div
|
||||
className="vm-timezones-item vm-timezones-list-group-options__item"
|
||||
onClick={createHandlerSetTimezone(localTimezone)}
|
||||
>
|
||||
<div className="vm-timezones-item__title">Browser Time ({localTimezone.region})</div>
|
||||
<div className="vm-timezones-item__utc">{localTimezone.utc}</div>
|
||||
</div>
|
||||
</div>
|
||||
{timezonesGroups.map(t => (
|
||||
<div
|
||||
className="vm-timezones-list-group"
|
||||
key={t}
|
||||
>
|
||||
<Accordion
|
||||
defaultExpanded={true}
|
||||
title={<div className="vm-timezones-list-group__title">{t}</div>}
|
||||
>
|
||||
<div className="vm-timezones-list-group-options">
|
||||
{searchTimezones[t] && searchTimezones[t].map(item => (
|
||||
<div
|
||||
className="vm-timezones-item vm-timezones-list-group-options__item"
|
||||
onClick={createHandlerSetTimezone(item)}
|
||||
key={item.search}
|
||||
>
|
||||
<div className="vm-timezones-item__title">{item.region}</div>
|
||||
<div className="vm-timezones-item__utc">{item.utc}</div>
|
||||
</div>
|
||||
))}
|
||||
</div>
|
||||
</Accordion>
|
||||
</div>
|
||||
))}
|
||||
</div>
|
||||
</Popper>
|
||||
</div>
|
||||
);
|
||||
};
|
||||
|
||||
export default Timezones;
|
|
@ -0,0 +1,96 @@
|
|||
@use "src/styles/variables" as *;
|
||||
|
||||
.vm-timezones {
|
||||
|
||||
&-item {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
gap: $padding-small;
|
||||
cursor: pointer;
|
||||
|
||||
&_selected {
|
||||
border: $border-divider;
|
||||
padding: $padding-small $padding-global;
|
||||
border-radius: $border-radius-small;
|
||||
}
|
||||
|
||||
&__title {
|
||||
text-transform: capitalize;
|
||||
}
|
||||
|
||||
&__utc {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
background-color: rgba($color-black, 0.06);
|
||||
padding: calc($padding-small/2);
|
||||
border-radius: $border-radius-small;
|
||||
}
|
||||
|
||||
&__icon {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
justify-content: flex-end;
|
||||
margin: 0 0 0 auto;
|
||||
transition: transform 200ms ease-in;
|
||||
|
||||
svg {
|
||||
width: 14px;
|
||||
}
|
||||
|
||||
&_open {
|
||||
transform: rotate(180deg);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
&-list {
|
||||
min-width: 600px;
|
||||
max-height: 300px;
|
||||
background-color: $color-background-block;
|
||||
border-radius: $border-radius-medium;
|
||||
overflow: auto;
|
||||
|
||||
&-header {
|
||||
position: sticky;
|
||||
top: 0;
|
||||
background-color: $color-background-block;
|
||||
z-index: 2;
|
||||
border-bottom: $border-divider;
|
||||
|
||||
&__search {
|
||||
padding: $padding-small;
|
||||
}
|
||||
}
|
||||
|
||||
&-group {
|
||||
padding: $padding-small 0;
|
||||
border-bottom: $border-divider;
|
||||
|
||||
&:last-child {
|
||||
border-bottom: none;
|
||||
}
|
||||
|
||||
&__title {
|
||||
font-weight: bold;
|
||||
color: $color-text-secondary;
|
||||
padding: $padding-small $padding-global;
|
||||
}
|
||||
|
||||
&-options {
|
||||
display: grid;
|
||||
align-items: flex-start;
|
||||
|
||||
&__item {
|
||||
padding: $padding-small $padding-global;
|
||||
transition: background-color 200ms ease;
|
||||
|
||||
&:hover {
|
||||
background-color: rgba($color-black, 0.1);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@ -10,6 +10,15 @@
|
|||
|
||||
}
|
||||
|
||||
&__title {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: flex-start;
|
||||
font-size: $font-size;
|
||||
font-weight: bold;
|
||||
margin-bottom: $padding-global;
|
||||
}
|
||||
|
||||
&__footer {
|
||||
display: inline-grid;
|
||||
grid-template-columns: repeat(2, 1fr);
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
@use "src/styles/variables" as *;
|
||||
|
||||
.vm-time-duration {
|
||||
max-height: 168px;
|
||||
max-height: 200px;
|
||||
overflow: auto;
|
||||
font-size: $font-size;
|
||||
}
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
import React, { FC, useEffect, useState, useMemo, useRef } from "preact/compat";
|
||||
import { dateFromSeconds, formatDateForNativeInput } from "../../../../utils/time";
|
||||
import { dateFromSeconds, formatDateForNativeInput, getRelativeTime, getUTCByTimezone } from "../../../../utils/time";
|
||||
import TimeDurationSelector from "../TimeDurationSelector/TimeDurationSelector";
|
||||
import dayjs from "dayjs";
|
||||
import { getAppModeEnable } from "../../../../utils/app-mode";
|
||||
|
@ -22,20 +22,25 @@ export const TimeSelector: FC = () => {
|
|||
const [until, setUntil] = useState<string>();
|
||||
const [from, setFrom] = useState<string>();
|
||||
|
||||
const formFormat = useMemo(() => dayjs(from).format(DATE_TIME_FORMAT), [from]);
|
||||
const untilFormat = useMemo(() => dayjs(until).format(DATE_TIME_FORMAT), [until]);
|
||||
const formFormat = useMemo(() => dayjs.tz(from).format(DATE_TIME_FORMAT), [from]);
|
||||
const untilFormat = useMemo(() => dayjs.tz(until).format(DATE_TIME_FORMAT), [until]);
|
||||
|
||||
const { period: { end, start }, relativeTime } = useTimeState();
|
||||
const { period: { end, start }, relativeTime, timezone, duration } = useTimeState();
|
||||
const dispatch = useTimeDispatch();
|
||||
const appModeEnable = getAppModeEnable();
|
||||
|
||||
const activeTimezone = useMemo(() => ({
|
||||
region: timezone,
|
||||
utc: getUTCByTimezone(timezone)
|
||||
}), [timezone]);
|
||||
|
||||
useEffect(() => {
|
||||
setUntil(formatDateForNativeInput(dateFromSeconds(end)));
|
||||
}, [end]);
|
||||
}, [timezone, end]);
|
||||
|
||||
useEffect(() => {
|
||||
setFrom(formatDateForNativeInput(dateFromSeconds(start)));
|
||||
}, [start]);
|
||||
}, [timezone, start]);
|
||||
|
||||
const setDuration = ({ duration, until, id }: {duration: string, until: Date, id: string}) => {
|
||||
dispatch({ type: "SET_RELATIVE_TIME", payload: { duration, until, id } });
|
||||
|
@ -43,13 +48,13 @@ export const TimeSelector: FC = () => {
|
|||
};
|
||||
|
||||
const formatRange = useMemo(() => {
|
||||
const startFormat = dayjs(dateFromSeconds(start)).format(DATE_TIME_FORMAT);
|
||||
const endFormat = dayjs(dateFromSeconds(end)).format(DATE_TIME_FORMAT);
|
||||
const startFormat = dayjs.tz(dateFromSeconds(start)).format(DATE_TIME_FORMAT);
|
||||
const endFormat = dayjs.tz(dateFromSeconds(end)).format(DATE_TIME_FORMAT);
|
||||
return {
|
||||
start: startFormat,
|
||||
end: endFormat
|
||||
};
|
||||
}, [start, end]);
|
||||
}, [start, end, timezone]);
|
||||
|
||||
const dateTitle = useMemo(() => {
|
||||
const isRelativeTime = relativeTime && relativeTime !== "none";
|
||||
|
@ -65,7 +70,10 @@ export const TimeSelector: FC = () => {
|
|||
|
||||
const setTimeAndClosePicker = () => {
|
||||
if (from && until) {
|
||||
dispatch({ type: "SET_PERIOD", payload: { from: new Date(from), to: new Date(until) } });
|
||||
dispatch({ type: "SET_PERIOD", payload: {
|
||||
from: dayjs(from).toDate(),
|
||||
to: dayjs(until).toDate()
|
||||
} });
|
||||
}
|
||||
setOpenOptions(false);
|
||||
};
|
||||
|
@ -91,6 +99,15 @@ export const TimeSelector: FC = () => {
|
|||
setOpenOptions(false);
|
||||
};
|
||||
|
||||
useEffect(() => {
|
||||
const value = getRelativeTime({
|
||||
relativeTimeId: relativeTime,
|
||||
defaultDuration: duration,
|
||||
defaultEndInput: dateFromSeconds(end),
|
||||
});
|
||||
setDuration({ id: value.relativeTimeId, duration: value.duration, until: value.endInput });
|
||||
}, [timezone]);
|
||||
|
||||
useClickOutside(wrapperRef, (e) => {
|
||||
const target = e.target as HTMLElement;
|
||||
const isFromButton = fromRef?.current && fromRef.current.contains(target);
|
||||
|
@ -159,6 +176,10 @@ export const TimeSelector: FC = () => {
|
|||
/>
|
||||
</div>
|
||||
</div>
|
||||
<div className="vm-time-selector-left-timezone">
|
||||
<div className="vm-time-selector-left-timezone__title">{activeTimezone.region}</div>
|
||||
<div className="vm-time-selector-left-timezone__utc">{activeTimezone.utc}</div>
|
||||
</div>
|
||||
<Button
|
||||
variant="text"
|
||||
startIcon={<AlarmIcon />}
|
||||
|
|
|
@ -30,6 +30,10 @@
|
|||
cursor: pointer;
|
||||
transition: color 200ms ease-in-out, border-bottom-color 300ms ease;
|
||||
|
||||
&:last-child {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
&:hover {
|
||||
border-bottom-color: $color-primary;
|
||||
}
|
||||
|
@ -52,6 +56,26 @@
|
|||
}
|
||||
}
|
||||
|
||||
&-timezone {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
gap: $padding-small;
|
||||
font-size: $font-size-small;
|
||||
margin-bottom: $padding-small;
|
||||
|
||||
&__title {}
|
||||
|
||||
&__utc {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
background-color: rgba($color-black, 0.06);
|
||||
padding: calc($padding-small/2);
|
||||
border-radius: $border-radius-small;
|
||||
}
|
||||
}
|
||||
|
||||
&__controls {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(2, 1fr);
|
||||
|
|
|
@ -56,13 +56,14 @@ const Autocomplete: FC<AutocompleteProps> = ({
|
|||
const handleKeyDown = (e: KeyboardEvent) => {
|
||||
const { key, ctrlKey, metaKey, shiftKey } = e;
|
||||
const modifiers = ctrlKey || metaKey || shiftKey;
|
||||
const hasOptions = foundOptions.length;
|
||||
|
||||
if (key === "ArrowUp" && !modifiers) {
|
||||
if (key === "ArrowUp" && !modifiers && hasOptions) {
|
||||
e.preventDefault();
|
||||
setFocusOption((prev) => prev <= 0 ? 0 : prev - 1);
|
||||
}
|
||||
|
||||
if (key === "ArrowDown" && !modifiers) {
|
||||
if (key === "ArrowDown" && !modifiers && hasOptions) {
|
||||
e.preventDefault();
|
||||
const lastIndex = foundOptions.length - 1;
|
||||
setFocusOption((prev) => prev >= lastIndex ? lastIndex : prev + 1);
|
||||
|
|
|
@ -30,8 +30,8 @@ const Calendar: FC<DatePickerProps> = ({
|
|||
onClose
|
||||
}) => {
|
||||
const [displayYears, setDisplayYears] = useState(false);
|
||||
const [viewDate, setViewDate] = useState(dayjs(date));
|
||||
const [selectDate, setSelectDate] = useState(dayjs(date));
|
||||
const [viewDate, setViewDate] = useState(dayjs.tz(date));
|
||||
const [selectDate, setSelectDate] = useState(dayjs.tz(date));
|
||||
const [tab, setTab] = useState(tabs[0].value);
|
||||
|
||||
const toggleDisplayYears = () => {
|
||||
|
@ -62,7 +62,7 @@ const Calendar: FC<DatePickerProps> = ({
|
|||
};
|
||||
|
||||
useEffect(() => {
|
||||
if (selectDate.format() === dayjs(date).format()) return;
|
||||
if (selectDate.format() === dayjs.tz(date).format()) return;
|
||||
onChange(selectDate.format(format));
|
||||
}, [selectDate]);
|
||||
|
||||
|
|
|
@ -11,7 +11,7 @@ interface CalendarBodyProps {
|
|||
const weekday = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"];
|
||||
|
||||
const CalendarBody: FC<CalendarBodyProps> = ({ viewDate, selectDate, onChangeSelectDate }) => {
|
||||
const today = dayjs().startOf("day");
|
||||
const today = dayjs().tz().startOf("day");
|
||||
|
||||
const days: (Dayjs|null)[] = useMemo(() => {
|
||||
const result = new Array(42).fill(null);
|
||||
|
|
|
@ -135,6 +135,7 @@
|
|||
display: grid;
|
||||
grid-template-columns: repeat(3, 1fr);
|
||||
gap: $padding-small;
|
||||
max-height: 400px;
|
||||
overflow: auto;
|
||||
|
||||
&__year {
|
||||
|
|
|
@ -20,7 +20,7 @@ const DatePicker = forwardRef<HTMLDivElement, DatePickerProps>(({
|
|||
onChange,
|
||||
}, ref) => {
|
||||
const [openCalendar, setOpenCalendar] = useState(false);
|
||||
const dateDayjs = useMemo(() => date ? dayjs(date) : dayjs(), [date]);
|
||||
const dateDayjs = useMemo(() => date ? dayjs.tz(date) : dayjs().tz(), [date]);
|
||||
|
||||
const toggleOpenCalendar = () => {
|
||||
setOpenCalendar(prev => !prev);
|
||||
|
|
|
@ -28,6 +28,10 @@ const keyList = [
|
|||
{
|
||||
keys: [ctrlMeta, "Arrow Down"],
|
||||
description: "Next command from the Query history"
|
||||
},
|
||||
{
|
||||
keys: [ctrlMeta, "Click by 'Eye'"],
|
||||
description: "Toggle multiple queries"
|
||||
}
|
||||
]
|
||||
},
|
||||
|
@ -36,10 +40,12 @@ const keyList = [
|
|||
list: [
|
||||
{
|
||||
keys: [ctrlMeta, "Scroll Up"],
|
||||
alt: ["+"],
|
||||
description: "Zoom in"
|
||||
},
|
||||
{
|
||||
keys: [ctrlMeta, "Scroll Down"],
|
||||
alt: ["-"],
|
||||
description: "Zoom out"
|
||||
},
|
||||
{
|
||||
|
@ -118,6 +124,15 @@ const ShortcutKeys: FC = () => {
|
|||
{i !== l.keys.length - 1 ? "+" : ""}
|
||||
</>
|
||||
))}
|
||||
{l.alt && l.alt.map((alt, i) => (
|
||||
<>
|
||||
or
|
||||
<code key={alt}>
|
||||
{alt}
|
||||
</code>
|
||||
{i !== l.alt.length - 1 ? "+" : ""}
|
||||
</>
|
||||
))}
|
||||
</div>
|
||||
<p className="vm-shortcuts-section-list-item__description">
|
||||
{l.description}
|
||||
|
|
|
@ -19,7 +19,7 @@
|
|||
|
||||
&-item {
|
||||
display: grid;
|
||||
grid-template-columns: 180px 1fr;
|
||||
grid-template-columns: 210px 1fr;
|
||||
align-items: center;
|
||||
gap: $padding-small;
|
||||
|
||||
|
|
|
@ -10,6 +10,7 @@ import { TimeParams } from "../../../types";
|
|||
import { AxisRange, YaxisState } from "../../../state/graph/reducer";
|
||||
import { getAvgFromArray, getMaxFromArray, getMinFromArray } from "../../../utils/math";
|
||||
import classNames from "classnames";
|
||||
import { useTimeState } from "../../../state/time/TimeStateContext";
|
||||
import "./style.scss";
|
||||
|
||||
export interface GraphViewProps {
|
||||
|
@ -54,6 +55,7 @@ const GraphView: FC<GraphViewProps> = ({
|
|||
alias = [],
|
||||
fullWidth = true
|
||||
}) => {
|
||||
const { timezone } = useTimeState();
|
||||
const currentStep = useMemo(() => customStep || period.step || 1, [period.step, customStep]);
|
||||
|
||||
const [dataChart, setDataChart] = useState<uPlotData>([[]]);
|
||||
|
@ -121,7 +123,7 @@ const GraphView: FC<GraphViewProps> = ({
|
|||
setDataChart(timeDataSeries as uPlotData);
|
||||
setSeries(tempSeries);
|
||||
setLegend(tempLegend);
|
||||
}, [data]);
|
||||
}, [data, timezone]);
|
||||
|
||||
useEffect(() => {
|
||||
const tempLegend: LegendItemType[] = [];
|
||||
|
|
8
app/vmui/packages/vmui/src/constants/dayjsPlugins.ts
Normal file
|
@ -0,0 +1,8 @@
|
|||
import dayjs from "dayjs";
|
||||
import timezone from "dayjs/plugin/timezone";
|
||||
import duration from "dayjs/plugin/duration";
|
||||
import utc from "dayjs/plugin/utc";
|
||||
|
||||
dayjs.extend(timezone);
|
||||
dayjs.extend(duration);
|
||||
dayjs.extend(utc);
|
|
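The plugin registration in this new file is what makes the `dayjs.tz(...)` calls used throughout these diffs work. A minimal sketch (not part of the commit; zone and dates are illustrative) of the resulting behavior:

```
import dayjs from "dayjs";
import utc from "dayjs/plugin/utc";
import timezone from "dayjs/plugin/timezone";

// the timezone plugin builds on utc, so both must be registered
// before dayjs.tz() / dayjs.tz.setDefault() can be used
dayjs.extend(utc);
dayjs.extend(timezone);

dayjs.tz.setDefault("America/New_York");    // later dayjs.tz() calls use this zone
const local = dayjs.tz("2022-12-01 12:00"); // parsed as New York local time
console.log(local.format());                // e.g. "2022-12-01T12:00:00-05:00"
```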
@ -36,7 +36,7 @@ export const SnackbarProvider: FC = ({ children }) => {
|
|||
setSnack({
|
||||
message: infoMessage.text,
|
||||
variant: infoMessage.type,
|
||||
key: new Date().getTime()
|
||||
key: Date.now()
|
||||
});
|
||||
setOpen(true);
|
||||
const timeout = setTimeout(handleClose, 4000);
|
||||
|
|
|
@ -8,9 +8,8 @@ const useClickOutside = <T extends HTMLElement = HTMLElement>(
|
|||
preventRef?: RefObject<T>
|
||||
) => {
|
||||
useEffect(() => {
|
||||
const el = ref?.current;
|
||||
|
||||
const listener = (event: Event) => {
|
||||
const el = ref?.current;
|
||||
const target = event.target as HTMLElement;
|
||||
const isPreventRef = preventRef?.current && preventRef.current.contains(target);
|
||||
if (!el || el.contains((event?.target as Node) || null) || isPreventRef) {
|
||||
|
@ -23,13 +22,10 @@ const useClickOutside = <T extends HTMLElement = HTMLElement>(
|
|||
document.addEventListener("mousedown", listener);
|
||||
document.addEventListener("touchstart", listener);
|
||||
|
||||
const removeListeners = () => {
|
||||
return () => {
|
||||
document.removeEventListener("mousedown", listener);
|
||||
document.removeEventListener("touchstart", listener);
|
||||
};
|
||||
|
||||
if (!el) removeListeners();
|
||||
return removeListeners;
|
||||
}, [ref, handler]); // Reload only if ref or handler changes
|
||||
};
|
||||
|
||||
|
|
|
@ -1,4 +1,5 @@
|
|||
import React, { render } from "preact/compat";
|
||||
import "./constants/dayjsPlugins";
|
||||
import App from "./App";
|
||||
import reportWebVitals from "./reportWebVitals";
|
||||
import "./styles/style.scss";
|
||||
|
|
|
@ -11,6 +11,8 @@ import Button from "../../../components/Main/Button/Button";
|
|||
import "./style.scss";
|
||||
import Tooltip from "../../../components/Main/Tooltip/Tooltip";
|
||||
import classNames from "classnames";
|
||||
import { MouseEvent as ReactMouseEvent } from "react";
|
||||
import { arrayEquals } from "../../../utils/array";
|
||||
|
||||
export interface QueryConfiguratorProps {
|
||||
error?: ErrorTypes | string;
|
||||
|
@ -55,8 +57,16 @@ const QueryConfigurator: FC<QueryConfiguratorProps> = ({ error, queryOptions, on
|
|||
setStateQuery(prev => prev.filter((q, i) => i !== index));
|
||||
};
|
||||
|
||||
const onToggleHideQuery = (index: number) => {
|
||||
setHideQuery(prev => prev.includes(index) ? prev.filter(n => n !== index) : [...prev, index]);
|
||||
const onToggleHideQuery = (e: ReactMouseEvent<HTMLButtonElement, MouseEvent>, index: number) => {
|
||||
const { ctrlKey, metaKey } = e;
|
||||
const ctrlMetaKey = ctrlKey || metaKey;
|
||||
|
||||
if (ctrlMetaKey) {
|
||||
const hideIndexes = stateQuery.map((q, i) => i).filter(n => n !== index);
|
||||
setHideQuery(prev => arrayEquals(hideIndexes, prev) ? [] : hideIndexes);
|
||||
} else {
|
||||
setHideQuery(prev => prev.includes(index) ? prev.filter(n => n !== index) : [...prev, index]);
|
||||
}
|
||||
};
|
||||
|
||||
const handleChangeQuery = (value: string, index: number) => {
|
||||
|
@ -84,11 +94,11 @@ const QueryConfigurator: FC<QueryConfiguratorProps> = ({ error, queryOptions, on
|
|||
|
||||
const createHandlerRemoveQuery = (i: number) => () => {
|
||||
onRemoveQuery(i);
|
||||
setHideQuery(prev => prev.map(n => n > i ? n - 1: n));
|
||||
setHideQuery(prev => prev.includes(i) ? prev.filter(n => n !== i) : prev.map(n => n > i ? n - 1: n));
|
||||
};
|
||||
|
||||
const createHandlerHideQuery = (i: number) => () => {
|
||||
onToggleHideQuery(i);
|
||||
const createHandlerHideQuery = (i: number) => (e: ReactMouseEvent<HTMLButtonElement, MouseEvent>) => {
|
||||
onToggleHideQuery(e, i);
|
||||
};
|
||||
|
||||
useEffect(() => {
|
||||
|
|
|
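To trace the new ctrl-click behavior in `onToggleHideQuery` above with a worked example: with four queries and a ctrl-click on the `eye` of query #2 (index 1), `hideIndexes` becomes `[0, 2, 3]`, so everything except query #2 is hidden. A second ctrl-click on the same `eye` finds the current `hideQuery` already equal to `[0, 2, 3]` (`arrayEquals` returns `true`), so the state resets to `[]` and all graphs reappear.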
@ -23,7 +23,7 @@ export type Action =
|
|||
export const initialState: CardinalityState = {
|
||||
runQuery: 0,
|
||||
topN: getQueryStringValue("topN", 10) as number,
|
||||
date: getQueryStringValue("date", dayjs(new Date()).format(DATE_FORMAT)) as string,
|
||||
date: getQueryStringValue("date", dayjs().tz().format(DATE_FORMAT)) as string,
|
||||
focusLabel: getQueryStringValue("focusLabel", "") as string,
|
||||
match: getQueryStringValue("match", "") as string,
|
||||
extraLabel: getQueryStringValue("extra_label", "") as string,
|
||||
|
|
|
@ -5,14 +5,18 @@ import {
|
|||
getDateNowUTC,
|
||||
getDurationFromPeriod,
|
||||
getTimeperiodForDuration,
|
||||
getRelativeTime
|
||||
getRelativeTime,
|
||||
setTimezone
|
||||
} from "../../utils/time";
|
||||
import { getQueryStringValue } from "../../utils/query-string";
|
||||
import dayjs from "dayjs";
|
||||
import { getFromStorage, saveToStorage } from "../../utils/storage";
|
||||
|
||||
export interface TimeState {
|
||||
duration: string;
|
||||
period: TimeParams;
|
||||
relativeTime?: string;
|
||||
timezone: string;
|
||||
}
|
||||
|
||||
export type TimeAction =
|
||||
|
@ -21,12 +25,16 @@ export type TimeAction =
|
|||
| { type: "SET_PERIOD", payload: TimePeriod }
|
||||
| { type: "RUN_QUERY"}
|
||||
| { type: "RUN_QUERY_TO_NOW"}
|
||||
| { type: "SET_TIMEZONE", payload: string }
|
||||
|
||||
const timezone = getFromStorage("TIMEZONE") as string || dayjs.tz.guess();
|
||||
setTimezone(timezone);
|
||||
|
||||
const defaultDuration = getQueryStringValue("g0.range_input") as string;
|
||||
|
||||
const { duration, endInput, relativeTimeId } = getRelativeTime({
|
||||
defaultDuration: defaultDuration || "1h",
|
||||
defaultEndInput: new Date(formatDateToLocal(getQueryStringValue("g0.end_input", getDateNowUTC()) as Date)),
|
||||
defaultEndInput: formatDateToLocal(getQueryStringValue("g0.end_input", getDateNowUTC()) as string),
|
||||
relativeTimeId: defaultDuration ? getQueryStringValue("g0.relative_time", "none") as string : undefined
|
||||
});
|
||||
|
||||
|
@ -34,8 +42,10 @@ export const initialTimeState: TimeState = {
|
|||
duration,
|
||||
period: getTimeperiodForDuration(duration, endInput),
|
||||
relativeTime: relativeTimeId,
|
||||
timezone,
|
||||
};
|
||||
|
||||
|
||||
export function reducer(state: TimeState, action: TimeAction): TimeState {
|
||||
switch (action.type) {
|
||||
case "SET_DURATION":
|
||||
|
@ -49,7 +59,7 @@ export function reducer(state: TimeState, action: TimeAction): TimeState {
|
|||
return {
|
||||
...state,
|
||||
duration: action.payload.duration,
|
||||
period: getTimeperiodForDuration(action.payload.duration, new Date(action.payload.until)),
|
||||
period: getTimeperiodForDuration(action.payload.duration, action.payload.until),
|
||||
relativeTime: action.payload.id,
|
||||
};
|
||||
case "SET_PERIOD":
|
||||
|
@ -77,6 +87,13 @@ export function reducer(state: TimeState, action: TimeAction): TimeState {
|
|||
...state,
|
||||
period: getTimeperiodForDuration(state.duration)
|
||||
};
|
||||
case "SET_TIMEZONE":
|
||||
setTimezone(action.payload);
|
||||
saveToStorage("TIMEZONE", action.payload);
|
||||
return {
|
||||
...state,
|
||||
timezone: action.payload
|
||||
};
|
||||
default:
|
||||
throw new Error();
|
||||
}
|
||||
|
|
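A sketch of how this new action is dispatched from a component, mirroring the `GlobalSettings` handler shown earlier in this diff:

```
// inside a component wrapped by TimeStateContext:
const timeDispatch = useTimeDispatch();

// the reducer persists the zone via saveToStorage("TIMEZONE", ...) and calls
// dayjs.tz.setDefault(), so subsequent dayjs.tz() calls render in the new zone
timeDispatch({ type: "SET_TIMEZONE", payload: "America/New_York" });
```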
|
@ -105,3 +105,9 @@ export interface SeriesLimits {
|
|||
chart: number,
|
||||
code: number,
|
||||
}
|
||||
|
||||
export interface Timezone {
|
||||
region: string,
|
||||
utc: string,
|
||||
search?: string
|
||||
}
|
||||
|
|
|
@ -5,7 +5,7 @@ import { MAX_QUERY_FIELDS } from "../constants/graph";
|
|||
export const setQueryStringWithoutPageReload = (params: Record<string, unknown>): void => {
|
||||
const w = window;
|
||||
if (w) {
|
||||
const qsValue = Object.entries(params).map(([k, v]) => `${k}=${v}`).join("&");
|
||||
const qsValue = Object.entries(params).map(([k, v]) => `${k}=${encodeURIComponent(String(v))}`).join("&");
|
||||
const qs = qsValue ? `?${qsValue}` : "";
|
||||
const newurl = `${w.location.protocol}//${w.location.host}${w.location.pathname}${qs}${w.location.hash}`;
|
||||
w.history.pushState({ path: newurl }, "", newurl);
|
||||
|
|
|
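A quick sketch of why the `encodeURIComponent` change matters here (query value is illustrative): multi-line queries contain newlines, which previously produced an invalid query string.

```
const params = { "g0.expr": "sum(rate(http_requests_total))\n  by (job)" };
const qs = Object.entries(params)
  .map(([k, v]) => `${k}=${encodeURIComponent(String(v))}`)
  .join("&");
// => "g0.expr=sum(rate(http_requests_total))%0A%20%20by%20(job)"
```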
@ -6,6 +6,7 @@ export type StorageKeys = "BASIC_AUTH_DATA"
|
|||
| "QUERY_TRACING"
|
||||
| "SERIES_LIMITS"
|
||||
| "TABLE_COMPACT"
|
||||
| "TIMEZONE"
|
||||
|
||||
export const saveToStorage = (key: StorageKeys, value: string | boolean | Record<string, unknown>): void => {
|
||||
if (value) {
|
||||
|
|
|
@ -1,17 +1,16 @@
|
|||
import { RelativeTimeOption, TimeParams, TimePeriod } from "../types";
|
||||
import { RelativeTimeOption, TimeParams, TimePeriod, Timezone } from "../types";
|
||||
import dayjs, { UnitTypeShort } from "dayjs";
|
||||
import duration from "dayjs/plugin/duration";
|
||||
import utc from "dayjs/plugin/utc";
|
||||
import { getQueryStringValue } from "./query-string";
|
||||
import { DATE_ISO_FORMAT } from "../constants/date";
|
||||
|
||||
dayjs.extend(duration);
|
||||
dayjs.extend(utc);
|
||||
|
||||
const MAX_ITEMS_PER_CHART = window.innerWidth / 4;
|
||||
|
||||
export const limitsDurations = { min: 1, max: 1.578e+11 }; // min: 1 ms, max: 5 years
|
||||
|
||||
// eslint-disable-next-line @typescript-eslint/ban-ts-comment
|
||||
// @ts-ignore
|
||||
export const supportedTimezones = Intl.supportedValuesOf("timeZone") as string[];
|
||||
|
||||
export const supportedDurations = [
|
||||
{ long: "days", short: "d", possible: "day" },
|
||||
{ long: "weeks", short: "w", possible: "week" },
|
||||
|
@ -38,7 +37,7 @@ export const isSupportedDuration = (str: string): Partial<Record<UnitTypeShort,
|
|||
};
|
||||
|
||||
export const getTimeperiodForDuration = (dur: string, date?: Date): TimeParams => {
|
||||
const n = (date || new Date()).valueOf() / 1000;
|
||||
const n = (date || dayjs().toDate()).valueOf() / 1000;
|
||||
|
||||
const durItems = dur.trim().split(" ");
|
||||
|
||||
|
@ -64,24 +63,24 @@ export const getTimeperiodForDuration = (dur: string, date?: Date): TimeParams =
|
|||
start: n - delta,
|
||||
end: n,
|
||||
step: step,
|
||||
date: formatDateToUTC(date || new Date())
|
||||
date: formatDateToUTC(date || dayjs().toDate())
|
||||
};
|
||||
};
|
||||
|
||||
export const formatDateToLocal = (date: Date): string => {
|
||||
return dayjs(date).utcOffset(0, true).local().format(DATE_ISO_FORMAT);
|
||||
export const formatDateToLocal = (date: string): Date => {
|
||||
return dayjs(date).utcOffset(0, true).toDate();
|
||||
};
|
||||
|
||||
export const formatDateToUTC = (date: Date): string => {
|
||||
return dayjs(date).utc().format(DATE_ISO_FORMAT);
|
||||
return dayjs.tz(date).utc().format(DATE_ISO_FORMAT);
|
||||
};
|
||||
|
||||
export const formatDateForNativeInput = (date: Date): string => {
|
||||
return dayjs(date).format(DATE_ISO_FORMAT);
|
||||
return dayjs.tz(date).format(DATE_ISO_FORMAT);
|
||||
};
|
||||
|
||||
export const getDateNowUTC = (): Date => {
|
||||
return new Date(dayjs().utc().format(DATE_ISO_FORMAT));
|
||||
export const getDateNowUTC = (): string => {
|
||||
return dayjs().utc().format(DATE_ISO_FORMAT);
|
||||
};
|
||||
|
||||
export const getDurationFromMilliseconds = (ms: number): string => {
|
||||
|
@ -115,7 +114,10 @@ export const checkDurationLimit = (dur: string): string => {
|
|||
return dur;
|
||||
};
|
||||
|
||||
export const dateFromSeconds = (epochTimeInSeconds: number): Date => new Date(epochTimeInSeconds * 1000);
|
||||
export const dateFromSeconds = (epochTimeInSeconds: number): Date => dayjs(epochTimeInSeconds * 1000).toDate();
|
||||
|
||||
const getYesterday = () => dayjs().tz().subtract(1, "day").endOf("day").toDate();
|
||||
const getToday = () => dayjs().tz().endOf("day").toDate();
|
||||
|
||||
export const relativeTimeOptions: RelativeTimeOption[] = [
|
||||
{ title: "Last 5 minutes", duration: "5m" },
|
||||
|
@ -132,11 +134,11 @@ export const relativeTimeOptions: RelativeTimeOption[] = [
|
|||
{ title: "Last 90 days", duration: "90d" },
|
||||
{ title: "Last 180 days", duration: "180d" },
|
||||
{ title: "Last 1 year", duration: "1y" },
|
||||
{ title: "Yesterday", duration: "1d", until: () => dayjs().subtract(1, "day").endOf("day").toDate() },
|
||||
{ title: "Today", duration: "1d", until: () => dayjs().endOf("day").toDate() },
|
||||
{ title: "Yesterday", duration: "1d", until: getYesterday },
|
||||
{ title: "Today", duration: "1d", until: getToday },
|
||||
].map(o => ({
|
||||
id: o.title.replace(/\s/g, "_").toLocaleLowerCase(),
|
||||
until: o.until ? o.until : () => dayjs().toDate(),
|
||||
until: o.until ? o.until : () => dayjs().tz().toDate(),
|
||||
...o
|
||||
}));
|
||||
|
||||
|
@ -151,3 +153,35 @@ export const getRelativeTime = ({ relativeTimeId, defaultDuration, defaultEndInp
|
|||
endInput: target ? target.until() : defaultEndInput
|
||||
};
|
||||
};
|
||||
|
||||
export const getUTCByTimezone = (timezone: string) => {
|
||||
const date = dayjs().tz(timezone);
|
||||
return `UTC${date.format("Z")}`;
|
||||
};
|
||||
|
||||
export const getTimezoneList = (search = "") => {
|
||||
const regexp = new RegExp(search, "i");
|
||||
|
||||
return supportedTimezones.reduce((acc: {[key: string]: Timezone[]}, region) => {
|
||||
const zone = (region.match(/^(.*?)\//) || [])[1] || "unknown";
|
||||
const utc = getUTCByTimezone(region);
|
||||
const item = {
|
||||
region,
|
||||
utc,
|
||||
search: `${region} ${utc} ${region.replace(/[/_]/gmi, " ")}`
|
||||
};
|
||||
const includeZone = !search || (search && regexp.test(item.search));
|
||||
|
||||
if (includeZone && acc[zone]) {
|
||||
acc[zone].push(item);
|
||||
} else if (includeZone) {
|
||||
acc[zone] = [item];
|
||||
}
|
||||
|
||||
return acc;
|
||||
}, {});
|
||||
};
|
||||
|
||||
export const setTimezone = (timezone: string) => {
|
||||
dayjs.tz.setDefault(timezone);
|
||||
};
|
||||
|
|
|
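A rough usage sketch for the helpers above (output shape abbreviated; the exact UTC offset depends on the current date):

```
// getTimezoneList groups IANA zone names by their region prefix:
const groups = getTimezoneList("berlin");
// => { Europe: [{ region: "Europe/Berlin", utc: "UTC+01:00",
//                 search: "Europe/Berlin UTC+01:00 Europe Berlin" }] }

// setTimezone re-points the dayjs default zone used by all dayjs.tz() calls:
setTimezone("Europe/Berlin");
```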
@ -5,6 +5,18 @@ import { AxisRange } from "../../state/graph/reducer";
|
|||
import { formatTicks, sizeAxis } from "./helpers";
|
||||
import { TimeParams } from "../../types";
|
||||
|
||||
// see https://github.com/leeoniya/uPlot/tree/master/docs#axis--grid-opts
|
||||
const timeValues = [
|
||||
// tick incr default year month day hour min sec mode
|
||||
[3600 * 24 * 365, "{YYYY}", null, null, null, null, null, null, 1],
|
||||
[3600 * 24 * 28, "{MMM}", "\n{YYYY}", null, null, null, null, null, 1],
|
||||
[3600 * 24, "{MM}-{DD}", "\n{YYYY}", null, null, null, null, null, 1],
|
||||
[3600, "{HH}:{mm}", "\n{YYYY}-{MM}-{DD}", null, "\n{MM}-{DD}", null, null, null, 1],
|
||||
[60, "{HH}:{mm}", "\n{YYYY}-{MM}-{DD}", null, "\n{MM}-{DD}", null, null, null, 1],
|
||||
[1, "{HH}:{mm}:{ss}", "\n{YYYY}-{MM}-{DD}", null, "\n{MM}-{DD} {HH}:{mm}", null, null, null, 1],
|
||||
[0.001, ":{ss}.{fff}", "\n{YYYY}-{MM}-{DD} {HH}:{mm}", null, "\n{MM}-{DD} {HH}:{mm}", null, "\n{HH}:{mm}", null, 1],
|
||||
];
|
||||
|
||||
export const getAxes = (series: Series[], unit?: string): Axis[] => Array.from(new Set(series.map(s => s.scale))).map(a => {
|
||||
const axis = {
|
||||
scale: a,
|
||||
|
@ -13,7 +25,7 @@ export const getAxes = (series: Series[], unit?: string): Axis[] => Array.from(n
|
|||
font: "10px Arial",
|
||||
values: (u: uPlot, ticks: number[]) => formatTicks(u, ticks, unit)
|
||||
};
|
||||
if (!a) return { space: 80 };
|
||||
if (!a) return { space: 80, values: timeValues };
|
||||
if (!(Number(a) % 2)) return { ...axis, side: 1 };
|
||||
return axis;
|
||||
});
|
||||
|
|
File diff suppressed because it is too large
|
@ -225,7 +225,7 @@
|
|||
"uid": "$ds"
|
||||
},
|
||||
"exemplar": false,
|
||||
"expr": "sum(vm_rows{job=~\"$job\", instance=~\"$instance\", type!=\"indexdb\"})",
|
||||
"expr": "sum(vm_rows{job=~\"$job\", instance=~\"$instance\", type!~\"indexdb.*\"})",
|
||||
"format": "time_series",
|
||||
"instant": true,
|
||||
"interval": "",
|
||||
|
@ -3767,7 +3767,7 @@
|
|||
"uid": "$ds"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "vm_free_disk_space_bytes{job=~\"$job\", instance=~\"$instance\"} \n/ ignoring(path) (\n (\n rate(vm_rows_added_to_storage_total{job=~\"$job\", instance=~\"$instance\"}[1d]) \n - ignoring(type) rate(vm_deduplicated_samples_total{job=~\"$job\", instance=~\"$instance\", type=\"merge\"}[1d])\n ) * scalar(\n sum(vm_data_size_bytes{job=~\"$job\", instance=~\"$instance\", type!=\"indexdb\"}) \n / sum(vm_rows{job=~\"$job\", instance=~\"$instance\", type!=\"indexdb\"})\n )\n )",
|
||||
"expr": "vm_free_disk_space_bytes{job=~\"$job\", instance=~\"$instance\"} \n/ ignoring(path) (\n (\n rate(vm_rows_added_to_storage_total{job=~\"$job\", instance=~\"$instance\"}[1d]) \n - ignoring(type) rate(vm_deduplicated_samples_total{job=~\"$job\", instance=~\"$instance\", type=\"merge\"}[1d])\n ) * scalar(\n sum(vm_data_size_bytes{job=~\"$job\", instance=~\"$instance\", type!~\"indexdb.*\"}) \n / sum(vm_rows{job=~\"$job\", instance=~\"$instance\", type!~\"indexdb.*\"})\n )\n )",
|
||||
"format": "time_series",
|
||||
"hide": false,
|
||||
"interval": "",
|
||||
|
@ -3874,7 +3874,7 @@
|
|||
"uid": "$ds"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum(vm_data_size_bytes{job=~\"$job\", instance=~\"$instance\", type!=\"indexdb\"})",
|
||||
"expr": "sum(vm_data_size_bytes{job=~\"$job\", instance=~\"$instance\", type!~\"indexdb.*\"})",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
|
@ -3900,7 +3900,7 @@
|
|||
"uid": "$ds"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum(vm_data_size_bytes{job=~\"$job\", instance=~\"$instance\", type=\"indexdb\"})",
|
||||
"expr": "sum(vm_data_size_bytes{job=~\"$job\", instance=~\"$instance\", type=~\"indexdb.*\"})",
|
||||
"format": "time_series",
|
||||
"hide": false,
|
||||
"interval": "",
|
||||
|
@ -4156,7 +4156,7 @@
|
|||
"type": "prometheus",
|
||||
"uid": "$ds"
|
||||
},
|
||||
"expr": "sum(vm_rows{job=~\"$job\", instance=~\"$instance\", type != \"indexdb\"})",
|
||||
"expr": "sum(vm_rows{job=~\"$job\", instance=~\"$instance\", type!~\"indexdb.*\"})",
|
||||
"format": "time_series",
|
||||
"interval": "",
|
||||
"intervalFactor": 1,
|
||||
|
@ -5306,4 +5306,4 @@
|
|||
"uid": "wNf0q_kZk",
|
||||
"version": 1,
|
||||
"weekStart": ""
|
||||
}
|
||||
}
|
||||
|
|
|
@ -5,14 +5,14 @@ Docker compose environment for VictoriaMetrics includes VictoriaMetrics componen
|
|||
and [Grafana](https://grafana.com/).
|
||||
|
||||
For starting the docker-compose environment ensure you have docker installed and running, and that you have access to the Internet.
|
||||
All commands should be executed from the root directory of this repo.
|
||||
**All commands should be executed from the root directory of [the repo](https://github.com/VictoriaMetrics/VictoriaMetrics).**
|
||||
|
||||
To spin-up environment for single server VictoriaMetrics run the following command :
|
||||
To spin-up environment for single server VictoriaMetrics run the following command:
|
||||
```
|
||||
make docker-single-up
|
||||
```
|
||||
|
||||
To shutdown the docker compose environment for single server run the following command:
|
||||
To shut down the docker-compose environment for single server run the following command:
|
||||
```
|
||||
make docker-single-down
|
||||
```
|
||||
|
@ -22,7 +22,7 @@ For cluster version the command will be the following:
|
|||
make docker-cluster-up
|
||||
```
|
||||
|
||||
To shutdown the docker compose environment for cluster version run the following command:
|
||||
To shut down the docker compose environment for cluster version run the following command:
|
||||
```
|
||||
make docker-cluster-down
|
||||
```
|
||||
|
@ -36,51 +36,49 @@ VictoriaMetrics will be accessible on the following ports:
|
|||
* `--httpListenAddr=:8428`
|
||||
|
||||
The communication scheme between components is the following:
|
||||
* [vmagent](#vmagent) sends scraped metrics to VictoriaMetrics;
|
||||
* [grafana](#grafana) is configured with datasource pointing to VictoriaMetrics;
|
||||
* [vmalert](#vmalert) is configured to query VictoriaMetrics and send alerts state
|
||||
* [vmagent](#vmagent) sends scraped metrics to `single server VictoriaMetrics`;
|
||||
* [grafana](#grafana) is configured with datasource pointing to `single server VictoriaMetrics`;
|
||||
* [vmalert](#vmalert) is configured to query `single server VictoriaMetrics` and send alerts state
|
||||
and recording rules back to it;
|
||||
* [alertmanager](#alertmanager) is configured to receive notifications from vmalert.
|
||||
* [alertmanager](#alertmanager) is configured to receive notifications from `vmalert`.
|
||||
|
||||
To access `vmalert` via `vmselect`
|
||||
use link [http://localhost:8428/vmalert](http://localhost:8428/vmalert/).
|
||||
To access `vmalert` use link [http://localhost:8428/vmalert](http://localhost:8428/vmalert/).
|
||||
|
||||
To access [vmui](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#vmui)
|
||||
use link [http://localhost:8428/vmui](http://localhost:8428/vmui).
|
||||
|
||||
## VictoriaMetrics cluster
|
||||
|
||||
VictoriaMetrics cluster environemnt consists of vminsert, vmstorage and vmselect components. vmselect
|
||||
has exposed port `:8481`, vminsert has exposed port `:8480` and the rest of components are available
|
||||
only inside of environment.
|
||||
VictoriaMetrics cluster environment consists of `vminsert`, `vmstorage` and `vmselect` components.
|
||||
`vmselect` has exposed port `:8481`, `vminsert` has exposed port `:8480` and the rest of components
|
||||
are available only inside the environment.
|
||||
|
||||
The communication scheme between components is the following:
|
||||
* [vmagent](#vmagent) sends scraped metrics to vminsert;
|
||||
* vminsert forwards data to vmstorage;
|
||||
* vmselect is connected to vmstorage for querying data;
|
||||
* [grafana](#grafana) is configured with datasource pointing to vmselect;
|
||||
* [vmalert](#vmalert) is configured to query vmselect and send alerts state
|
||||
and recording rules to vminsert;
|
||||
* [alertmanager](#alertmanager) is configured to receive notifications from vmalert.
|
||||
* [vmagent](#vmagent) sends scraped metrics to `vminsert`;
|
||||
* `vminsert` forwards data to `vmstorage`;
|
||||
* `vmselect` is connected to `vmstorage` for querying data;
|
||||
* [grafana](#grafana) is configured with datasource pointing to `vmselect`;
|
||||
* [vmalert](#vmalert) is configured to query `vmselect` and send alerts state
|
||||
and recording rules to `vminsert`;
|
||||
* [alertmanager](#alertmanager) is configured to receive notifications from `vmalert`.
|
||||
|
||||
To access `vmalert` via `vmselect`
|
||||
use link [http://localhost:8481/select/0/prometheus/vmalert](http://localhost:8481/select/0/prometheus/vmalert/).
|
||||
To access `vmalert` use link [http://localhost:8481/select/0/prometheus/vmalert](http://localhost:8481/select/0/prometheus/vmalert/).
|
||||
|
||||
To access [vmui](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#vmui)
|
||||
use link [http://localhost:8481/select/0/prometheus/vmui](http://localhost:8481/select/0/prometheus/vmui).
|
||||
|
||||
## vmagent
|
||||
|
||||
vmagent is used for scraping and pushing timeseries to
|
||||
VictoriaMetrics instance. It accepts Prometheus-compatible
|
||||
configuration `prometheus.yml` with listed targets for scraping.
|
||||
vmagent is used for scraping and pushing time series to the VictoriaMetrics instance.
|
||||
It accepts Prometheus-compatible configuration [prometheus.yml](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/prometheus.yml)
|
||||
with listed targets for scraping.
|
||||
|
||||
[Web interface link](http://localhost:8429/).
|
||||
|
||||
## vmalert
|
||||
|
||||
vmalert evaluates alerting rules (`alerts.yml`) to track VictoriaMetrics
|
||||
health state. It is connected with AlertManager for firing alerts,
|
||||
vmalert evaluates alerting rules [alerts.yml](https://github.com/VictoriaMetrics/VictoriaMetrics/blob/master/deployment/docker/alerts.yml)
|
||||
to track VictoriaMetrics health state. It is connected with AlertManager for firing alerts,
|
||||
and with VictoriaMetrics for executing queries and storing alert's state.
|
||||
|
||||
[Web interface link](http://localhost:8880/).
|
||||
|
|
|
@ -18,8 +18,8 @@ groups:
|
|||
ignoring(type) rate(vm_deduplicated_samples_total{type="merge"}[1d])
|
||||
)
|
||||
* scalar(
|
||||
sum(vm_data_size_bytes{type!="indexdb"}) /
|
||||
sum(vm_rows{type!="indexdb"})
|
||||
sum(vm_data_size_bytes{type!~"indexdb.*"}) /
|
||||
sum(vm_rows{type!~"indexdb.*"})
|
||||
)
|
||||
) < 3 * 24 * 3600 > 0
|
||||
for: 30m
|
||||
|
@ -43,7 +43,7 @@ groups:
|
|||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
dashboard: http://localhost:3000/d/oS7Bi_0Wz?viewPanel=110&var-instance={{ $labels.instance }}"
|
||||
dashboard: http://localhost:3000/d/oS7Bi_0Wz?viewPanel=200&var-instance={{ $labels.instance }}"
|
||||
summary: "Instance {{ $labels.instance }} will run out of disk space soon"
|
||||
description: "Disk utilisation on instance {{ $labels.instance }} is more than 80%.\n
|
||||
Having less than 20% of free disk space could cripple merges processes and overall performance.
|
||||
|
|
|
@ -18,8 +18,8 @@ groups:
|
|||
ignoring(type) rate(vm_deduplicated_samples_total{type="merge"}[1d])
|
||||
)
|
||||
* scalar(
|
||||
sum(vm_data_size_bytes{type!="indexdb"}) /
|
||||
sum(vm_rows{type!="indexdb"})
|
||||
sum(vm_data_size_bytes{type!~"indexdb.*"}) /
|
||||
sum(vm_rows{type!~"indexdb.*"})
|
||||
)
|
||||
) < 3 * 24 * 3600 > 0
|
||||
for: 30m
|
||||
|
|
|
@ -15,14 +15,51 @@ The following tip changes can be tested by building VictoriaMetrics components f
|
|||
|
||||
## tip
|
||||
|
||||
**Update note 1:** this release drops support for direct upgrade from VictoriaMetrics versions prior to [v1.28.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.28.0). Please upgrade to `v1.84.0`, wait until the `finished round 2 of background conversion` line is emitted to the log by single-node VictoriaMetrics or by `vmstorage`, and then upgrade to newer releases.
|
||||
|
||||
**Update note 2:** this release splits `type="indexdb"` metrics into `type="indexdb/inmemory"` and `type="indexdb/file"` metrics. This may break old dashboards and alerting rules, which contain [label filter](https://docs.victoriametrics.com/keyConcepts.html#filtering) on `{type="indexdb"}`. Such label filter must be substituted with `{type=~"indexdb.*"}`, so it matches `indexdb` from the previous releases and `indexdb/inmemory` + `indexdb/file` from new releases. It is recommended upgrading to the latest available dashboards and alerting rules mentioned in [these docs](https://docs.victoriametrics.com/#monitoring), since they already contain fixed label filters.
|
||||
|
||||
* FEATURE: add `-inmemoryDataFlushInterval` command-line flag, which can be used for controlling the frequency of in-memory data flush to disk. The data flush frequency can be reduced when VictoriaMetrics stores data to low-end flash device with limited number of write cycles (for example, on Raspberry PI). See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3337).
|
||||
* FEATURE: expose additional metrics for `indexdb` and `storage` parts stored in memory and for `indexdb` parts stored in files (see [storage docs](https://docs.victoriametrics.com/#storage) for technical details):
|
||||
* `vm_active_merges{type="storage/inmemory"}` - active merges for in-memory `storage` parts
|
||||
* `vm_active_merges{type="indexdb/inmemory"}` - active merges for in-memory `indexdb` parts
|
||||
* `vm_active_merges{type="indexdb/file"}` - active merges for file-based `indexdb` parts
|
||||
* `vm_merges_total{type="storage/inmemory"}` - the total merges for in-memory `storage` parts
|
||||
* `vm_merges_total{type="indexdb/inmemory"}` - the total merges for in-memory `indexdb` parts
|
||||
* `vm_merges_total{type="indexdb/file"}` - the total merges for file-based `indexdb` parts
|
||||
* `vm_rows_merged_total{type="storage/inmemory"}` - the total rows merged for in-memory `storage` parts
|
||||
* `vm_rows_merged_total{type="indexdb/inmemory"}` - the total rows merged for in-memory `indexdb` parts
|
||||
* `vm_rows_merged_total{type="indexdb/file"}` - the total rows merged for file-based `indexdb` parts
|
||||
* `vm_rows_deleted_total{type="storage/inmemory"}` - the total rows deleted for in-memory `storage` parts
|
||||
* `vm_assisted_merges_total{type="storage/inmemory"}` - the total number of assisted merges for in-memory `storage` parts
|
||||
* `vm_assisted_merges_total{type="indexdb/inmemory"}` - the total number of assisted merges for in-memory `indexdb` parts
|
||||
* `vm_parts{type="storage/inmemory"}` - the total number of in-memory `storage` parts
|
||||
* `vm_parts{type="indexdb/inmemory"}` - the total number of in-memory `indexdb` parts
|
||||
* `vm_parts{type="indexdb/file"}` - the total number of file-based `indexdb` parts
|
||||
* `vm_blocks{type="storage/inmemory"}` - the total number of in-memory `storage` blocks
|
||||
* `vm_blocks{type="indexdb/inmemory"}` - the total number of in-memory `indexdb` blocks
|
||||
* `vm_blocks{type="indexdb/file"}` - the total number of file-based `indexdb` blocks
|
||||
* `vm_data_size_bytes{type="storage/inmemory"}` - the total size of in-memory `storage` blocks
|
||||
* `vm_data_size_bytes{type="indexdb/inmemory"}` - the total size of in-memory `indexdb` blocks
|
||||
* `vm_data_size_bytes{type="indexdb/file"}` - the total size of file-based `indexdb` blocks
|
||||
* `vm_rows{type="storage/inmemory"}` - the total number of in-memory `storage` rows
|
||||
* `vm_rows{type="indexdb/inmemory"}` - the total number of in-memory `indexdb` rows
|
||||
* `vm_rows{type="indexdb/file"}` - the total number of file-based `indexdb` rows
|
||||
* FEATURE: [DataDog parser](https://docs.victoriametrics.com/#how-to-send-data-from-datadog-agent): add `device` tag when the `device` field is present in the `series` object of the input request. Thanks to @PerGon for the provided [pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/3431).
|
||||
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): improve [service discovery](https://docs.victoriametrics.com/sd_configs.html) performance when discovering big number of targets (10K and more).
|
||||
* FEATURE: [vmagent](https://docs.victoriametrics.com/vmagent.html): add `exported_` prefix to metric names exported by scrape targets if these metric names clash with [automatically generated metrics](https://docs.victoriametrics.com/vmagent.html#automatically-generated-metrics) such as `up`, `scrape_samples_scraped`, etc. This prevents from corruption of automatically generated metrics. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3406).
|
||||
* FEATURE: [VictoriaMetrics cluster](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html): improve error message when the requested path cannot be properly parsed, so users could identify the issue and properly fix the path. Now the error message links to [url format docs](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html#url-format). See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3402).
|
||||
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): add ability to copy data from sources via Prometheus `remote_read` protocol. See [these docs](https://docs.victoriametrics.com/vmctl.html#migrating-data-by-remote-read-protocol). The related issues: [one](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3132) and [two](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/1101).
|
||||
* FEATURE: [vmalert](https://docs.victoriametrics.com/vmalert.html): add `-remoteWrite.sendTimeout` command-line flag, which allows configuring timeout for sending data to `-remoteWrite.url`. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3408).
|
||||
* FEATURE: [vmctl](https://docs.victoriametrics.com/vmctl.html): add ability to migrate data between VictoriaMetrics clusters with automatic tenants discovery. See [these docs](https://docs.victoriametrics.com/vmctl.html#cluster-to-cluster-migration-mode) and [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2930).
|
||||
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): allow changing timezones for the requested data. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3075).
|
||||
* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): provide fast path for hiding results for all the queries except the given one by clicking `eye` icon with `ctrl` key pressed. See [this feature request](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3446).
|
||||
* FEATURE: [MetricsQL](https://docs.victoriametrics.com/MetricsQL.html): add `range_trim_spikes(phi, q)` function for trimming `phi` percent of the largest spikes per each time series returned by `q`. See [these docs](https://docs.victoriametrics.com/MetricsQL.html#range_trim_spikes).
|
||||
|
||||
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly pass HTTP headers during the alert state restore procedure. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3418).
|
||||
* BUGFIX: [vmalert](https://docs.victoriametrics.com/vmalert.html): properly specify rule evaluation step during the [replay mode](https://docs.victoriametrics.com/vmalert.html#rules-backfilling). The `step` value was previously overridden by the `-datasource.queryStep` command-line flag.
|
||||
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): properly put multi-line queries in the url, so they can be copy-n-pasted and opened without issues in a new browser tab. Previously the url for a multi-line query couldn't be opened. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3444).
|
||||
* BUGFIX: [vmui](https://docs.victoriametrics.com/#vmui): correctly handle `up` and `down` keypresses when editing multi-line queries. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/3445).
|
||||
|
||||
|
||||
## [v1.84.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.84.0)
|
||||
|
|
|
@ -1247,6 +1247,11 @@ per each time series returned by `q` on the selected time range.
|
|||
|
||||
`range_sum(q)` is a [transform function](#transform-functions), which calculates the sum of points per each time series returned by `q`.
|
||||
|
||||
#### range_trim_spikes
|
||||
|
||||
`range_trim_spikes(phi, q)` is a [transform function](#transform-functions), which drops `phi` percent of the biggest spikes from time series returned by `q`.
|
||||
The `phi` must be in the range `[0..1]`, where `0` means `0%` and `1` means `100%`.
|
||||
|
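For example, `range_trim_spikes(0.1, process_resident_memory_bytes)` would drop 10% of the biggest spikes from every series returned by the `process_resident_memory_bytes` selector (an illustrative metric name; any series selector works).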
||||
#### remove_resets
|
||||
|
||||
`remove_resets(q)` is a [transform function](#transform-functions), which removes counter resets from time series returned by `q`.
|
||||
|
|
|
@ -276,7 +276,7 @@ It also provides the following features:
|
|||
- [query tracer](#query-tracing)
|
||||
- [top queries explorer](#top-queries)
|
||||
|
||||
Graphs in vmui support scrolling and zooming:
|
||||
Graphs in `vmui` support scrolling and zooming:
|
||||
|
||||
* Select the needed time range on the graph in order to zoom in into the selected time range. Hold `ctrl` (or `cmd` on MacOS) and scroll down in order to zoom out.
|
||||
* Hold `ctrl` (or `cmd` on MacOS) and scroll up in order to zoom in the area under cursor.
|
||||
|
@ -294,6 +294,8 @@ VMUI allows investigating correlations between multiple queries on the same grap
|
|||
enter an additional query in the newly appeared input field and press `Enter`.
|
||||
Results for all the queries are displayed simultaneously on the same graph.
|
||||
Graphs for a particular query can be temporarily hidden by clicking the `eye` icon on the right side of the input field.
|
||||
When the `eye` icon is clicked while holding the `ctrl` key, query results for all the other queries become hidden,
|
||||
leaving only the results for the current query visible.
|
||||
|
||||
See the [example VMUI at VictoriaMetrics playground](https://play.victoriametrics.com/select/accounting/1/6a716b0f-38bc-4856-90ce-448fd713e3fe/prometheus/graph/?g0.expr=100%20*%20sum(rate(process_cpu_seconds_total))%20by%20(job)&g0.range_input=1d).
|
||||
|
||||
|
@ -1364,18 +1366,50 @@ It is recommended passing different `-promscrape.cluster.name` values to HA pair
|
|||
|
||||
## Storage
|
||||
|
||||
VictoriaMetrics stores time series data in [MergeTree](https://en.wikipedia.org/wiki/Log-structured_merge-tree)-like
|
||||
data structures. On insert, VictoriaMetrics accumulates up to 1s of data and dumps it on disk to
|
||||
`<-storageDataPath>/data/small/YYYY_MM/` subdirectory forming a `part` with the following
|
||||
name pattern: `rowsCount_blocksCount_minTimestamp_maxTimestamp`. Each part consists of two "columns":
|
||||
values and timestamps. These are sorted and compressed raw time series values. Additionally, part contains
|
||||
index files for searching for specific series in the values and timestamps files.
|
||||
VictoriaMetrics buffers the ingested data in memory for up to a second. Then the buffered data is written to in-memory `parts`,
|
||||
which can be searched during queries. The in-memory `parts` are periodically persisted to disk, so they could survive unclean shutdown
|
||||
such as out of memory crash, hardware power loss or `SIGKILL` signal. The interval for flushing the in-memory data to disk
|
||||
can be configured with the `-inmemoryDataFlushInterval` command-line flag (note that too short flush interval may significantly increase disk IO).
|
||||
|
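For example, `-inmemoryDataFlushInterval=1h` (an illustrative value) would keep in-memory parts for up to an hour before persisting them, reducing write load on flash-based storage at the cost of a bigger window of data that may be lost on unclean shutdown.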
||||
`Parts` are periodically merged into the bigger parts. The resulting `part` is constructed
|
||||
under `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` subdirectory.
|
||||
When the resulting `part` is complete, it is atomically moved from the `tmp`
|
||||
to its own subdirectory, while the source parts are atomically removed. The end result is that the source
|
||||
parts are substituted by a single resulting bigger `part` in the `<-storageDataPath>/data/{small,big}/YYYY_MM/` directory.
|
||||
In-memory parts are persisted to disk into `part` directories under the `<-storageDataPath>/data/small/YYYY_MM/` folder,
|
||||
where `YYYY_MM` is the month partition for the stored data. For example, `2022_11` is the partition for `parts`
|
||||
with [raw samples](https://docs.victoriametrics.com/keyConcepts.html#raw-samples) from `November 2022`.
|
||||
|
||||
The `part` directory has the following name pattern: `rowsCount_blocksCount_minTimestamp_maxTimestamp`, where:
|
||||
|
||||
- `rowsCount` - the number of [raw samples](https://docs.victoriametrics.com/keyConcepts.html#raw-samples) stored in the part
|
||||
- `blocksCount` - the number of blocks stored in the part (see details about blocks below)
|
||||
- `minTimestamp` and `maxTimestamp` - minimum and maximum timestamps across raw samples stored in the part
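
For illustration, the Go sketch below decomposes such a directory name into its four fields. This is not code from the repository; the `partInfo` type is made up here, and timestamps are assumed to be Unix milliseconds:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// partInfo holds the fields encoded in a part directory name.
type partInfo struct {
	rowsCount    uint64
	blocksCount  uint64
	minTimestamp int64
	maxTimestamp int64
}

// parsePartName splits a name like "6_1_1667265854447_1667265875437"
// into its four underscore-separated components.
func parsePartName(name string) (*partInfo, error) {
	fields := strings.Split(name, "_")
	if len(fields) != 4 {
		return nil, fmt.Errorf("unexpected part name %q; want 4 fields", name)
	}
	rows, err := strconv.ParseUint(fields[0], 10, 64)
	if err != nil {
		return nil, err
	}
	blocks, err := strconv.ParseUint(fields[1], 10, 64)
	if err != nil {
		return nil, err
	}
	minTs, err := strconv.ParseInt(fields[2], 10, 64)
	if err != nil {
		return nil, err
	}
	maxTs, err := strconv.ParseInt(fields[3], 10, 64)
	if err != nil {
		return nil, err
	}
	return &partInfo{rows, blocks, minTs, maxTs}, nil
}

func main() {
	info, err := parsePartName("6_1_1667265854447_1667265875437")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", info)
}
```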

Each `part` consists of `blocks` sorted by internal time series id (aka `TSID`).
Each `block` contains up to 8K [raw samples](https://docs.victoriametrics.com/keyConcepts.html#raw-samples),
which belong to a single [time series](https://docs.victoriametrics.com/keyConcepts.html#time-series).
Raw samples in each block are sorted by `timestamp`. Blocks for the same time series are sorted
by the `timestamp` of the first sample. Timestamps and values for all the blocks
are stored in [compressed form](https://faun.pub/victoriametrics-achieving-better-compression-for-time-series-data-than-gorilla-317bc1f95932)
in separate files under the `part` directory - `timestamps.bin` and `values.bin`.
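
Since a block holds at most 8K raw samples of one series, the minimum number of blocks a part needs for a given series follows directly. A trivial Go sketch of that arithmetic (the constant name here is hypothetical):

```go
package main

import "fmt"

// maxRowsPerBlock mirrors the 8K raw-samples-per-block limit mentioned above.
const maxRowsPerBlock = 8 * 1024

// minBlockCount returns the minimum number of blocks needed to store
// n raw samples belonging to a single time series.
func minBlockCount(n int) int {
	return (n + maxRowsPerBlock - 1) / maxRowsPerBlock
}

func main() {
	fmt.Println(minBlockCount(100000)) // 13 blocks for 100K samples
}
```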

The `part` directory also contains `index.bin` and `metaindex.bin` files - these files contain an index
for fast lookups of blocks that belong to the given `TSID` and cover the given time range.

`Parts` are periodically merged into bigger parts in the background. The background merge provides the following benefits:

* keeping the number of data files under control, so they don't exceed limits on open files
* improved data compression, since bigger parts are usually compressed better than smaller parts
* improved query speed, since queries over a smaller number of parts are executed faster
* various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
  and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge

Newly added `parts` either successfully appear in the storage or fail to appear.
The newly added `parts` are created in a temporary directory under the `<-storageDataPath>/data/{small,big}/YYYY_MM/tmp` folder.
When the newly added `part` is fully written and [fsynced](https://man7.org/linux/man-pages/man2/fsync.2.html)
to the temporary directory, it is atomically moved to the storage directory.
Thanks to this algorithm, the storage never contains partially created parts, even if a hardware power-off
occurs in the middle of writing the `part` to disk - such incompletely written `parts`
are automatically deleted on the next VictoriaMetrics start. A sketch of this write pattern follows below.
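
The write-fsync-rename pattern described above can be sketched in Go roughly as follows. This is an illustrative simplification under assumed names, not the actual VictoriaMetrics code; a fully robust version would also fsync the parent directories:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writePartAtomically writes data into a temporary directory, fsyncs it,
// and then moves it into the destination directory with an atomic rename.
func writePartAtomically(tmpDir, dstDir, name string, data []byte) error {
	tmpPath := filepath.Join(tmpDir, name)
	f, err := os.Create(tmpPath)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		_ = f.Close()
		return err
	}
	// Flush the file contents to stable storage before the rename.
	if err := f.Sync(); err != nil {
		_ = f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	// rename(2) is atomic within a single filesystem: readers observe either
	// the old state or the fully written file, never a partially written one.
	return os.Rename(tmpPath, filepath.Join(dstDir, name))
}

func main() {
	fmt.Println(writePartAtomically(os.TempDir(), ".", "example.bin", []byte("data")))
}
```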

The same applies to the merge process - `parts` are either fully merged into a new `part` or fail to merge,
leaving the source `parts` untouched.

VictoriaMetrics doesn't merge parts if their combined size exceeds the available free disk space.
This prevents potential out-of-disk-space errors during merges.

@@ -1384,24 +1418,10 @@ This increases overhead during data querying, since VictoriaMetrics needs to rea
a bigger number of parts per request. That's why it is recommended to have at least 20%
of free disk space under the directory pointed to by the `-storageDataPath` command-line flag.

Information about the merging process is available in [single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
and [clustered VictoriaMetrics](https://grafana.com/grafana/dashboards/11176) Grafana dashboards.
Information about the merging process is available in [the dashboard for single-node VictoriaMetrics](https://grafana.com/dashboards/10229)
and [the dashboard for VictoriaMetrics cluster](https://grafana.com/grafana/dashboards/11176).
See more details in [monitoring docs](#monitoring).

The `merge` process improves the compression rate and keeps the number of `parts` on disk relatively low.
Benefits of doing the merge process are the following:

* it improves query performance, since a lower number of `parts` is inspected with each query
* it reduces the number of data files, since each `part` contains a fixed number of files
* various background maintenance tasks such as [de-duplication](#deduplication), [downsampling](#downsampling)
  and [freeing up disk space for the deleted time series](#how-to-delete-time-series) are performed during the merge.

Newly added `parts` either appear in the storage or fail to appear.
Storage never contains partially created parts. The same applies to the merge process - `parts` are either fully
merged into a new `part` or fail to merge. MergeTree doesn't contain partially merged `parts`.
`Part` contents in MergeTree never change. Parts are immutable. They may only be deleted after the merge
to a bigger `part` or when the `part` contents go outside the configured `-retentionPeriod`.

See [this article](https://valyala.medium.com/how-victoriametrics-makes-instant-snapshots-for-multi-terabyte-time-series-data-e1f3fb0e0282) for more details.

See also [how to work with snapshots](#how-to-work-with-snapshots).

@@ -1724,10 +1744,11 @@ and [cardinality explorer docs](#cardinality-explorer).

* VictoriaMetrics buffers incoming data in memory for up to a few seconds before flushing it to persistent storage.
  This may lead to the following "issues":
  * Data becomes available for querying in a few seconds after inserting. It is possible to flush in-memory buffers to persistent storage
  * Data becomes available for querying in a few seconds after inserting. It is possible to flush in-memory buffers to searchable parts
    by requesting the `/internal/force_flush` http handler (see the example after this list). This handler is mostly needed for testing and debugging purposes.
  * The last few seconds of inserted data may be lost on unclean shutdown (i.e. OOM, `kill -9` or hardware reset).
    See [this article for technical details](https://valyala.medium.com/wal-usage-looks-broken-in-modern-time-series-databases-b62a627ab704).
    The `-inmemoryDataFlushInterval` command-line flag allows controlling the frequency of in-memory data flushes to persistent storage.
    See [storage docs](#storage) and [this article](https://valyala.medium.com/wal-usage-looks-broken-in-modern-time-series-databases-b62a627ab704) for more details.

* If VictoriaMetrics works slowly and eats more than a CPU core per 100K ingested data points per second,
  then it is likely you have too many [active time series](https://docs.victoriametrics.com/FAQ.html#what-is-an-active-time-series) for the current amount of RAM.
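
For example, the flush mentioned in the first item above can be triggered manually with a plain HTTP request (assuming the default `-httpListenAddr` of `:8428`):

```console
curl http://localhost:8428/internal/force_flush
```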

@@ -2134,6 +2155,8 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
    Uses '{measurement}' instead of '{measurement}{separator}{field_name}' for metric name if InfluxDB line contains only a single field
-influxTrimTimestamp duration
    Trim timestamps for InfluxDB line protocol data to this duration. Minimum practical duration is 1ms. Higher duration (e.g. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
-inmemoryDataFlushInterval duration
    The interval for guaranteed saving of in-memory data to disk. The saved data survives unclean shutdown such as OOM crash, hardware reset, SIGKILL, etc. Bigger intervals may help increase the lifetime of flash storage with limited write cycles (e.g. Raspberry Pi). Smaller intervals increase disk IO load. Minimum supported value is 1s (default 5s)
-insert.maxQueueDuration duration
    The maximum duration for waiting in the queue for insert requests due to -maxConcurrentInserts (default 1m0s)
-logNewSeries

@@ -837,6 +837,80 @@ Total: 16 B ↗ Speed: 186.32 KiB p/s
2022/08/30 19:48:24 Total time: 12.680582ms
```

#### Cluster-to-cluster migration mode

Using cluster-to-cluster migration mode helps to migrate all tenants' data in a single `vmctl` run.

Cluster-to-cluster mode uses the `/admin/tenants` endpoint (available starting from [v1.84.0](https://docs.victoriametrics.com/CHANGELOG.html#v1840)) to discover the list of tenants in the source cluster.

To use this mode you need to set the `--vm-intercluster` flag to `true`, the `--vm-native-src-addr` flag to `http://vmselect:8481/` and the `--vm-native-dst-addr` value to `http://vminsert:8480/`:

```console
./bin/vmctl vm-native --vm-intercluster=true --vm-native-src-addr=http://localhost:8481/ --vm-native-dst-addr=http://172.17.0.3:8480/
VictoriaMetrics Native import mode
2022/12/05 21:20:06 Discovered tenants: [123:1 12812919:1 1289198:1 1289:1283 12:1 1:0 1:1 1:1231231 1:1271727 1:12819 1:281 812891298:1]
2022/12/05 21:20:06 Initing export pipe from "http://localhost:8481/select/123:1/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/123:1/prometheus/api/v1/import/native":
Total: 61.13 MiB ↖ Speed: 2.05 MiB p/s
Total: 61.13 MiB ↗ Speed: 2.30 MiB p/s
2022/12/05 21:20:33 Initing export pipe from "http://localhost:8481/select/12812919:1/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/12812919:1/prometheus/api/v1/import/native":
Total: 43.14 MiB ↘ Speed: 1.86 MiB p/s
Total: 43.14 MiB ↙ Speed: 2.36 MiB p/s
2022/12/05 21:20:51 Initing export pipe from "http://localhost:8481/select/1289198:1/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1289198:1/prometheus/api/v1/import/native":
Total: 16.64 MiB ↗ Speed: 2.66 MiB p/s
Total: 16.64 MiB ↘ Speed: 2.19 MiB p/s
2022/12/05 21:20:59 Initing export pipe from "http://localhost:8481/select/1289:1283/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1289:1283/prometheus/api/v1/import/native":
Total: 43.33 MiB ↙ Speed: 1.94 MiB p/s
Total: 43.33 MiB ↖ Speed: 2.35 MiB p/s
2022/12/05 21:21:18 Initing export pipe from "http://localhost:8481/select/12:1/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/12:1/prometheus/api/v1/import/native":
Total: 63.78 MiB ↙ Speed: 1.96 MiB p/s
Total: 63.78 MiB ↖ Speed: 2.28 MiB p/s
2022/12/05 21:21:46 Initing export pipe from "http://localhost:8481/select/1:0/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:0/prometheus/api/v1/import/native":
2022/12/05 21:21:46 Import finished!
Total: 330 B ↗ Speed: 3.53 MiB p/s
2022/12/05 21:21:46 Initing export pipe from "http://localhost:8481/select/1:1/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:1/prometheus/api/v1/import/native":
Total: 63.81 MiB ↙ Speed: 1.96 MiB p/s
Total: 63.81 MiB ↖ Speed: 2.28 MiB p/s
2022/12/05 21:22:14 Initing export pipe from "http://localhost:8481/select/1:1231231/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:1231231/prometheus/api/v1/import/native":
Total: 63.84 MiB ↙ Speed: 1.93 MiB p/s
Total: 63.84 MiB ↖ Speed: 2.29 MiB p/s
2022/12/05 21:22:42 Initing export pipe from "http://localhost:8481/select/1:1271727/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:1271727/prometheus/api/v1/import/native":
Total: 54.37 MiB ↘ Speed: 1.90 MiB p/s
Total: 54.37 MiB ↙ Speed: 2.37 MiB p/s
2022/12/05 21:23:05 Initing export pipe from "http://localhost:8481/select/1:12819/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:12819/prometheus/api/v1/import/native":
Total: 17.01 MiB ↙ Speed: 1.75 MiB p/s
Total: 17.01 MiB ↖ Speed: 2.15 MiB p/s
2022/12/05 21:23:13 Initing export pipe from "http://localhost:8481/select/1:281/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/1:281/prometheus/api/v1/import/native":
Total: 63.89 MiB ↘ Speed: 1.90 MiB p/s
Total: 63.89 MiB ↙ Speed: 2.29 MiB p/s
2022/12/05 21:23:42 Initing export pipe from "http://localhost:8481/select/812891298:1/prometheus/api/v1/export/native" with filters:
filter: match[]={__name__!=""}
Initing import process to "http://172.17.0.3:8480/insert/812891298:1/prometheus/api/v1/import/native":
Total: 63.84 MiB ↖ Speed: 1.99 MiB p/s
Total: 63.84 MiB ↗ Speed: 2.26 MiB p/s
2022/12/05 21:24:10 Total time: 4m4.1466565s
```

## Verifying exported blocks from VictoriaMetrics

76 go.mod

@@ -3,7 +3,7 @@ module github.com/VictoriaMetrics/VictoriaMetrics

go 1.19

require (
    cloud.google.com/go/storage v1.28.0
    cloud.google.com/go/storage v1.28.1
    github.com/Azure/azure-sdk-for-go/sdk/azcore v1.2.0
    github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.5.1
    github.com/VictoriaMetrics/fastcache v1.12.0

@@ -12,12 +12,12 @@ require (
    // like https://github.com/valyala/fasthttp/commit/996610f021ff45fdc98c2ce7884d5fa4e7f9199b
    github.com/VictoriaMetrics/fasthttp v1.1.0
    github.com/VictoriaMetrics/metrics v1.23.0
    github.com/VictoriaMetrics/metricsql v0.49.1
    github.com/aws/aws-sdk-go-v2 v1.17.1
    github.com/aws/aws-sdk-go-v2/config v1.18.3
    github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.42
    github.com/aws/aws-sdk-go-v2/service/s3 v1.29.4
    github.com/cespare/xxhash/v2 v2.1.2
    github.com/VictoriaMetrics/metricsql v0.50.0
    github.com/aws/aws-sdk-go-v2 v1.17.2
    github.com/aws/aws-sdk-go-v2/config v1.18.4
    github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.43
    github.com/aws/aws-sdk-go-v2/service/s3 v1.29.5
    github.com/cespare/xxhash/v2 v2.2.0
    github.com/cheggaaa/pb/v3 v3.1.0
    github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
    github.com/fatih/color v1.13.0 // indirect

@@ -31,44 +31,44 @@ require (
    github.com/mattn/go-runewidth v0.0.14 // indirect
    github.com/oklog/ulid v1.3.1
    github.com/prometheus/common v0.37.0 // indirect
    github.com/prometheus/prometheus v0.40.4
    github.com/urfave/cli/v2 v2.23.5
    github.com/prometheus/prometheus v0.40.5
    github.com/urfave/cli/v2 v2.23.6
    github.com/valyala/fastjson v1.6.3
    github.com/valyala/fastrand v1.1.0
    github.com/valyala/fasttemplate v1.2.2
    github.com/valyala/gozstd v1.17.0
    github.com/valyala/quicktemplate v1.7.0
    golang.org/x/net v0.2.0
    golang.org/x/net v0.3.0
    golang.org/x/oauth2 v0.2.0
    golang.org/x/sys v0.2.0
    golang.org/x/sys v0.3.0
    google.golang.org/api v0.103.0
    gopkg.in/yaml.v2 v2.4.0
)

require (
    cloud.google.com/go v0.107.0 // indirect
    cloud.google.com/go/compute v1.12.1 // indirect
    cloud.google.com/go/compute/metadata v0.2.1 // indirect
    cloud.google.com/go/iam v0.7.0 // indirect
    cloud.google.com/go/compute v1.14.0 // indirect
    cloud.google.com/go/compute/metadata v0.2.2 // indirect
    cloud.google.com/go/iam v0.8.0 // indirect
    github.com/Azure/azure-sdk-for-go/sdk/internal v1.1.1 // indirect
    github.com/VividCortex/ewma v1.2.0 // indirect
    github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
    github.com/aws/aws-sdk-go v1.44.149 // indirect
    github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.9 // indirect
    github.com/aws/aws-sdk-go-v2/credentials v1.13.3 // indirect
    github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.19 // indirect
    github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.25 // indirect
    github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.19 // indirect
    github.com/aws/aws-sdk-go-v2/internal/ini v1.3.26 // indirect
    github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.16 // indirect
    github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.10 // indirect
    github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.20 // indirect
    github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.19 // indirect
    github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.13.19 // indirect
    github.com/aws/aws-sdk-go-v2/service/sso v1.11.25 // indirect
    github.com/aws/aws-sdk-go-v2/service/ssooidc v1.13.8 // indirect
    github.com/aws/aws-sdk-go-v2/service/sts v1.17.5 // indirect
    github.com/aws/smithy-go v1.13.4 // indirect
    github.com/aws/aws-sdk-go v1.44.153 // indirect
    github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10 // indirect
    github.com/aws/aws-sdk-go-v2/credentials v1.13.4 // indirect
    github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.20 // indirect
    github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.26 // indirect
    github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.20 // indirect
    github.com/aws/aws-sdk-go-v2/internal/ini v1.3.27 // indirect
    github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.17 // indirect
    github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.11 // indirect
    github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.21 // indirect
    github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.20 // indirect
    github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.13.20 // indirect
    github.com/aws/aws-sdk-go-v2/service/sso v1.11.26 // indirect
    github.com/aws/aws-sdk-go-v2/service/ssooidc v1.13.9 // indirect
    github.com/aws/aws-sdk-go-v2/service/sts v1.17.6 // indirect
    github.com/aws/smithy-go v1.13.5 // indirect
    github.com/beorn7/perks v1.0.1 // indirect
    github.com/davecgh/go-spew v1.1.1 // indirect
    github.com/dennwc/varint v1.0.0 // indirect

@@ -101,19 +101,19 @@ require (
    github.com/valyala/histogram v1.2.0 // indirect
    github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
    go.opencensus.io v0.24.0 // indirect
    go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.4 // indirect
    go.opentelemetry.io/otel v1.11.1 // indirect
    go.opentelemetry.io/otel/metric v0.33.0 // indirect
    go.opentelemetry.io/otel/trace v1.11.1 // indirect
    go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.37.0 // indirect
    go.opentelemetry.io/otel v1.11.2 // indirect
    go.opentelemetry.io/otel/metric v0.34.0 // indirect
    go.opentelemetry.io/otel/trace v1.11.2 // indirect
    go.uber.org/atomic v1.10.0 // indirect
    go.uber.org/goleak v1.2.0 // indirect
    golang.org/x/exp v0.0.0-20221126150942-6ab00d035af9 // indirect
    golang.org/x/exp v0.0.0-20221205204356-47842c84f3db // indirect
    golang.org/x/sync v0.1.0 // indirect
    golang.org/x/text v0.4.0 // indirect
    golang.org/x/time v0.2.0 // indirect
    golang.org/x/text v0.5.0 // indirect
    golang.org/x/time v0.3.0 // indirect
    golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
    google.golang.org/appengine v1.6.7 // indirect
    google.golang.org/genproto v0.0.0-20221118155620-16455021b5e6 // indirect
    google.golang.org/genproto v0.0.0-20221205194025-8222ab48f5fc // indirect
    google.golang.org/grpc v1.51.0 // indirect
    google.golang.org/protobuf v1.28.1 // indirect
    gopkg.in/yaml.v3 v3.0.1 // indirect

152 go.sum

@@ -21,14 +21,14 @@ cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvf
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/compute v1.12.1 h1:gKVJMEyqV5c/UnpzjjQbo3Rjvvqpr9B1DFSbJC4OXr0=
cloud.google.com/go/compute v1.12.1/go.mod h1:e8yNOBcBONZU1vJKCvCoDw/4JQsA0dpM4x/6PIIOocU=
cloud.google.com/go/compute/metadata v0.2.1 h1:efOwf5ymceDhK6PKMnnrTHP4pppY5L22mle96M1yP48=
cloud.google.com/go/compute/metadata v0.2.1/go.mod h1:jgHgmJd2RKBGzXqF5LR2EZMGxBkeanZ9wwa75XHJgOM=
cloud.google.com/go/compute v1.14.0 h1:hfm2+FfxVmnRlh6LpB7cg1ZNU+5edAHmW679JePztk0=
cloud.google.com/go/compute v1.14.0/go.mod h1:YfLtxrj9sU4Yxv+sXzZkyPjEyPBZfXHUvjxega5vAdo=
cloud.google.com/go/compute/metadata v0.2.2 h1:aWKAjYaBaOSrpKl57+jnS/3fJRQnxL7TvR/u1VVbt6k=
cloud.google.com/go/compute/metadata v0.2.2/go.mod h1:jgHgmJd2RKBGzXqF5LR2EZMGxBkeanZ9wwa75XHJgOM=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/iam v0.7.0 h1:k4MuwOsS7zGJJ+QfZ5vBK8SgHBAvYN/23BWsiihJ1vs=
cloud.google.com/go/iam v0.7.0/go.mod h1:H5Br8wRaDGNc8XP3keLc4unfUUZeyH3Sfl9XpQEYOeg=
cloud.google.com/go/iam v0.8.0 h1:E2osAkZzxI/+8pZcxVLcDtAQx/u+hZXVryUaYQ5O0Kk=
cloud.google.com/go/iam v0.8.0/go.mod h1:lga0/y3iH6CX7sYqypWJ33hf7kkfXJag67naqGESjkE=
cloud.google.com/go/longrunning v0.3.0 h1:NjljC+FYPV3uh5/OwWT6pVU+doBqMg2x/rZlE+CamDs=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=

@@ -39,8 +39,8 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.28.0 h1:DLrIZ6xkeZX6K70fU/boWx5INJumt6f+nwwWSHXzzGY=
cloud.google.com/go/storage v1.28.0/go.mod h1:qlgZML35PXA3zoEnIkiPLY4/TOkUleufRlu6qmcf7sI=
cloud.google.com/go/storage v1.28.1 h1:F5QDG5ChchaAVQhINh24U99OWHURqrW8OmQcGKXcbgI=
cloud.google.com/go/storage v1.28.1/go.mod h1:Qnisd4CqDdo6BGs2AD5LLnEsmSQ80wQ5ogcBBKhU86Y=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go v65.0.0+incompatible h1:HzKLt3kIwMm4KeJYTdx9EbjRYTySD/t8i1Ee/W5EGXw=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.2.0 h1:sVW/AFBTGyJxDaMYlq0ct3jUXTtj12tQ6zE2GZUgVQw=

@@ -71,8 +71,8 @@ github.com/VictoriaMetrics/fasthttp v1.1.0/go.mod h1:/7DMcogqd+aaD3G3Hg5kFgoFwlR
github.com/VictoriaMetrics/metrics v1.18.1/go.mod h1:ArjwVz7WpgpegX/JpB0zpNF2h2232kErkEnzH1sxMmA=
github.com/VictoriaMetrics/metrics v1.23.0 h1:WzfqyzCaxUZip+OBbg1+lV33WChDSu4ssYII3nxtpeA=
github.com/VictoriaMetrics/metrics v1.23.0/go.mod h1:rAr/llLpEnAdTehiNlUxKgnjcOuROSzpw0GvjpEbvFc=
github.com/VictoriaMetrics/metricsql v0.49.1 h1:9JAbpiZhlQnylclcf5xNtYRaBd5dr2CTPQ85RIoruuk=
github.com/VictoriaMetrics/metricsql v0.49.1/go.mod h1:6pP1ZeLVJHqJrHlF6Ij3gmpQIznSsgktEcZgsAWYel0=
github.com/VictoriaMetrics/metricsql v0.50.0 h1:MCBhjn1qlfMqPGP6HiR9JgmEw7oTRGm/O8YwSeoaI1E=
github.com/VictoriaMetrics/metricsql v0.50.0/go.mod h1:6pP1ZeLVJHqJrHlF6Ij3gmpQIznSsgktEcZgsAWYel0=
github.com/VividCortex/ewma v1.1.1/go.mod h1:2Tkkvm3sRDVXaiyucHiACn4cqf7DpdyLvmxzcbUokwA=
github.com/VividCortex/ewma v1.2.0 h1:f58SaIzcDXrSy3kWaHNvuJgJ3Nmz59Zji6XoJR/q1ow=
github.com/VividCortex/ewma v1.2.0/go.mod h1:nz4BbCtbLyFDeC9SUHbtcT5644juEuWfUAUnGx7j5l4=

@@ -89,54 +89,55 @@ github.com/andybalholm/brotli v1.0.2/go.mod h1:loMXtMfwqflxFJPmdbJO0a3KNoPuLBgiu
github.com/andybalholm/brotli v1.0.3/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
github.com/armon/go-metrics v0.3.10 h1:FR+drcQStOe+32sYyJYyZ7FIdgoGGBnwLl+flodp8Uo=
github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.44.149 h1:zTWaUTbSjgMHvwhaQ91s/6ER8wMb3mA8M1GCZFO9QIo=
github.com/aws/aws-sdk-go v1.44.149/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/aws/aws-sdk-go-v2 v1.17.1 h1:02c72fDJr87N8RAC2s3Qu0YuvMRZKNZJ9F+lAehCazk=
github.com/aws/aws-sdk-go-v2 v1.17.1/go.mod h1:JLnGeGONAyi2lWXI1p0PCIOIy333JMVK1U7Hf0aRFLw=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.9 h1:RKci2D7tMwpvGpDNZnGQw9wk6v7o/xSwFcUAuNPoB8k=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.9/go.mod h1:vCmV1q1VK8eoQJ5+aYE7PkK1K6v41qJ5pJdK3ggCDvg=
github.com/aws/aws-sdk-go-v2/config v1.18.3 h1:3kfBKcX3votFX84dm00U8RGA1sCCh3eRMOGzg5dCWfU=
github.com/aws/aws-sdk-go-v2/config v1.18.3/go.mod h1:BYdrbeCse3ZnOD5+2/VE/nATOK8fEUpBtmPMdKSyhMU=
github.com/aws/aws-sdk-go-v2/credentials v1.13.3 h1:ur+FHdp4NbVIv/49bUjBW+FE7e57HOo03ELodttmagk=
github.com/aws/aws-sdk-go-v2/credentials v1.13.3/go.mod h1:/rOMmqYBcFfNbRPU0iN9IgGqD5+V2yp3iWNmIlz0wI4=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.19 h1:E3PXZSI3F2bzyj6XxUXdTIfvp425HHhwKsFvmzBwHgs=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.19/go.mod h1:VihW95zQpeKQWVPGkwT+2+WJNQV8UXFfMTWdU6VErL8=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.42 h1:bxgBYvvBh+W1RnNYP4ROXEB8N+HSSucDszfE7Rb+kfU=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.42/go.mod h1:LHOsygMiW/14CkFxdXxvzKyMh3jbk/QfZVaDtCbLkl8=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.25 h1:nBO/RFxeq/IS5G9Of+ZrgucRciie2qpLy++3UGZ+q2E=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.25/go.mod h1:Zb29PYkf42vVYQY6pvSyJCJcFHlPIiY+YKdPtwnvMkY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.19 h1:oRHDrwCTVT8ZXi4sr9Ld+EXk7N/KGssOr2ygNeojEhw=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.19/go.mod h1:6Q0546uHDp421okhmmGfbxzq2hBqbXFNpi4k+Q1JnQA=
github.com/aws/aws-sdk-go-v2/internal/ini v1.3.26 h1:Mza+vlnZr+fPKFKRq/lKGVvM6B/8ZZmNdEopOwSQLms=
github.com/aws/aws-sdk-go-v2/internal/ini v1.3.26/go.mod h1:Y2OJ+P+MC1u1VKnavT+PshiEuGPyh/7DqxoDNij4/bg=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.16 h1:2EXB7dtGwRYIN3XQ9qwIW504DVbKIw3r89xQnonGdsQ=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.16/go.mod h1:XH+3h395e3WVdd6T2Z3mPxuI+x/HVtdqVOREkTiyubs=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.10 h1:dpiPHgmFstgkLG07KaYAewvuptq5kvo52xn7tVSrtrQ=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.10/go.mod h1:9cBNUHI2aW4ho0A5T87O294iPDuuUOSIEDjnd1Lq/z0=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.20 h1:KSvtm1+fPXE0swe9GPjc6msyrdTT0LB/BP8eLugL1FI=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.20/go.mod h1:Mp4XI/CkWGD79AQxZ5lIFlgvC0A+gl+4BmyG1F+SfNc=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.19 h1:GE25AWCdNUPh9AOJzI9KIJnja7IwUc1WyUqz/JTyJ/I=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.19/go.mod h1:02CP6iuYP+IVnBX5HULVdSAku/85eHB2Y9EsFhrkEwU=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.13.19 h1:piDBAaWkaxkkVV3xJJbTehXCZRXYs49kvpi/LG6LR2o=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.13.19/go.mod h1:BmQWRVkLTmyNzYPFAZgon53qKLWBNSvonugD1MrSWUs=
github.com/aws/aws-sdk-go-v2/service/s3 v1.29.4 h1:QgmmWifaYZZcpaw3y1+ccRlgH6jAvLm4K/MBGUc7cNM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.29.4/go.mod h1:/NHbqPRiwxSPVOB2Xr+StDEH+GWV/64WwnUjv4KYzV0=
github.com/aws/aws-sdk-go-v2/service/sso v1.11.25 h1:GFZitO48N/7EsFDt8fMa5iYdmWqkUDDB3Eje6z3kbG0=
github.com/aws/aws-sdk-go-v2/service/sso v1.11.25/go.mod h1:IARHuzTXmj1C0KS35vboR0FeJ89OkEy1M9mWbK2ifCI=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.13.8 h1:jcw6kKZrtNfBPJkaHrscDOZoe5gvi9wjudnxvozYFJo=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.13.8/go.mod h1:er2JHN+kBY6FcMfcBBKNGCT3CarImmdFzishsqBmSRI=
github.com/aws/aws-sdk-go-v2/service/sts v1.17.5 h1:60SJ4lhvn///8ygCzYy2l53bFW/Q15bVfyjyAWo6zuw=
github.com/aws/aws-sdk-go-v2/service/sts v1.17.5/go.mod h1:bXcN3koeVYiJcdDU89n3kCYILob7Y34AeLopUbZgLT4=
github.com/aws/smithy-go v1.13.4 h1:/RN2z1txIJWeXeOkzX+Hk/4Uuvv7dWtCjbmVJcrskyk=
github.com/aws/smithy-go v1.13.4/go.mod h1:Tg+OJXh4MB2R/uN61Ko2f6hTZwB/ZYGOtib8J3gBHzA=
github.com/aws/aws-sdk-go v1.44.153 h1:KfN5URb9O/Fk48xHrAinrPV2DzPcLa0cd9yo1ax5KGg=
github.com/aws/aws-sdk-go v1.44.153/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/aws/aws-sdk-go-v2 v1.17.2 h1:r0yRZInwiPBNpQ4aDy/Ssh3ROWsGtKDwar2JS8Lm+N8=
github.com/aws/aws-sdk-go-v2 v1.17.2/go.mod h1:uzbQtefpm44goOPmdKyAlXSNcwlRgF3ePWVW6EtJvvw=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10 h1:dK82zF6kkPeCo8J1e+tGx4JdvDIQzj7ygIoLg8WMuGs=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.4.10/go.mod h1:VeTZetY5KRJLuD/7fkQXMU6Mw7H5m/KP2J5Iy9osMno=
github.com/aws/aws-sdk-go-v2/config v1.18.4 h1:VZKhr3uAADXHStS/Gf9xSYVmmaluTUfkc0dcbPiDsKE=
github.com/aws/aws-sdk-go-v2/config v1.18.4/go.mod h1:EZxMPLSdGAZ3eAmkqXfYbRppZJTzFTkv8VyEzJhKko4=
github.com/aws/aws-sdk-go-v2/credentials v1.13.4 h1:nEbHIyJy7mCvQ/kzGG7VWHSBpRB4H6sJy3bWierWUtg=
github.com/aws/aws-sdk-go-v2/credentials v1.13.4/go.mod h1:/Cj5w9LRsNTLSwexsohwDME32OzJ6U81Zs33zr2ZWOM=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.20 h1:tpNOglTZ8kg9T38NpcGBxudqfUAwUzyUnLQ4XSd0CHE=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.20/go.mod h1:d9xFpWd3qYwdIXM0fvu7deD08vvdRXyc/ueV+0SqaWE=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.43 h1:+bkAMTd5OGyHu2nwNOangjEsP65fR0uhMbZJA52sZ64=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.11.43/go.mod h1:sS2tu0VEspKuY5eM1vQgy7P/hpZX8F62o6qsghZExWc=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.26 h1:5WU31cY7m0tG+AiaXuXGoMzo2GBQ1IixtWa8Yywsgco=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.26/go.mod h1:2E0LdbJW6lbeU4uxjum99GZzI0ZjDpAb0CoSCM0oeEY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.20 h1:WW0qSzDWoiWU2FS5DbKpxGilFVlCEJPwx4YtjdfI0Jw=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.20/go.mod h1:/+6lSiby8TBFpTVXZgKiN/rCfkYXEGvhlM4zCgPpt7w=
github.com/aws/aws-sdk-go-v2/internal/ini v1.3.27 h1:N2eKFw2S+JWRCtTt0IhIX7uoGGQciD4p6ba+SJv4WEU=
github.com/aws/aws-sdk-go-v2/internal/ini v1.3.27/go.mod h1:RdwFVc7PBYWY33fa2+8T1mSqQ7ZEK4ILpM0wfioDC3w=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.17 h1:5tXbMJ7Jq0iG65oiMg6tCLsHkSaO2xLXa2EmZ29vaTA=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.0.17/go.mod h1:twV0fKMQuqLY4klyFH56aXNq3AFiA5LO0/frTczEOFE=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.11 h1:y2+VQzC6Zh2ojtV2LoC0MNwHWc6qXv/j2vrQtlftkdA=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.9.11/go.mod h1:iV4q2hsqtNECrfmlXyord9u4zyuFEJX9eLgLpSPzWA8=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.21 h1:77b1GfaSuIok5yB/3HYbG+ypWvOJDQ2rVdq943D17R4=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.1.21/go.mod h1:sPOz31BVdqeeurKEuUpLNSve4tdCNPluE+070HNcEHI=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.20 h1:jlgyHbkZQAgAc7VIxJDmtouH8eNjOk2REVAQfVhdaiQ=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.20/go.mod h1:Xs52xaLBqDEKRcAfX/hgjmD3YQ7c/W+BEyfamlO/W2E=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.13.20 h1:4K6dbmR0mlp3o4Bo78PnpvzHtYAqEeVMguvEenpMGsI=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.13.20/go.mod h1:1XpDcReIEOHsjwNToDKhIAO3qwLo1BnfbtSqWJa8j7g=
github.com/aws/aws-sdk-go-v2/service/s3 v1.29.5 h1:nRSEQj1JergKTVc8RGkhZvOEGgcvo4fWpDPwGDeg2ok=
github.com/aws/aws-sdk-go-v2/service/s3 v1.29.5/go.mod h1:wcaJTmjKFDW0s+Se55HBNIds6ghdAGoDDw+SGUdrfAk=
github.com/aws/aws-sdk-go-v2/service/sso v1.11.26 h1:ActQgdTNQej/RuUJjB9uxYVLDOvRGtUreXF8L3c8wyg=
github.com/aws/aws-sdk-go-v2/service/sso v1.11.26/go.mod h1:uB9tV79ULEZUXc6Ob18A46KSQ0JDlrplPni9XW6Ot60=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.13.9 h1:wihKuqYUlA2T/Rx+yu2s6NDAns8B9DgnRooB1PVhY+Q=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.13.9/go.mod h1:2E/3D/mB8/r2J7nK42daoKP/ooCwbf0q1PznNc+DZTU=
github.com/aws/aws-sdk-go-v2/service/sts v1.17.6 h1:VQFOLQVL3BrKM/NLO/7FiS4vcp5bqK0mGMyk09xLoAY=
github.com/aws/aws-sdk-go-v2/service/sts v1.17.6/go.mod h1:Az3OXXYGyfNwQNsK/31L4R75qFYnO641RZGAoV3uH1c=
github.com/aws/smithy-go v1.13.5 h1:hgz0X/DX0dGqTYpGALqXJoRKRj5oQ7150i5FdTePzO8=
github.com/aws/smithy-go v1.13.5/go.mod h1:Tg+OJXh4MB2R/uN61Ko2f6hTZwB/ZYGOtib8J3gBHzA=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cheggaaa/pb/v3 v3.1.0 h1:3uouEsl32RL7gTiQsuaXD4Bzbfl5tGztXGUvXbs4O04=
github.com/cheggaaa/pb/v3 v3.1.0/go.mod h1:YjrevcBqadFDaGQKRdmZxTY42pXEqda48Ea3lt0K/BE=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=

@@ -402,8 +403,8 @@ github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
github.com/prometheus/prometheus v0.40.4 h1:6aLtQSvnhmC/uo5Tx910AQm3Fxq1nzaJA6uiYtsA6So=
github.com/prometheus/prometheus v0.40.4/go.mod h1:bxgdmtoSNLmmIVPGmeTJ3OiP67VmuY4yalE4ZP6L/j8=
github.com/prometheus/prometheus v0.40.5 h1:wmk5yNrQlkQ2OvZucMhUB4k78AVfG34szb1UtopS8Vc=
github.com/prometheus/prometheus v0.40.5/go.mod h1:bxgdmtoSNLmmIVPGmeTJ3OiP67VmuY4yalE4ZP6L/j8=
github.com/rivo/uniseg v0.1.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.3 h1:utMvzDsuh3suAEnhH0RdHmoPbU648o6CvXxTx4SBMOw=

@@ -429,8 +430,8 @@ github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/urfave/cli/v2 v2.23.5 h1:xbrU7tAYviSpqeR3X4nEFWUdB/uDZ6DE+HxmRU7Xtyw=
github.com/urfave/cli/v2 v2.23.5/go.mod h1:GHupkWPMM0M/sj1a2b4wUrWBPzazNrIjouW6fmdJLxc=
github.com/urfave/cli/v2 v2.23.6 h1:iWmtKD+prGo1nKUtLO0Wg4z9esfBM4rAV4QRLQiEmJ4=
github.com/urfave/cli/v2 v2.23.6/go.mod h1:GHupkWPMM0M/sj1a2b4wUrWBPzazNrIjouW6fmdJLxc=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.30.0/go.mod h1:2rsYD01CKFrjjsvFxx75KlEUNpWNBY9JWD3K/7o2Cus=

@@ -462,14 +463,14 @@ go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.4 h1:aUEBEdCa6iamGzg6fuYxDA8ThxvOG240mAvWDU+XLio=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.4/go.mod h1:l2MdsbKTocpPS5nQZscqTR9jd8u96VYZdcpF8Sye7mA=
go.opentelemetry.io/otel v1.11.1 h1:4WLLAmcfkmDk2ukNXJyq3/kiz/3UzCaYq6PskJsaou4=
go.opentelemetry.io/otel v1.11.1/go.mod h1:1nNhXBbWSD0nsL38H6btgnFN2k4i0sNLHNNMZMSbUGE=
go.opentelemetry.io/otel/metric v0.33.0 h1:xQAyl7uGEYvrLAiV/09iTJlp1pZnQ9Wl793qbVvED1E=
go.opentelemetry.io/otel/metric v0.33.0/go.mod h1:QlTYc+EnYNq/M2mNk1qDDMRLpqCOj2f/r5c7Fd5FYaI=
go.opentelemetry.io/otel/trace v1.11.1 h1:ofxdnzsNrGBYXbP7t7zpUK281+go5rF7dvdIZXF8gdQ=
go.opentelemetry.io/otel/trace v1.11.1/go.mod h1:f/Q9G7vzk5u91PhbmKbg1Qn0rzH1LJ4vbPHFGkTPtOk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.37.0 h1:yt2NKzK7Vyo6h0+X8BA4FpreZQTlVEIarnsBP/H5mzs=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.37.0/go.mod h1:+ARmXlUlc51J7sZeCBkBJNdHGySrdOzgzxp6VWRWM1U=
go.opentelemetry.io/otel v1.11.2 h1:YBZcQlsVekzFsFbjygXMOXSs6pialIZxcjfO/mBDmR0=
go.opentelemetry.io/otel v1.11.2/go.mod h1:7p4EUV+AqgdlNV9gL97IgUZiVR3yrFXYo53f9BM3tRI=
go.opentelemetry.io/otel/metric v0.34.0 h1:MCPoQxcg/26EuuJwpYN1mZTeCYAUGx8ABxfW07YkjP8=
go.opentelemetry.io/otel/metric v0.34.0/go.mod h1:ZFuI4yQGNCupurTXCwkeD/zHBt+C2bR7bw5JqUm/AP8=
go.opentelemetry.io/otel/trace v1.11.2 h1:Xf7hWSF2Glv0DE3MH7fBHvtpSBsjcBUe5MYAmZM/+y0=
go.opentelemetry.io/otel/trace v1.11.2/go.mod h1:4N+yC7QEz7TTsG9BSRLNAa63eg5E06ObSbKPmxQ/pKA=
go.uber.org/atomic v1.10.0 h1:9qC72Qh0+3MqyJbAn8YU5xVq1frD8bn3JtD2oXtafVQ=
go.uber.org/atomic v1.10.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/goleak v1.2.0 h1:xqgm/S+aQvhWFTtR0XK3Jvg7z8kGV8P4X14IzwN3Eqk=

@@ -493,8 +494,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20221126150942-6ab00d035af9 h1:yZNXmy+j/JpX19vZkVktWqAo7Gny4PBWYYK3zskGpx4=
golang.org/x/exp v0.0.0-20221126150942-6ab00d035af9/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
golang.org/x/exp v0.0.0-20221205204356-47842c84f3db h1:D/cFflL63o2KSLJIwjlcIt8PR064j/xsmdEJL/YvY/o=
golang.org/x/exp v0.0.0-20221205204356-47842c84f3db/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=

@@ -555,8 +556,8 @@ golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.2.0 h1:sZfSu1wtKLGlWI4ZZayP0ck9Y73K1ynO6gqzTdBVdPU=
golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.3.0 h1:VWL6FNY2bEEmsGVKabSlHu5Irp34xmMRoqb/9lF9lxk=
golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=

@@ -627,12 +628,12 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0 h1:ljd4t30dBnAvMZaQCevtY0xLLD0A+bRZXbgLMLU1F/A=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0 h1:w8ZOecv6NaNa/zC8944JTU3vz4u6Lagfk4RPQxv92NQ=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0 h1:z85xZCsEl7bi/KwbNADeBYoOP0++7W1ipu+aGnpwzRM=
golang.org/x/term v0.3.0 h1:qoo4akIqOcDME5bhc/NgxUdovd6BSS2uMsVjB56q1xI=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=

@@ -640,13 +641,14 @@ golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.5.0 h1:OLmvp0KP+FVG99Ct/qFiL/Fhk4zp4QQnZ7b2U+5piUM=
golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.2.0 h1:52I/1L54xyEQAYdtcSuxtiT84KGYTBGXwayxmIpNJhE=
golang.org/x/time v0.2.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=

@@ -752,8 +754,8 @@ google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7Fc
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20221118155620-16455021b5e6 h1:a2S6M0+660BgMNl++4JPlcAO/CjkqYItDEZwkoDQK7c=
google.golang.org/genproto v0.0.0-20221118155620-16455021b5e6/go.mod h1:rZS5c/ZVYMaOGBfO68GWtjOw/eLaZM1X6iVtgjZ+EWg=
google.golang.org/genproto v0.0.0-20221205194025-8222ab48f5fc h1:nUKKji0AarrQKh6XpFEpG3p1TNztxhe7C8TcUvDgXqw=
google.golang.org/genproto v0.0.0-20221205194025-8222ab48f5fc/go.mod h1:1dOng4TWOomJrDGhpXjfCD35wQC6jnC7HpRmOFRqEV0=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
|
@@ -45,7 +45,7 @@ func fsync(path string) error {
func AppendFiles(dst []string, dir string) ([]string, error) {
d, err := os.Open(dir)
if err != nil {
return nil, fmt.Errorf("cannot open %q: %w", dir, err)
return nil, fmt.Errorf("cannot open directory: %w", err)
}
dst, err = appendFilesInternal(dst, d)
if err1 := d.Close(); err1 != nil {

@@ -159,7 +159,7 @@ func (fs *FS) DeletePath(path string) (uint64, error) {
// The file could be deleted earlier via symlink.
return 0, nil
}
return 0, fmt.Errorf("cannot open %q at %q: %w", path, fullPath, err)
return 0, fmt.Errorf("cannot open %q: %w", path, err)
}
fi, err := f.Stat()
_ = f.Close()

@@ -107,12 +107,12 @@ func (fs *FS) CopyPart(srcFS common.OriginFS, p common.Part) error {
// Cannot create hardlink. Just copy file contents
srcFile, err := os.Open(srcPath)
if err != nil {
return fmt.Errorf("cannot open file %q: %w", srcPath, err)
return fmt.Errorf("cannot open source file: %w", err)
}
dstFile, err := os.Create(dstPath)
if err != nil {
_ = srcFile.Close()
return fmt.Errorf("cannot create file %q: %w", dstPath, err)
return fmt.Errorf("cannot create destination file: %w", err)
}
n, err := io.Copy(dstFile, srcFile)
if err1 := dstFile.Close(); err1 != nil {
@@ -141,7 +141,7 @@ func (fs *FS) DownloadPart(p common.Part, w io.Writer) error {
path := fs.path(p)
r, err := os.Open(path)
if err != nil {
return fmt.Errorf("cannot open %q: %w", path, err)
return err
}
n, err := io.Copy(w, r)
if err1 := r.Close(); err1 != nil && err == nil {

@@ -79,7 +79,7 @@ func OpenReaderAt(path string, offset int64, nocache bool) (*Reader, error) {
func Open(path string, nocache bool) (*Reader, error) {
f, err := os.Open(path)
if err != nil {
return nil, fmt.Errorf("cannot open file %q: %w", path, err)
return nil, err
}
r := &Reader{
f: f,
@@ -179,7 +179,7 @@ type Writer struct {
func OpenWriterAt(path string, offset int64, nocache bool) (*Writer, error) {
f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE, 0600)
if err != nil {
return nil, fmt.Errorf("cannot open %q: %w", path, err)
return nil, err
}
n, err := f.Seek(offset, io.SeekStart)
if err != nil {

57 lib/fs/fs.go
@@ -25,11 +25,38 @@ func MustSyncPath(path string) {
mustSyncPath(path)
}

// WriteFileAndSync writes data to the file at path and then calls fsync on the created file.
//
// The fsync guarantees that the written data survives hardware reset after successful call.
//
// This function may leave the file at the path in inconsistent state on app crash
// in the middle of the write.
// Use WriteFileAtomically if the file at the path must be either written in full
// or not written at all on app crash in the middle of the write.
func WriteFileAndSync(path string, data []byte) error {
f, err := filestream.Create(path, false)
if err != nil {
return err
}
if _, err := f.Write(data); err != nil {
f.MustClose()
// Do not call MustRemoveAll(path), so the user could inspect
// the file contents during investigation of the issue.
return fmt.Errorf("cannot write %d bytes to %q: %w", len(data), path, err)
}
// Sync and close the file.
f.MustClose()
return nil
}

// WriteFileAtomically atomically writes data to the given file path.
//
// WriteFileAtomically returns only after the file is fully written and synced
// This function returns only after the file is fully written and synced
// to the underlying storage.
//
// This function guarantees that the file at path is either fully written or not written at all on app crash
// in the middle of the write.
//
// If the file at path already exists, then the file is overwritten atomically if canOverwrite is true.
// Otherwise error is returned.
func WriteFileAtomically(path string, data []byte, canOverwrite bool) error {
@@ -40,26 +67,18 @@ func WriteFileAtomically(path string, data []byte, canOverwrite bool) error {
return fmt.Errorf("cannot create file %q, since it already exists", path)
}

// Write data to a temporary file.
n := atomic.AddUint64(&tmpFileNum, 1)
tmpPath := fmt.Sprintf("%s.tmp.%d", path, n)
f, err := filestream.Create(tmpPath, false)
if err != nil {
return fmt.Errorf("cannot create file %q: %w", tmpPath, err)
}
if _, err := f.Write(data); err != nil {
f.MustClose()
MustRemoveAll(tmpPath)
return fmt.Errorf("cannot write %d bytes to file %q: %w", len(data), tmpPath, err)
if err := WriteFileAndSync(tmpPath, data); err != nil {
return fmt.Errorf("cannot write data to temporary file: %w", err)
}

// Sync and close the file.
f.MustClose()

// Atomically move the file from tmpPath to path.
// Atomically move the temporary file from tmpPath to path.
if err := os.Rename(tmpPath, path); err != nil {
// do not call MustRemoveAll(tmpPath) here, so the user could inspect
// the file contents during investigating the issue.
return fmt.Errorf("cannot move %q to %q: %w", tmpPath, path, err)
// the file contents during investigation of the issue.
return fmt.Errorf("cannot move temporary file %q to %q: %w", tmpPath, path, err)
}

// Sync the containing directory, so the file is guaranteed to appear in the directory.
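To make the split between the two helpers concrete, here is a minimal usage sketch (the file paths are illustrative, not from this diff): `WriteFileAndSync` is enough for files that can be discarded or inspected after a crash, while `WriteFileAtomically` is for files that must never be observed half-written.

```go
// Durable but not atomic: a crash mid-write may leave a partial file behind.
if err := fs.WriteFileAndSync("/data/cache/state.bin", data); err != nil {
	return err
}
// Durable and atomic: readers see either the old contents or the new ones,
// never a mix, because the data goes through a temporary file plus rename.
if err := fs.WriteFileAtomically("/data/metadata.json", data, true); err != nil {
	return err
}
```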
@@ -123,7 +142,7 @@ func RemoveDirContents(dir string) {
}
d, err := os.Open(dir)
if err != nil {
logger.Panicf("FATAL: cannot open dir %q: %s", dir, err)
logger.Panicf("FATAL: cannot open dir: %s", err)
}
defer MustClose(d)
names, err := d.Readdirnames(-1)
@@ -185,7 +204,7 @@ func IsEmptyDir(path string) bool {
// See https://stackoverflow.com/a/30708914/274937
f, err := os.Open(path)
if err != nil {
logger.Panicf("FATAL: unexpected error when opening directory %q: %s", path, err)
logger.Panicf("FATAL: cannot open dir: %s", err)
}
_, err = f.Readdirnames(1)
MustClose(f)
@@ -230,7 +249,7 @@ var atomicDirRemoveCounter = uint64(time.Now().UnixNano())
func MustRemoveTemporaryDirs(dir string) {
d, err := os.Open(dir)
if err != nil {
logger.Panicf("FATAL: cannot open dir %q: %s", dir, err)
logger.Panicf("FATAL: cannot open dir: %s", err)
}
defer MustClose(d)
fis, err := d.Readdir(-1)
@@ -259,7 +278,7 @@ func HardLinkFiles(srcDir, dstDir string) error {

d, err := os.Open(srcDir)
if err != nil {
return fmt.Errorf("cannot open srcDir=%q: %w", srcDir, err)
return fmt.Errorf("cannot open srcDir: %w", err)
}
defer func() {
if err := d.Close(); err != nil {

@@ -19,7 +19,7 @@ func mUnmap(data []byte) error {
func mustSyncPath(path string) {
d, err := os.Open(path)
if err != nil {
logger.Panicf("FATAL: cannot open %q: %s", path, err)
logger.Panicf("FATAL: cannot open file for fsync: %s", err)
}
if err := d.Sync(); err != nil {
_ = d.Close()
@@ -51,7 +51,7 @@ func createFlockFile(flockFile string) (*os.File, error) {
func mustGetFreeSpace(path string) uint64 {
d, err := os.Open(path)
if err != nil {
logger.Panicf("FATAL: cannot determine free disk space on %q: %s", path, err)
logger.Panicf("FATAL: cannot open dir for determining free disk space: %s", err)
}
defer MustClose(d)

@@ -22,7 +22,7 @@ func mUnmap(data []byte) error {
func mustSyncPath(path string) {
d, err := os.Open(path)
if err != nil {
logger.Panicf("FATAL: cannot open %q: %s", path, err)
logger.Panicf("FATAL: cannot open file for fsync: %s", err)
}
if err := d.Sync(); err != nil {
_ = d.Close()
@@ -47,7 +47,7 @@ func createFlockFile(flockFile string) (*os.File, error) {
func mustGetFreeSpace(path string) uint64 {
d, err := os.Open(path)
if err != nil {
logger.Panicf("FATAL: cannot determine free disk space on %q: %s", path, err)
logger.Panicf("FATAL: cannot open dir for determining free disk space: %s", err)
}
defer MustClose(d)

@@ -89,7 +89,7 @@ func (r *ReaderAt) MustFadviseSequentialRead(prefetch bool) {
func MustOpenReaderAt(path string) *ReaderAt {
f, err := os.Open(path)
if err != nil {
logger.Panicf("FATAL: cannot open file %q for reading: %s", path, err)
logger.Panicf("FATAL: cannot open file for reading: %s", err)
}
var r ReaderAt
r.f = f

@@ -63,13 +63,10 @@ func (bsw *blockStreamWriter) reset() {
bsw.mrFirstItemCaught = false
}

func (bsw *blockStreamWriter) InitFromInmemoryPart(mp *inmemoryPart) {
func (bsw *blockStreamWriter) InitFromInmemoryPart(mp *inmemoryPart, compressLevel int) {
bsw.reset()

// Use the minimum compression level for in-memory blocks,
// since they are going to be re-compressed during the merge into file-based blocks.
bsw.compressLevel = -5 // See https://github.com/facebook/zstd/releases/tag/v1.3.4

bsw.compressLevel = compressLevel
bsw.metaindexWriter = &mp.metaindexData
bsw.indexWriter = &mp.indexData
bsw.itemsWriter = &mp.itemsData

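The hunk above turns the previously hard-coded zstd level into a caller-supplied parameter. A rough sketch of how a caller might pick the level, assuming the same convention the tests further down use (negative levels trade compression ratio for speed; the helper name is hypothetical, not part of this commit):

```go
// pickCompressLevel is an illustrative helper, not from this diff.
func pickCompressLevel(inMemory bool) int {
	if inMemory {
		// In-memory parts get re-compressed when merged into
		// file-based parts, so the fastest (negative) level is enough.
		// See https://github.com/facebook/zstd/releases/tag/v1.3.4
		return -5
	}
	// A modest default for longer-lived file-based parts.
	return 1
}
```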
@@ -47,7 +47,9 @@ func (it Item) String(data []byte) string {
return *(*string)(unsafe.Pointer(sh))
}

func (ib *inmemoryBlock) Len() int { return len(ib.items) }
func (ib *inmemoryBlock) Len() int {
return len(ib.items)
}

func (ib *inmemoryBlock) Less(i, j int) bool {
items := ib.items

@@ -115,7 +115,7 @@ func TestInmemoryBlockMarshalUnmarshal(t *testing.T) {
var itemsLen uint32
var mt marshalType

for i := 0; i < 1000; i++ {
for i := 0; i < 1000; i += 10 {
var items []string
totalLen := 0
ib.Reset()

@@ -1,8 +1,12 @@
package mergeset

import (
"fmt"
"path/filepath"

"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/encoding"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)

@@ -28,6 +32,36 @@ func (mp *inmemoryPart) Reset() {
mp.lensData.Reset()
}

// StoreToDisk stores mp to the given path on disk.
func (mp *inmemoryPart) StoreToDisk(path string) error {
if err := fs.MkdirAllIfNotExist(path); err != nil {
return fmt.Errorf("cannot create directory %q: %w", path, err)
}
metaindexPath := path + "/metaindex.bin"
if err := fs.WriteFileAndSync(metaindexPath, mp.metaindexData.B); err != nil {
return fmt.Errorf("cannot store metaindex: %w", err)
}
indexPath := path + "/index.bin"
if err := fs.WriteFileAndSync(indexPath, mp.indexData.B); err != nil {
return fmt.Errorf("cannot store index: %w", err)
}
itemsPath := path + "/items.bin"
if err := fs.WriteFileAndSync(itemsPath, mp.itemsData.B); err != nil {
return fmt.Errorf("cannot store items: %w", err)
}
lensPath := path + "/lens.bin"
if err := fs.WriteFileAndSync(lensPath, mp.lensData.B); err != nil {
return fmt.Errorf("cannot store lens: %w", err)
}
if err := mp.ph.WriteMetadata(path); err != nil {
return fmt.Errorf("cannot store metadata: %w", err)
}
// Sync parent directory in order to make sure the written files remain visible after hardware reset
parentDirPath := filepath.Dir(path)
fs.MustSyncPath(parentDirPath)
return nil
}

// Init initializes mp from ib.
func (mp *inmemoryPart) Init(ib *inmemoryBlock) {
mp.Reset()
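For orientation, a minimal sketch of how the new `StoreToDisk` could be driven from inside `lib/mergeset` (the part directory path is hypothetical): it writes `metaindex.bin`, `index.bin`, `items.bin` and `lens.bin` plus the part metadata, then fsyncs the parent directory so the new part survives a hardware reset.

```go
// Illustrative caller, not part of this commit.
var mp inmemoryPart
mp.Init(&ib) // ib is an inmemoryBlock populated elsewhere
if err := mp.StoreToDisk("/path/to/table/parts/part-42"); err != nil {
	logger.Panicf("FATAL: cannot persist in-memory part: %s", err)
}
```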
@@ -60,14 +94,14 @@ func (mp *inmemoryPart) Init(ib *inmemoryBlock) {

bb := inmemoryPartBytePool.Get()
bb.B = mp.bh.Marshal(bb.B[:0])
mp.indexData.B = encoding.CompressZSTDLevel(mp.indexData.B[:0], bb.B, 0)
mp.indexData.B = encoding.CompressZSTDLevel(mp.indexData.B[:0], bb.B, compressLevel)

mp.mr.firstItem = append(mp.mr.firstItem[:0], mp.bh.firstItem...)
mp.mr.blockHeadersCount = 1
mp.mr.indexBlockOffset = 0
mp.mr.indexBlockSize = uint32(len(mp.indexData.B))
bb.B = mp.mr.Marshal(bb.B[:0])
mp.metaindexData.B = encoding.CompressZSTDLevel(mp.metaindexData.B[:0], bb.B, 0)
mp.metaindexData.B = encoding.CompressZSTDLevel(mp.metaindexData.B[:0], bb.B, compressLevel)
inmemoryPartBytePool.Put(bb)
}

@@ -76,9 +110,8 @@ var inmemoryPartBytePool bytesutil.ByteBufferPool
// It is safe calling NewPart multiple times.
// It is unsafe re-using mp while the returned part is in use.
func (mp *inmemoryPart) NewPart() *part {
ph := mp.ph
size := mp.size()
p, err := newPart(&ph, "", size, mp.metaindexData.NewReader(), &mp.indexData, &mp.itemsData, &mp.lensData)
p, err := newPart(&mp.ph, "", size, mp.metaindexData.NewReader(), &mp.indexData, &mp.itemsData, &mp.lensData)
if err != nil {
logger.Panicf("BUG: cannot create a part from inmemoryPart: %s", err)
}
@@ -86,5 +119,5 @@ func (mp *inmemoryPart) NewPart() *part {
}

func (mp *inmemoryPart) size() uint64 {
return uint64(len(mp.metaindexData.B) + len(mp.indexData.B) + len(mp.itemsData.B) + len(mp.lensData.B))
return uint64(cap(mp.metaindexData.B) + cap(mp.indexData.B) + cap(mp.itemsData.B) + cap(mp.lensData.B))
}

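The `size()` change from `len` to `cap` is worth a note: these byte buffers are pooled and may retain more allocated memory than their current length, so capacity is the better estimate of real memory usage. A standalone illustration of the difference:

```go
package main

import "fmt"

func main() {
	b := make([]byte, 0, 1024)
	b = append(b, "abc"...)
	// len reports the bytes in use; cap reports the bytes actually allocated.
	fmt.Println(len(b), cap(b)) // 3 1024
}
```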
@@ -30,14 +30,14 @@ func TestMultilevelMerge(t *testing.T) {
// First level merge
var dstIP1 inmemoryPart
var bsw1 blockStreamWriter
bsw1.InitFromInmemoryPart(&dstIP1)
bsw1.InitFromInmemoryPart(&dstIP1, -5)
if err := mergeBlockStreams(&dstIP1.ph, &bsw1, bsrs[:5], nil, nil, &itemsMerged); err != nil {
t.Fatalf("cannot merge first level part 1: %s", err)
}

var dstIP2 inmemoryPart
var bsw2 blockStreamWriter
bsw2.InitFromInmemoryPart(&dstIP2)
bsw2.InitFromInmemoryPart(&dstIP2, -5)
if err := mergeBlockStreams(&dstIP2.ph, &bsw2, bsrs[5:], nil, nil, &itemsMerged); err != nil {
t.Fatalf("cannot merge first level part 2: %s", err)
}
@@ -54,7 +54,7 @@ func TestMultilevelMerge(t *testing.T) {
newTestBlockStreamReader(&dstIP1),
newTestBlockStreamReader(&dstIP2),
}
bsw.InitFromInmemoryPart(&dstIP)
bsw.InitFromInmemoryPart(&dstIP, 1)
if err := mergeBlockStreams(&dstIP.ph, &bsw, bsrsTop, nil, nil, &itemsMerged); err != nil {
t.Fatalf("cannot merge second level: %s", err)
}
@@ -73,7 +73,7 @@ func TestMergeForciblyStop(t *testing.T) {
bsrs, _ := newTestInmemoryBlockStreamReaders(20, 4000)
var dstIP inmemoryPart
var bsw blockStreamWriter
bsw.InitFromInmemoryPart(&dstIP)
bsw.InitFromInmemoryPart(&dstIP, 1)
ch := make(chan struct{})
var itemsMerged uint64
close(ch)
@@ -120,7 +120,7 @@ func testMergeBlockStreamsSerial(blocksToMerge, maxItemsPerBlock int) error {
var itemsMerged uint64
var dstIP inmemoryPart
var bsw blockStreamWriter
bsw.InitFromInmemoryPart(&dstIP)
bsw.InitFromInmemoryPart(&dstIP, -4)
if err := mergeBlockStreams(&dstIP.ph, &bsw, bsrs, nil, nil, &itemsMerged); err != nil {
return fmt.Errorf("cannot merge block streams: %w", err)
}

@@ -149,7 +149,7 @@ func newTestPart(blocksCount, maxItemsPerBlock int) (*part, []string, error) {
var itemsMerged uint64
var ip inmemoryPart
var bsw blockStreamWriter
bsw.InitFromInmemoryPart(&ip)
bsw.InitFromInmemoryPart(&ip, -3)
if err := mergeBlockStreams(&ip.ph, &bsw, bsrs, nil, nil, &itemsMerged); err != nil {
return nil, nil, fmt.Errorf("cannot merge blocks: %w", err)
}

File diff suppressed because it is too large
@@ -161,9 +161,7 @@ func newTestTable(path string, itemsCount int) (*Table, []string, error) {
items := make([]string, itemsCount)
for i := 0; i < itemsCount; i++ {
item := fmt.Sprintf("%d:%d", rand.Intn(1e9), i)
if err := tb.AddItems([][]byte{[]byte(item)}); err != nil {
return nil, nil, fmt.Errorf("cannot add item: %w", err)
}
tb.AddItems([][]byte{[]byte(item)})
items[i] = item
}
tb.DebugFlush()

@@ -7,8 +7,6 @@ import (
"sync"
"sync/atomic"
"testing"

"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)

func TestTableOpenClose(t *testing.T) {
@@ -31,7 +29,7 @@ func TestTableOpenClose(t *testing.T) {
tb.MustClose()

// Re-open created table multiple times.
for i := 0; i < 10; i++ {
for i := 0; i < 4; i++ {
tb, err := OpenTable(path, nil, nil, &isReadOnly)
if err != nil {
t.Fatalf("cannot open created table: %s", err)
@@ -53,7 +51,7 @@ func TestTableOpenMultipleTimes(t *testing.T) {
}
defer tb1.MustClose()

for i := 0; i < 10; i++ {
for i := 0; i < 4; i++ {
tb2, err := OpenTable(path, nil, nil, &isReadOnly)
if err == nil {
tb2.MustClose()
@@ -62,8 +60,8 @@ func TestTableOpenMultipleTimes(t *testing.T) {
}
}

func TestTableAddItemSerial(t *testing.T) {
const path = "TestTableAddItemSerial"
func TestTableAddItemsSerial(t *testing.T) {
const path = "TestTableAddItemsSerial"
if err := os.RemoveAll(path); err != nil {
t.Fatalf("cannot remove %q: %s", path, err)
}
@@ -81,7 +79,7 @@ func TestTableAddItemSerial(t *testing.T) {
t.Fatalf("cannot open %q: %s", path, err)
}

const itemsCount = 1e5
const itemsCount = 10e3
testAddItemsSerial(tb, itemsCount)

// Verify items count after pending items flush.
@@ -92,13 +90,13 @@ func TestTableAddItemSerial(t *testing.T) {

var m TableMetrics
tb.UpdateMetrics(&m)
if m.ItemsCount != itemsCount {
t.Fatalf("unexpected itemsCount; got %d; want %v", m.ItemsCount, itemsCount)
if n := m.TotalItemsCount(); n != itemsCount {
t.Fatalf("unexpected itemsCount; got %d; want %v", n, itemsCount)
}

tb.MustClose()

// Re-open the table and make sure ItemsCount remains the same.
// Re-open the table and make sure itemsCount remains the same.
testReopenTable(t, path, itemsCount)

// Add more items in order to verify merge between inmemory parts and file-based parts.
@@ -110,7 +108,7 @@ func TestTableAddItemSerial(t *testing.T) {
testAddItemsSerial(tb, moreItemsCount)
tb.MustClose()

// Re-open the table and verify ItemsCount again.
// Re-open the table and verify itemsCount again.
testReopenTable(t, path, itemsCount+moreItemsCount)
}

@@ -120,9 +118,7 @@ func testAddItemsSerial(tb *Table, itemsCount int) {
if len(item) > maxInmemoryBlockSize {
item = item[:maxInmemoryBlockSize]
}
if err := tb.AddItems([][]byte{item}); err != nil {
logger.Panicf("BUG: cannot add item to table: %s", err)
}
tb.AddItems([][]byte{item})
}
}

@@ -146,9 +142,7 @@ func TestTableCreateSnapshotAt(t *testing.T) {
const itemsCount = 3e5
for i := 0; i < itemsCount; i++ {
item := []byte(fmt.Sprintf("item %d", i))
if err := tb.AddItems([][]byte{item}); err != nil {
t.Fatalf("cannot add item to table: %s", err)
}
tb.AddItems([][]byte{item})
}
tb.DebugFlush()

@@ -221,9 +215,7 @@ func TestTableAddItemsConcurrent(t *testing.T) {
flushCallback := func() {
atomic.AddUint64(&flushes, 1)
}
var itemsMerged uint64
prepareBlock := func(data []byte, items []Item) ([]byte, []Item) {
atomic.AddUint64(&itemsMerged, uint64(len(items)))
return data, items
}
var isReadOnly uint32
@@ -232,7 +224,7 @@ func TestTableAddItemsConcurrent(t *testing.T) {
t.Fatalf("cannot open %q: %s", path, err)
}

const itemsCount = 1e5
const itemsCount = 10e3
testAddItemsConcurrent(tb, itemsCount)

// Verify items count after pending items flush.
@@ -240,20 +232,16 @@ func TestTableAddItemsConcurrent(t *testing.T) {
if atomic.LoadUint64(&flushes) == 0 {
t.Fatalf("unexpected zero flushes")
}
n := atomic.LoadUint64(&itemsMerged)
if n < itemsCount {
t.Fatalf("too low number of items merged; got %v; must be at least %v", n, itemsCount)
}

var m TableMetrics
tb.UpdateMetrics(&m)
if m.ItemsCount != itemsCount {
t.Fatalf("unexpected itemsCount; got %d; want %v", m.ItemsCount, itemsCount)
if n := m.TotalItemsCount(); n != itemsCount {
t.Fatalf("unexpected itemsCount; got %d; want %v", n, itemsCount)
}

tb.MustClose()

// Re-open the table and make sure ItemsCount remains the same.
// Re-open the table and make sure itemsCount remains the same.
testReopenTable(t, path, itemsCount)

// Add more items in order to verify merge between inmemory parts and file-based parts.
@@ -265,7 +253,7 @@ func TestTableAddItemsConcurrent(t *testing.T) {
testAddItemsConcurrent(tb, moreItemsCount)
tb.MustClose()

// Re-open the table and verify ItemsCount again.
// Re-open the table and verify itemsCount again.
testReopenTable(t, path, itemsCount+moreItemsCount)
}

@@ -282,9 +270,7 @@ func testAddItemsConcurrent(tb *Table, itemsCount int) {
if len(item) > maxInmemoryBlockSize {
item = item[:maxInmemoryBlockSize]
}
if err := tb.AddItems([][]byte{item}); err != nil {
logger.Panicf("BUG: cannot add item to table: %s", err)
}
tb.AddItems([][]byte{item})
}
}()
}
@@ -306,8 +292,8 @@ func testReopenTable(t *testing.T, path string, itemsCount int) {
}
var m TableMetrics
tb.UpdateMetrics(&m)
if m.ItemsCount != uint64(itemsCount) {
t.Fatalf("unexpected itemsCount after re-opening; got %d; want %v", m.ItemsCount, itemsCount)
if n := m.TotalItemsCount(); n != uint64(itemsCount) {
t.Fatalf("unexpected itemsCount after re-opening; got %d; want %v", n, itemsCount)
}
tb.MustClose()
}

@@ -106,7 +106,7 @@ func (gmt *graphiteMatchTemplate) Match(dst []string, s string) ([]string, bool)
dst = append(dst, s)
return dst, true
}
// Search for the the start of the next part.
// Search for the start of the next part.
p = parts[i+1]
i++
n := strings.Index(s, p)

@@ -137,7 +137,7 @@ func (cfg *Config) mustRestart(prevCfg *Config) {
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2884
needGlobalRestart := !areEqualGlobalConfigs(&cfg.Global, &prevCfg.Global)

// Loop over the the new jobs, start new ones and restart updated ones.
// Loop over the new jobs, start new ones and restart updated ones.
var started, stopped, restarted int
currentJobNames := make(map[string]struct{}, len(cfg.ScrapeConfigs))
for i, sc := range cfg.ScrapeConfigs {

@@ -67,6 +67,10 @@ type Series struct {
Metric string `json:"metric"`
Points []Point `json:"points"`
Tags []string `json:"tags"`
// The device field does not appear in the datadog docs, but datadog-agent does use it.
// Datadog agent (v7 at least), removes the tag "device" and adds it as its own field. Why? That I don't know!
// https://github.com/DataDog/datadog-agent/blob/0ada7a97fed6727838a6f4d9c87123d2aafde735/pkg/metrics/series.go#L84-L105
Device string `json:"device"`

// Do not decode Type, since it isn't used by VictoriaMetrics
// Type string `json:"type"`

@@ -56,6 +56,7 @@ func TestRequestUnmarshalSuccess(t *testing.T) {
"host": "test.example.com",
"interval": 20,
"metric": "system.load.1",
"device": "/dev/sda",
"points": [[
1575317847,
0.5
@@ -71,6 +72,7 @@ func TestRequestUnmarshalSuccess(t *testing.T) {
Series: []Series{{
Host: "test.example.com",
Metric: "system.load.1",
Device: "/dev/sda",
Points: []Point{{
1575317847,
0.5,

@@ -252,7 +252,7 @@ func (bsr *blockStreamReader) readBlock() error {
if err == io.EOF {
return io.EOF
}
return fmt.Errorf("cannot read index block from index data: %w", err)
return fmt.Errorf("cannot read index block: %w", err)
}
}

@@ -354,11 +354,11 @@ func (bsr *blockStreamReader) readIndexBlock() error {
// Read index block.
bsr.compressedIndexData = bytesutil.ResizeNoCopyMayOverallocate(bsr.compressedIndexData, int(bsr.mr.IndexBlockSize))
if err := fs.ReadFullData(bsr.indexReader, bsr.compressedIndexData); err != nil {
return fmt.Errorf("cannot read index block from index data at offset %d: %w", bsr.indexBlockOffset, err)
return fmt.Errorf("cannot read index block at offset %d: %w", bsr.indexBlockOffset, err)
}
tmpData, err := encoding.DecompressZSTD(bsr.indexData[:0], bsr.compressedIndexData)
if err != nil {
return fmt.Errorf("cannot decompress index block read at offset %d: %w", bsr.indexBlockOffset, err)
return fmt.Errorf("cannot decompress index block at offset %d: %w", bsr.indexBlockOffset, err)
}
bsr.indexData = tmpData
bsr.indexCursor = bsr.indexData

@@ -80,13 +80,10 @@ func (bsw *blockStreamWriter) reset() {
}

// InitFromInmemoryPart initializes bsw from inmemory part.
func (bsw *blockStreamWriter) InitFromInmemoryPart(mp *inmemoryPart) {
func (bsw *blockStreamWriter) InitFromInmemoryPart(mp *inmemoryPart, compressLevel int) {
bsw.reset()

// Use the minimum compression level for in-memory blocks,
// since they are going to be re-compressed during the merge into file-based blocks.
bsw.compressLevel = -5 // See https://github.com/facebook/zstd/releases/tag/v1.3.4

bsw.compressLevel = compressLevel
bsw.timestampsWriter = &mp.timestampsData
bsw.valuesWriter = &mp.valuesData
bsw.indexWriter = &mp.indexData

@@ -47,7 +47,7 @@ func benchmarkBlockStreamWriter(b *testing.B, ebs []Block, rowsCount int, writeR
}
}

bsw.InitFromInmemoryPart(&mp)
bsw.InitFromInmemoryPart(&mp, -5)
for i := range ebsCopy {
bsw.WriteExternalBlock(&ebsCopy[i], &ph, &rowsMerged)
}

@@ -400,12 +400,8 @@ func (is *indexSearch) maybeCreateIndexes(tsid *TSID, metricNameRaw []byte, date
return false, fmt.Errorf("cannot unmarshal metricNameRaw %q: %w", metricNameRaw, err)
}
mn.sortTags()
if err := is.createGlobalIndexes(tsid, mn); err != nil {
return false, fmt.Errorf("cannot create global indexes: %w", err)
}
if err := is.createPerDayIndexes(date, tsid.MetricID, mn); err != nil {
return false, fmt.Errorf("cannot create per-day indexes for date=%s: %w", dateToString(date), err)
}
is.createGlobalIndexes(tsid, mn)
is.createPerDayIndexes(date, tsid.MetricID, mn)
PutMetricName(mn)
atomic.AddUint64(&is.db.timeseriesRepopulated, 1)
return true, nil
@@ -599,12 +595,8 @@ func (is *indexSearch) createTSIDByName(dst *TSID, metricName, metricNameRaw []b
if err := is.db.s.registerSeriesCardinality(dst.MetricID, metricNameRaw); err != nil {
return err
}
if err := is.createGlobalIndexes(dst, mn); err != nil {
return fmt.Errorf("cannot create global indexes: %w", err)
}
if err := is.createPerDayIndexes(date, dst.MetricID, mn); err != nil {
return fmt.Errorf("cannot create per-day indexes for date=%s: %w", dateToString(date), err)
}
is.createGlobalIndexes(dst, mn)
is.createPerDayIndexes(date, dst.MetricID, mn)

// There is no need in invalidating tag cache, since it is invalidated
// on db.tb flush via invalidateTagFiltersCache flushCallback passed to OpenTable.
@@ -668,7 +660,7 @@ func generateTSID(dst *TSID, mn *MetricName) {
dst.MetricID = generateUniqueMetricID()
}

func (is *indexSearch) createGlobalIndexes(tsid *TSID, mn *MetricName) error {
func (is *indexSearch) createGlobalIndexes(tsid *TSID, mn *MetricName) {
// The order of index items is important.
// It guarantees index consistency.

@@ -699,7 +691,7 @@ func (is *indexSearch) createGlobalIndexes(tsid *TSID, mn *MetricName) error {
ii.registerTagIndexes(prefix.B, mn, tsid.MetricID)
kbPool.Put(prefix)

return is.db.tb.AddItems(ii.Items)
is.db.tb.AddItems(ii.Items)
}

type indexItems struct {
@@ -1640,9 +1632,7 @@ func (db *indexDB) searchMetricNameWithCache(dst []byte, metricID uint64) ([]byt

// Mark the metricID as deleted, so it will be created again when new data point
// for the given time series will arrive.
if err := db.deleteMetricIDs([]uint64{metricID}); err != nil {
return dst, fmt.Errorf("cannot delete metricID for missing metricID->metricName entry; metricID=%d; error: %w", metricID, err)
}
db.deleteMetricIDs([]uint64{metricID})
return dst, io.EOF
}

@@ -1669,9 +1659,7 @@ func (db *indexDB) DeleteTSIDs(qt *querytracer.Tracer, tfss []*TagFilters) (int,
if err != nil {
return 0, err
}
if err := db.deleteMetricIDs(metricIDs); err != nil {
return 0, err
}
db.deleteMetricIDs(metricIDs)

// Delete TSIDs in the extDB.
deletedCount := len(metricIDs)
@@ -1689,10 +1677,10 @@ func (db *indexDB) DeleteTSIDs(qt *querytracer.Tracer, tfss []*TagFilters) (int,
return deletedCount, nil
}

func (db *indexDB) deleteMetricIDs(metricIDs []uint64) error {
func (db *indexDB) deleteMetricIDs(metricIDs []uint64) {
if len(metricIDs) == 0 {
// Nothing to delete
return nil
return
}

// atomically add deleted metricIDs to an inmemory map.
@@ -1717,9 +1705,8 @@ func (db *indexDB) deleteMetricIDs(metricIDs []uint64) error {
items.B = encoding.MarshalUint64(items.B, metricID)
items.Next()
}
err := db.tb.AddItems(items.Items)
db.tb.AddItems(items.Items)
putIndexItems(items)
return err
}

func (db *indexDB) loadDeletedMetricIDs() (*uint64set.Set, error) {
@@ -2793,7 +2780,7 @@ const (
int64Max = int64((1 << 63) - 1)
)

func (is *indexSearch) createPerDayIndexes(date, metricID uint64, mn *MetricName) error {
func (is *indexSearch) createPerDayIndexes(date, metricID uint64, mn *MetricName) {
ii := getIndexItems()
defer putIndexItems(ii)

@@ -2808,11 +2795,8 @@ func (is *indexSearch) createPerDayIndexes(date, metricID uint64, mn *MetricName
kb.B = marshalCommonPrefix(kb.B[:0], nsPrefixDateTagToMetricIDs)
kb.B = encoding.MarshalUint64(kb.B, date)
ii.registerTagIndexes(kb.B, mn, metricID)
if err := is.db.tb.AddItems(ii.Items); err != nil {
return fmt.Errorf("cannot add per-day entires for metricID %d: %w", metricID, err)
}
is.db.tb.AddItems(ii.Items)
is.db.s.dateMetricIDCache.Set(date, metricID)
return nil
}

func (ii *indexItems) registerTagIndexes(prefix []byte, mn *MetricName, metricID uint64) {

@@ -523,22 +523,13 @@ func TestIndexDB(t *testing.T) {
}
}()

if err := testIndexDBBigMetricName(db); err != nil {
t.Fatalf("unexpected error: %s", err)
}
mns, tsids, err := testIndexDBGetOrCreateTSIDByName(db, metricGroups)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := testIndexDBBigMetricName(db); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := testIndexDBCheckTSIDByName(db, mns, tsids, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := testIndexDBBigMetricName(db); err != nil {
t.Fatalf("unexpected error: %s", err)
}

// Re-open the db and verify it works as expected.
db.MustClose()
@@ -546,15 +537,9 @@ func TestIndexDB(t *testing.T) {
if err != nil {
t.Fatalf("cannot open indexDB: %s", err)
}
if err := testIndexDBBigMetricName(db); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := testIndexDBCheckTSIDByName(db, mns, tsids, false); err != nil {
t.Fatalf("unexpected error: %s", err)
}
if err := testIndexDBBigMetricName(db); err != nil {
t.Fatalf("unexpected error: %s", err)
}
})

t.Run("concurrent", func(t *testing.T) {
@@ -577,27 +562,15 @@ func TestIndexDB(t *testing.T) {
ch := make(chan error, 3)
for i := 0; i < cap(ch); i++ {
go func() {
if err := testIndexDBBigMetricName(db); err != nil {
ch <- err
return
}
mns, tsid, err := testIndexDBGetOrCreateTSIDByName(db, metricGroups)
if err != nil {
ch <- err
return
}
if err := testIndexDBBigMetricName(db); err != nil {
ch <- err
return
}
if err := testIndexDBCheckTSIDByName(db, mns, tsid, true); err != nil {
ch <- err
return
}
if err := testIndexDBBigMetricName(db); err != nil {
ch <- err
return
}
ch <- nil
}()
}
@@ -618,74 +591,6 @@ func TestIndexDB(t *testing.T) {
})
}

func testIndexDBBigMetricName(db *indexDB) error {
var bigBytes []byte
for i := 0; i < 128*1000; i++ {
bigBytes = append(bigBytes, byte(i))
}
var mn MetricName
var tsid TSID

is := db.getIndexSearch(noDeadline)
defer db.putIndexSearch(is)

// Try creating too big metric group
mn.Reset()
mn.MetricGroup = append(mn.MetricGroup[:0], bigBytes...)
mn.sortTags()
metricName := mn.Marshal(nil)
metricNameRaw := mn.marshalRaw(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, metricNameRaw, 0); err == nil {
return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too big MetricGroup")
}

// Try creating too big tag key
mn.Reset()
mn.MetricGroup = append(mn.MetricGroup[:0], "xxx"...)
mn.Tags = []Tag{{
Key: append([]byte(nil), bigBytes...),
Value: []byte("foobar"),
}}
mn.sortTags()
metricName = mn.Marshal(nil)
metricNameRaw = mn.marshalRaw(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, metricNameRaw, 0); err == nil {
return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too big tag key")
}

// Try creating too big tag value
mn.Reset()
mn.MetricGroup = append(mn.MetricGroup[:0], "xxx"...)
mn.Tags = []Tag{{
Key: []byte("foobar"),
Value: append([]byte(nil), bigBytes...),
}}
mn.sortTags()
metricName = mn.Marshal(nil)
metricNameRaw = mn.marshalRaw(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, metricNameRaw, 0); err == nil {
return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too big tag value")
}

// Try creating metric name with too many tags
mn.Reset()
mn.MetricGroup = append(mn.MetricGroup[:0], "xxx"...)
for i := 0; i < 60000; i++ {
mn.Tags = append(mn.Tags, Tag{
Key: []byte(fmt.Sprintf("foobar %d", i)),
Value: []byte(fmt.Sprintf("sdfjdslkfj %d", i)),
})
}
mn.sortTags()
metricName = mn.Marshal(nil)
metricNameRaw = mn.marshalRaw(nil)
if err := is.GetOrCreateTSIDByName(&tsid, metricName, metricNameRaw, 0); err == nil {
return fmt.Errorf("expecting non-nil error on an attempt to insert metric with too many tags")
}

return nil
}

func testIndexDBGetOrCreateTSIDByName(db *indexDB, metricGroups int) ([]MetricName, []TSID, error) {
// Create tsids.
var mns []MetricName
@@ -727,9 +632,7 @@ func testIndexDBGetOrCreateTSIDByName(db *indexDB, metricGroups int) ([]MetricNa
date := uint64(timestampFromTime(time.Now())) / msecPerDay
for i := range tsids {
tsid := &tsids[i]
if err := is.createPerDayIndexes(date, tsid.MetricID, &mns[i]); err != nil {
return nil, nil, fmt.Errorf("error in createPerDayIndexes(%d, %d): %w", date, tsid.MetricID, err)
}
is.createPerDayIndexes(date, tsid.MetricID, &mns[i])
}

// Flush index to disk, so it becomes visible for search
@@ -1549,11 +1452,10 @@ func TestMatchTagFilters(t *testing.T) {

func TestIndexDBRepopulateAfterRotation(t *testing.T) {
path := "TestIndexRepopulateAfterRotation"
s, err := OpenStorage(path, 0, 1e5, 1e5)
s, err := OpenStorage(path, msecsPerMonth, 1e5, 1e5)
if err != nil {
t.Fatalf("cannot open storage: %s", err)
}
s.retentionMsecs = msecsPerMonth
defer func() {
s.MustClose()
if err := os.RemoveAll(path); err != nil {
@@ -1578,8 +1480,8 @@ func TestIndexDBRepopulateAfterRotation(t *testing.T) {
// verify the storage contains rows.
var m Metrics
s.UpdateMetrics(&m)
if m.TableMetrics.SmallRowsCount < uint64(metricRowsN) {
t.Fatalf("expecting at least %d rows in the table; got %d", metricRowsN, m.TableMetrics.SmallRowsCount)
if rowsCount := m.TableMetrics.TotalRowsCount(); rowsCount < uint64(metricRowsN) {
t.Fatalf("expecting at least %d rows in the table; got %d", metricRowsN, rowsCount)
}

// check new series were registered in indexDB
@@ -1721,9 +1623,7 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
for i := range tsids {
tsid := &tsids[i]
metricIDs.Add(tsid.MetricID)
if err := is.createPerDayIndexes(date, tsid.MetricID, &mns[i]); err != nil {
t.Fatalf("error in createPerDayIndexes(%d, %d): %s", date, tsid.MetricID, err)
}
is.createPerDayIndexes(date, tsid.MetricID, &mns[i])
}
allMetricIDs.Union(&metricIDs)
perDayMetricIDs[date] = &metricIDs

@@ -1,9 +1,13 @@
package storage

import (
"fmt"
"path/filepath"

"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/fs"
"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
)

@@ -31,6 +35,36 @@ func (mp *inmemoryPart) Reset() {
mp.creationTime = 0
}

// StoreToDisk stores the mp to the given path on disk.
func (mp *inmemoryPart) StoreToDisk(path string) error {
if err := fs.MkdirAllIfNotExist(path); err != nil {
return fmt.Errorf("cannot create directory %q: %w", path, err)
}
timestampsPath := path + "/timestamps.bin"
if err := fs.WriteFileAndSync(timestampsPath, mp.timestampsData.B); err != nil {
return fmt.Errorf("cannot store timestamps: %w", err)
}
valuesPath := path + "/values.bin"
if err := fs.WriteFileAndSync(valuesPath, mp.valuesData.B); err != nil {
return fmt.Errorf("cannot store values: %w", err)
}
indexPath := path + "/index.bin"
if err := fs.WriteFileAndSync(indexPath, mp.indexData.B); err != nil {
return fmt.Errorf("cannot store index: %w", err)
}
metaindexPath := path + "/metaindex.bin"
if err := fs.WriteFileAndSync(metaindexPath, mp.metaindexData.B); err != nil {
return fmt.Errorf("cannot store metaindex: %w", err)
}
if err := mp.ph.writeMinDedupInterval(path); err != nil {
return fmt.Errorf("cannot store min dedup interval: %w", err)
}
// Sync parent directory in order to make sure the written files remain visible after hardware reset
parentDirPath := filepath.Dir(path)
fs.MustSyncPath(parentDirPath)
return nil
}

// InitFromRows initializes mp from the given rows.
func (mp *inmemoryPart) InitFromRows(rows []rawRow) {
if len(rows) == 0 {
@@ -49,9 +83,12 @@ func (mp *inmemoryPart) InitFromRows(rows []rawRow) {
// It is safe calling NewPart multiple times.
// It is unsafe re-using mp while the returned part is in use.
func (mp *inmemoryPart) NewPart() (*part, error) {
ph := mp.ph
size := uint64(len(mp.timestampsData.B) + len(mp.valuesData.B) + len(mp.indexData.B) + len(mp.metaindexData.B))
return newPart(&ph, "", size, mp.metaindexData.NewReader(), &mp.timestampsData, &mp.valuesData, &mp.indexData)
size := mp.size()
return newPart(&mp.ph, "", size, mp.metaindexData.NewReader(), &mp.timestampsData, &mp.valuesData, &mp.indexData)
}

func (mp *inmemoryPart) size() uint64 {
return uint64(cap(mp.timestampsData.B) + cap(mp.valuesData.B) + cap(mp.indexData.B) + cap(mp.metaindexData.B))
}

func getInmemoryPart() *inmemoryPart {
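A rough way to see what the storage-side `StoreToDisk` leaves behind; the directory path and the `min_dedup_interval` file name are assumptions inferred from the calls above, not verified against this diff:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical part directory created by StoreToDisk.
	entries, err := os.ReadDir("/storage/data/small/2022_12/part-123")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		// Expected: index.bin, metaindex.bin, min_dedup_interval,
		// timestamps.bin, values.bin
		fmt.Println(e.Name())
	}
}
```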
@@ -178,6 +178,10 @@ func mergeBlocks(ob, ib1, ib2 *Block, retentionDeadline int64, rowsDeleted *uint
}

func skipSamplesOutsideRetention(b *Block, retentionDeadline int64, rowsDeleted *uint64) {
if b.bh.MinTimestamp >= retentionDeadline {
// Fast path - the block contains only samples with timestamps bigger than retentionDeadline.
return
}
timestamps := b.timestamps
nextIdx := b.nextIdx
nextIdxOrig := nextIdx

@@ -361,7 +361,7 @@ func TestMergeForciblyStop(t *testing.T) {

var mp inmemoryPart
var bsw blockStreamWriter
bsw.InitFromInmemoryPart(&mp)
bsw.InitFromInmemoryPart(&mp, -5)
ch := make(chan struct{})
var rowsMerged, rowsDeleted uint64
close(ch)
@@ -384,7 +384,7 @@ func testMergeBlockStreams(t *testing.T, bsrs []*blockStreamReader, expectedBloc
var mp inmemoryPart

var bsw blockStreamWriter
bsw.InitFromInmemoryPart(&mp)
bsw.InitFromInmemoryPart(&mp, -5)

strg := newTestStorage()
var rowsMerged, rowsDeleted uint64

@@ -41,7 +41,7 @@ func benchmarkMergeBlockStreams(b *testing.B, mps []*inmemoryPart, rowsPerLoop i
bsrs[i].InitFromInmemoryPart(mp)
}
mpOut.Reset()
bsw.InitFromInmemoryPart(&mpOut)
bsw.InitFromInmemoryPart(&mpOut, -5)
if err := mergeBlockStreams(&mpOut.ph, &bsw, bsrs, nil, strg, 0, &rowsMerged, &rowsDeleted); err != nil {
panic(fmt.Errorf("cannot merge block streams: %w", err))
}

@@ -228,24 +228,29 @@ func (ps *partSearch) readIndexBlock(mr *metaindexRow) (*indexBlock, error) {
}

func (ps *partSearch) searchBHS() bool {
for i := range ps.bhs {
bh := &ps.bhs[i]

nextTSID:
if bh.TSID.Less(&ps.BlockRef.bh.TSID) {
// Skip blocks with small tsid values.
continue
bhs := ps.bhs
for len(bhs) > 0 {
// Skip block headers with tsids smaller than the given tsid.
tsid := &ps.BlockRef.bh.TSID
n := sort.Search(len(bhs), func(i int) bool {
return !bhs[i].TSID.Less(tsid)
})
if n == len(bhs) {
// Nothing found.
break
}
bhs = bhs[n:]

// Invariant: ps.BlockRef.bh.TSID <= bh.TSID
// Invariant: tsid <= bh.TSID

if bh.TSID.MetricID != ps.BlockRef.bh.TSID.MetricID {
// ps.BlockRef.bh.TSID < bh.TSID: no more blocks with the given tsid.
bh := &bhs[0]
if bh.TSID.MetricID != tsid.MetricID {
// tsid < bh.TSID: no more blocks with the given tsid.
// Proceed to the next (bigger) tsid.
if !ps.nextTSID() {
return false
}
goto nextTSID
continue
}

// Found the block with the given tsid. Verify timestamp range.
@@ -254,6 +259,7 @@ func (ps *partSearch) searchBHS() bool {
// So use linear search instead of binary search.
if bh.MaxTimestamp < ps.tr.MinTimestamp {
// Skip the block with too small timestamps.
bhs = bhs[1:]
continue
}
if bh.MinTimestamp > ps.tr.MaxTimestamp {
@@ -269,10 +275,9 @@ func (ps *partSearch) searchBHS() bool {
// Read it.
ps.BlockRef.init(ps.p, bh)

ps.bhs = ps.bhs[i+1:]
ps.bhs = bhs[1:]
return true
}

ps.bhs = nil
return false
}
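The rewrite above replaces a linear scan plus `goto` with a binary search over the sorted block headers. The skip pattern in isolation, using plain ints in place of TSIDs so it runs standalone:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	bhs := []int{1, 3, 3, 7, 9, 12} // stands in for sorted block-header TSIDs
	target := 7
	// Find the first index whose value is >= target; everything before it
	// can be skipped in one step instead of element by element.
	n := sort.Search(len(bhs), func(i int) bool { return bhs[i] >= target })
	if n == len(bhs) {
		fmt.Println("nothing found")
		return
	}
	bhs = bhs[n:]
	fmt.Println(bhs) // [7 9 12]
}
```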
Some files were not shown because too many files have changed in this diff