Mirror of https://github.com/VictoriaMetrics/VictoriaMetrics.git, synced 2024-12-01 14:47:38 +00:00
Merge branch 'public-single-node' into pmm-6401-read-prometheus-data-files
This commit is contained in: commit d9166e899e
120 changed files with 3022 additions and 1781 deletions
README.md: 16 changed lines
@@ -820,13 +820,13 @@ Send a request to `http://<victoriametrics-addr>:8428/api/v1/export/native?match
 where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
 for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time series.
 
-On large databases you may experience problems with limit on unique timeseries (default value is 300000). In this case you need to adjust `-search.maxUniqueTimeseries` parameter:
+On large databases you may experience problems with the limit on the number of time series that can be exported. In this case you need to adjust the `-search.maxExportSeries` command-line flag:
 
 ```bash
 # count unique timeseries in database
 wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'
 
-# relaunch victoriametrics with search.maxUniqueTimeseries more than value from previous command
+# relaunch victoriametrics with search.maxExportSeries set to more than the value from the previous command
 ```
 
 Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
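For readers who prefer Go to the wget/jq pipeline above, here is a minimal sketch of the same check. The endpoint comes from the snippet above; the response shape is inferred from the `.data[0]` jq expression, so treat it as an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Query the series-count endpoint, then pick a -search.maxExportSeries
	// value larger than the printed result before relaunching VictoriaMetrics.
	resp, err := http.Get("http://your_victoriametrics_instance:8428/api/v1/series/count")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Assumed response shape, mirroring jq '.data[0]'.
	var r struct {
		Data []uint64 `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
		panic(err)
	}
	fmt.Println("unique series:", r.Data[0])
}
```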
@@ -1835,6 +1835,12 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
 The maximum number of concurrent search requests. It shouldn't be high, since a single request can saturate all the CPU cores. See also -search.maxQueueDuration (default 8)
 -search.maxExportDuration duration
 The maximum duration for /api/v1/export call (default 720h0m0s)
+-search.maxExportSeries int
+The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage (default 1000000)
+-search.maxFederateSeries int
+The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage (default 300000)
+-search.maxGraphiteSeries int
+The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage (default 300000)
 -search.maxLookback duration
 Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
 -search.maxPointsPerTimeseries int
@@ -1850,12 +1856,16 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
 The maximum number of raw samples a single query can process across all time series. This protects from heavy queries, which select unexpectedly high number of raw samples. See also -search.maxSamplesPerSeries (default 1000000000)
 -search.maxSamplesPerSeries int
 The maximum number of raw samples a single query can scan per each time series. This option allows limiting memory usage (default 30000000)
+-search.maxSeries int
+The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage (default 10000)
 -search.maxStalenessInterval duration
 The maximum interval for staleness calculations. By default it is automatically calculated from the median interval between samples. This flag could be useful for tuning Prometheus data model closer to Influx-style data model. See https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness for details. See also '-search.maxLookback' flag, which has the same meaning due to historical reasons
 -search.maxStatusRequestDuration duration
 The maximum duration for /api/v1/status/* requests (default 5m0s)
 -search.maxStepForPointsAdjustment duration
 The maximum step when /api/v1/query_range handler adjusts points with timestamps closer than -search.latencyOffset to the current time. The adjustment is needed because such points may contain incomplete data (default 1m0s)
+-search.maxTSDBStatusSeries int
+The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage (default 1000000)
 -search.maxTagKeys int
 The maximum number of tag keys returned from /api/v1/labels (default 100000)
 -search.maxTagValueSuffixesPerSearch int
@@ -1863,7 +1873,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
 -search.maxTagValues int
 The maximum number of tag values returned from /api/v1/label/<label_name>/values (default 100000)
 -search.maxUniqueTimeseries int
-The maximum number of unique time series each search can scan. This option allows limiting memory usage (default 300000)
+The maximum number of unique time series, which can be selected during /api/v1/query and /api/v1/query_range queries. This option allows limiting memory usage (default 300000)
 -search.minStalenessInterval duration
 The minimum interval for staleness calculations. This flag could be useful for removing gaps on graphs generated from time series with irregular intervals between samples. See also '-search.maxStalenessInterval'
 -search.noStaleMarkers
@@ -54,7 +54,7 @@ func TagsDelSeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Re
 		})
 	}
 	tfss := joinTagFilterss(tfs, etfs)
-	sq := storage.NewSearchQuery(0, ct, tfss)
+	sq := storage.NewSearchQuery(0, ct, tfss, 0)
 	n, err := netstorage.DeleteSeries(sq, deadline)
 	if err != nil {
 		return fmt.Errorf("cannot delete series for %q: %w", sq, err)
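This is the first of many call sites updated for the new `storage.NewSearchQuery` signature. A sketch of what the updated type and constructor presumably look like; field names are inferred from `sq.MinTimestamp`, `sq.MaxTimestamp`, `sq.TagFilterss` and `sq.MaxMetrics` used later in this diff, not copied from the source:

```go
// Sketch, not verbatim source: SearchQuery as this diff implies it now looks.
type SearchQuery struct {
	MinTimestamp int64
	MaxTimestamp int64
	TagFilterss  [][]TagFilter
	// MaxMetrics limits how many unique series the query may touch.
	// Zero means "no limit"; the delete paths above pass 0.
	MaxMetrics int
}

// NewSearchQuery threads the caller-chosen limit into the query.
func NewSearchQuery(start, end int64, tagFilterss [][]TagFilter, maxMetrics int) *SearchQuery {
	return &SearchQuery{
		MinTimestamp: start,
		MaxTimestamp: end,
		TagFilterss:  tagFilterss,
		MaxMetrics:   maxMetrics,
	}
}
```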
@@ -196,7 +196,7 @@ func TagsAutoCompleteValuesHandler(startTime time.Time, w http.ResponseWriter, r
 		}
 	} else {
 		// Slow path: use netstorage.SearchMetricNames for applying `expr` filters.
-		sq, err := getSearchQueryForExprs(startTime, etfs, exprs)
+		sq, err := getSearchQueryForExprs(startTime, etfs, exprs, limit*10)
 		if err != nil {
 			return err
 		}
@@ -282,7 +282,7 @@ func TagsAutoCompleteTagsHandler(startTime time.Time, w http.ResponseWriter, r *
 		}
 	} else {
 		// Slow path: use netstorage.SearchMetricNames for applying `expr` filters.
-		sq, err := getSearchQueryForExprs(startTime, etfs, exprs)
+		sq, err := getSearchQueryForExprs(startTime, etfs, exprs, limit*10)
 		if err != nil {
 			return err
 		}
@@ -349,7 +349,7 @@ func TagsFindSeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.R
 	if err != nil {
 		return fmt.Errorf("cannot setup tag filters: %w", err)
 	}
-	sq, err := getSearchQueryForExprs(startTime, etfs, exprs)
+	sq, err := getSearchQueryForExprs(startTime, etfs, exprs, limit*10)
 	if err != nil {
 		return err
 	}
@@ -474,14 +474,14 @@ func getInt(r *http.Request, argName string) (int, error) {
 	return n, nil
 }
 
-func getSearchQueryForExprs(startTime time.Time, etfs [][]storage.TagFilter, exprs []string) (*storage.SearchQuery, error) {
+func getSearchQueryForExprs(startTime time.Time, etfs [][]storage.TagFilter, exprs []string, maxMetrics int) (*storage.SearchQuery, error) {
 	tfs, err := exprsToTagFilters(exprs)
 	if err != nil {
 		return nil, err
 	}
 	ct := startTime.UnixNano() / 1e6
 	tfss := joinTagFilterss(tfs, etfs)
-	sq := storage.NewSearchQuery(0, ct, tfss)
+	sq := storage.NewSearchQuery(0, ct, tfss, maxMetrics)
 	return sq, nil
 }
@ -27,7 +27,6 @@ var (
|
||||||
maxTagKeysPerSearch = flag.Int("search.maxTagKeys", 100e3, "The maximum number of tag keys returned from /api/v1/labels")
|
maxTagKeysPerSearch = flag.Int("search.maxTagKeys", 100e3, "The maximum number of tag keys returned from /api/v1/labels")
|
||||||
maxTagValuesPerSearch = flag.Int("search.maxTagValues", 100e3, "The maximum number of tag values returned from /api/v1/label/<label_name>/values")
|
maxTagValuesPerSearch = flag.Int("search.maxTagValues", 100e3, "The maximum number of tag values returned from /api/v1/label/<label_name>/values")
|
||||||
maxTagValueSuffixesPerSearch = flag.Int("search.maxTagValueSuffixesPerSearch", 100e3, "The maximum number of tag value suffixes returned from /metrics/find")
|
maxTagValueSuffixesPerSearch = flag.Int("search.maxTagValueSuffixesPerSearch", 100e3, "The maximum number of tag value suffixes returned from /metrics/find")
|
||||||
maxMetricsPerSearch = flag.Int("search.maxUniqueTimeseries", 300e3, "The maximum number of unique time series each search can scan. This option allows limiting memory usage")
|
|
||||||
maxSamplesPerSeries = flag.Int("search.maxSamplesPerSeries", 30e6, "The maximum number of raw samples a single query can scan per each time series. This option allows limiting memory usage")
|
maxSamplesPerSeries = flag.Int("search.maxSamplesPerSeries", 30e6, "The maximum number of raw samples a single query can scan per each time series. This option allows limiting memory usage")
|
||||||
maxSamplesPerQuery = flag.Int("search.maxSamplesPerQuery", 1e9, "The maximum number of raw samples a single query can process across all time series. This protects from heavy queries, which select unexpectedly high number of raw samples. See also -search.maxSamplesPerSeries")
|
maxSamplesPerQuery = flag.Int("search.maxSamplesPerQuery", 1e9, "The maximum number of raw samples a single query can process across all time series. This protects from heavy queries, which select unexpectedly high number of raw samples. See also -search.maxSamplesPerSeries")
|
||||||
)
|
)
|
||||||
|
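With the package-level flag gone, every netstorage entry point below now receives the limit from its caller through the `SearchQuery`. A minimal sketch of the new calling convention; the concrete value is illustrative only:

```go
// Sketch (values illustrative): the limit now arrives inside SearchQuery
// instead of being read from a netstorage package-level flag. Each vmselect
// endpoint passes its own flag value here.
sq := storage.NewSearchQuery(start, end, tagFilterss, 300000)
rss, err := netstorage.ProcessSearchQuery(sq, true, deadline)
```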
@ -642,7 +641,7 @@ func DeleteSeries(sq *storage.SearchQuery, deadline searchutils.Deadline) (int,
|
||||||
MinTimestamp: sq.MinTimestamp,
|
MinTimestamp: sq.MinTimestamp,
|
||||||
MaxTimestamp: sq.MaxTimestamp,
|
MaxTimestamp: sq.MaxTimestamp,
|
||||||
}
|
}
|
||||||
tfss, err := setupTfss(tr, sq.TagFilterss, deadline)
|
tfss, err := setupTfss(tr, sq.TagFilterss, sq.MaxMetrics, deadline)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return 0, err
|
return 0, err
|
||||||
}
|
}
|
||||||
|
@ -900,11 +899,11 @@ func GetLabelEntries(deadline searchutils.Deadline) ([]storage.TagEntry, error)
|
||||||
}
|
}
|
||||||
|
|
||||||
// GetTSDBStatusForDate returns tsdb status according to https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats
|
// GetTSDBStatusForDate returns tsdb status according to https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-stats
|
||||||
func GetTSDBStatusForDate(deadline searchutils.Deadline, date uint64, topN int) (*storage.TSDBStatus, error) {
|
func GetTSDBStatusForDate(deadline searchutils.Deadline, date uint64, topN, maxMetrics int) (*storage.TSDBStatus, error) {
|
||||||
if deadline.Exceeded() {
|
if deadline.Exceeded() {
|
||||||
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
|
return nil, fmt.Errorf("timeout exceeded before starting the query processing: %s", deadline.String())
|
||||||
}
|
}
|
||||||
status, err := vmstorage.GetTSDBStatusForDate(date, topN, deadline.Deadline())
|
status, err := vmstorage.GetTSDBStatusForDate(date, topN, maxMetrics, deadline.Deadline())
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("error during tsdb status request: %w", err)
|
return nil, fmt.Errorf("error during tsdb status request: %w", err)
|
||||||
}
|
}
|
||||||
|
@ -922,12 +921,12 @@ func GetTSDBStatusWithFilters(deadline searchutils.Deadline, sq *storage.SearchQ
|
||||||
MinTimestamp: sq.MinTimestamp,
|
MinTimestamp: sq.MinTimestamp,
|
||||||
MaxTimestamp: sq.MaxTimestamp,
|
MaxTimestamp: sq.MaxTimestamp,
|
||||||
}
|
}
|
||||||
tfss, err := setupTfss(tr, sq.TagFilterss, deadline)
|
tfss, err := setupTfss(tr, sq.TagFilterss, sq.MaxMetrics, deadline)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
date := uint64(tr.MinTimestamp) / (3600 * 24 * 1000)
|
date := uint64(tr.MinTimestamp) / (3600 * 24 * 1000)
|
||||||
status, err := vmstorage.GetTSDBStatusWithFiltersForDate(tfss, date, topN, deadline.Deadline())
|
status, err := vmstorage.GetTSDBStatusWithFiltersForDate(tfss, date, topN, sq.MaxMetrics, deadline.Deadline())
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("error during tsdb status with filters request: %w", err)
|
return nil, fmt.Errorf("error during tsdb status with filters request: %w", err)
|
||||||
}
|
}
|
||||||
|
@ -978,7 +977,7 @@ func ExportBlocks(sq *storage.SearchQuery, deadline searchutils.Deadline, f func
|
||||||
if err := vmstorage.CheckTimeRange(tr); err != nil {
|
if err := vmstorage.CheckTimeRange(tr); err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
tfss, err := setupTfss(tr, sq.TagFilterss, deadline)
|
tfss, err := setupTfss(tr, sq.TagFilterss, sq.MaxMetrics, deadline)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
@ -989,7 +988,7 @@ func ExportBlocks(sq *storage.SearchQuery, deadline searchutils.Deadline, f func
|
||||||
sr := getStorageSearch()
|
sr := getStorageSearch()
|
||||||
defer putStorageSearch(sr)
|
defer putStorageSearch(sr)
|
||||||
startTime := time.Now()
|
startTime := time.Now()
|
||||||
sr.Init(vmstorage.Storage, tfss, tr, *maxMetricsPerSearch, deadline.Deadline())
|
sr.Init(vmstorage.Storage, tfss, tr, sq.MaxMetrics, deadline.Deadline())
|
||||||
indexSearchDuration.UpdateDuration(startTime)
|
indexSearchDuration.UpdateDuration(startTime)
|
||||||
|
|
||||||
// Start workers that call f in parallel on available CPU cores.
|
// Start workers that call f in parallel on available CPU cores.
|
||||||
|
@ -1086,12 +1085,12 @@ func SearchMetricNames(sq *storage.SearchQuery, deadline searchutils.Deadline) (
|
||||||
if err := vmstorage.CheckTimeRange(tr); err != nil {
|
if err := vmstorage.CheckTimeRange(tr); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
tfss, err := setupTfss(tr, sq.TagFilterss, deadline)
|
tfss, err := setupTfss(tr, sq.TagFilterss, sq.MaxMetrics, deadline)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
mns, err := vmstorage.SearchMetricNames(tfss, tr, *maxMetricsPerSearch, deadline.Deadline())
|
mns, err := vmstorage.SearchMetricNames(tfss, tr, sq.MaxMetrics, deadline.Deadline())
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("cannot find metric names: %w", err)
|
return nil, fmt.Errorf("cannot find metric names: %w", err)
|
||||||
}
|
}
|
||||||
|
@ -1114,7 +1113,7 @@ func ProcessSearchQuery(sq *storage.SearchQuery, fetchData bool, deadline search
|
||||||
if err := vmstorage.CheckTimeRange(tr); err != nil {
|
if err := vmstorage.CheckTimeRange(tr); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
tfss, err := setupTfss(tr, sq.TagFilterss, deadline)
|
tfss, err := setupTfss(tr, sq.TagFilterss, sq.MaxMetrics, deadline)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
@ -1124,7 +1123,7 @@ func ProcessSearchQuery(sq *storage.SearchQuery, fetchData bool, deadline search
|
||||||
|
|
||||||
sr := getStorageSearch()
|
sr := getStorageSearch()
|
||||||
startTime := time.Now()
|
startTime := time.Now()
|
||||||
maxSeriesCount := sr.Init(vmstorage.Storage, tfss, tr, *maxMetricsPerSearch, deadline.Deadline())
|
maxSeriesCount := sr.Init(vmstorage.Storage, tfss, tr, sq.MaxMetrics, deadline.Deadline())
|
||||||
indexSearchDuration.UpdateDuration(startTime)
|
indexSearchDuration.UpdateDuration(startTime)
|
||||||
m := make(map[string][]blockRef, maxSeriesCount)
|
m := make(map[string][]blockRef, maxSeriesCount)
|
||||||
orderedMetricNames := make([]string, 0, maxSeriesCount)
|
orderedMetricNames := make([]string, 0, maxSeriesCount)
|
||||||
|
@ -1227,7 +1226,7 @@ type blockRef struct {
|
||||||
addr tmpBlockAddr
|
addr tmpBlockAddr
|
||||||
}
|
}
|
||||||
|
|
||||||
func setupTfss(tr storage.TimeRange, tagFilterss [][]storage.TagFilter, deadline searchutils.Deadline) ([]*storage.TagFilters, error) {
|
func setupTfss(tr storage.TimeRange, tagFilterss [][]storage.TagFilter, maxMetrics int, deadline searchutils.Deadline) ([]*storage.TagFilters, error) {
|
||||||
tfss := make([]*storage.TagFilters, 0, len(tagFilterss))
|
tfss := make([]*storage.TagFilters, 0, len(tagFilterss))
|
||||||
for _, tagFilters := range tagFilterss {
|
for _, tagFilters := range tagFilterss {
|
||||||
tfs := storage.NewTagFilters()
|
tfs := storage.NewTagFilters()
|
||||||
|
@ -1235,13 +1234,13 @@ func setupTfss(tr storage.TimeRange, tagFilterss [][]storage.TagFilter, deadline
|
||||||
tf := &tagFilters[i]
|
tf := &tagFilters[i]
|
||||||
if string(tf.Key) == "__graphite__" {
|
if string(tf.Key) == "__graphite__" {
|
||||||
query := tf.Value
|
query := tf.Value
|
||||||
paths, err := vmstorage.SearchGraphitePaths(tr, query, *maxMetricsPerSearch, deadline.Deadline())
|
paths, err := vmstorage.SearchGraphitePaths(tr, query, maxMetrics, deadline.Deadline())
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("error when searching for Graphite paths for query %q: %w", query, err)
|
return nil, fmt.Errorf("error when searching for Graphite paths for query %q: %w", query, err)
|
||||||
}
|
}
|
||||||
if len(paths) >= *maxMetricsPerSearch {
|
if len(paths) >= maxMetrics {
|
||||||
return nil, fmt.Errorf("more than -search.maxUniqueTimeseries=%d time series match Graphite query %q; "+
|
return nil, fmt.Errorf("more than %d time series match Graphite query %q; "+
|
||||||
"either narrow down the query or increase -search.maxUniqueTimeseries command-line flag value", *maxMetricsPerSearch, query)
|
"either narrow down the query or increase the corresponding -search.max* command-line flag value", maxMetrics, query)
|
||||||
}
|
}
|
||||||
tfs.AddGraphiteQuery(query, paths, tf.IsNegative)
|
tfs.AddGraphiteQuery(query, paths, tf.IsNegative)
|
||||||
continue
|
continue
|
||||||
|
|
|
@@ -42,6 +42,12 @@ var (
 		"See also '-search.maxLookback' flag, which has the same meaning due to historical reasons")
 	maxStepForPointsAdjustment = flag.Duration("search.maxStepForPointsAdjustment", time.Minute, "The maximum step when /api/v1/query_range handler adjusts "+
 		"points with timestamps closer than -search.latencyOffset to the current time. The adjustment is needed because such points may contain incomplete data")
 
+	maxUniqueTimeseries = flag.Int("search.maxUniqueTimeseries", 300e3, "The maximum number of unique time series, which can be selected during /api/v1/query and /api/v1/query_range queries. This option allows limiting memory usage")
+	maxFederateSeries = flag.Int("search.maxFederateSeries", 300e3, "The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage")
+	maxExportSeries = flag.Int("search.maxExportSeries", 1e6, "The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage")
+	maxTSDBStatusSeries = flag.Int("search.maxTSDBStatusSeries", 1e6, "The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage")
+	maxSeriesLimit = flag.Int("search.maxSeries", 10e3, "The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage")
 )
 
 // Default step used if not set.
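The five new flags split the old single limit by endpoint. An assumed summary of the mapping, based on the handler changes below:

```go
// Assumed summary of how the new flags replace the old single limit
// (flags and defaults as registered above; mapping per the handlers below):
//
//   /api/v1/query, /api/v1/query_range     -> -search.maxUniqueTimeseries (300e3)
//   /federate                              -> -search.maxFederateSeries   (300e3)
//   /api/v1/export*                        -> -search.maxExportSeries     (1e6)
//   /api/v1/status/tsdb                    -> -search.maxTSDBStatusSeries (1e6)
//   /api/v1/series, labels, label values   -> -search.maxSeries           (10e3)
//   delete API                             -> 0 (no limit)
```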
@@ -78,7 +84,7 @@ func FederateHandler(startTime time.Time, w http.ResponseWriter, r *http.Request
 	if err != nil {
 		return err
 	}
-	sq := storage.NewSearchQuery(start, end, tagFilterss)
+	sq := storage.NewSearchQuery(start, end, tagFilterss, *maxFederateSeries)
 	rss, err := netstorage.ProcessSearchQuery(sq, true, deadline)
 	if err != nil {
 		return fmt.Errorf("cannot fetch data for %q: %w", sq, err)
@@ -135,7 +141,7 @@ func ExportCSVHandler(startTime time.Time, w http.ResponseWriter, r *http.Reques
 	if err != nil {
 		return err
 	}
-	sq := storage.NewSearchQuery(start, end, tagFilterss)
+	sq := storage.NewSearchQuery(start, end, tagFilterss, *maxExportSeries)
 	w.Header().Set("Content-Type", "text/csv; charset=utf-8")
 	bw := bufferedwriter.Get(w)
 	defer bufferedwriter.Put(bw)
@@ -232,7 +238,7 @@ func ExportNativeHandler(startTime time.Time, w http.ResponseWriter, r *http.Req
 	if err != nil {
 		return err
 	}
-	sq := storage.NewSearchQuery(start, end, tagFilterss)
+	sq := storage.NewSearchQuery(start, end, tagFilterss, *maxExportSeries)
 	w.Header().Set("Content-Type", "VictoriaMetrics/native")
 	bw := bufferedwriter.Get(w)
 	defer bufferedwriter.Put(bw)
@@ -383,7 +389,7 @@ func exportHandler(w http.ResponseWriter, matches []string, etfs [][]storage.Tag
 	}
 	tagFilterss = searchutils.JoinTagFilterss(tagFilterss, etfs)
 
-	sq := storage.NewSearchQuery(start, end, tagFilterss)
+	sq := storage.NewSearchQuery(start, end, tagFilterss, *maxExportSeries)
 	w.Header().Set("Content-Type", contentType)
 	bw := bufferedwriter.Get(w)
 	defer bufferedwriter.Put(bw)
@@ -484,7 +490,7 @@ func DeleteHandler(startTime time.Time, r *http.Request) error {
 		return err
 	}
 	ct := startTime.UnixNano() / 1e6
-	sq := storage.NewSearchQuery(0, ct, tagFilterss)
+	sq := storage.NewSearchQuery(0, ct, tagFilterss, 0)
 	deletedCount, err := netstorage.DeleteSeries(sq, deadline)
 	if err != nil {
 		return fmt.Errorf("cannot delete time series: %w", err)
@@ -597,7 +603,7 @@ func labelValuesWithMatches(labelName string, matches []string, etfs [][]storage
 	if len(tagFilterss) == 0 {
 		logger.Panicf("BUG: tagFilterss must be non-empty")
 	}
-	sq := storage.NewSearchQuery(start, end, tagFilterss)
+	sq := storage.NewSearchQuery(start, end, tagFilterss, *maxSeriesLimit)
 	m := make(map[string]struct{})
 	if end-start > 24*3600*1000 {
 		// It is cheaper to call SearchMetricNames on time ranges exceeding a day.
@@ -709,12 +715,12 @@ func TSDBStatusHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
 	}
 	var status *storage.TSDBStatus
 	if len(matches) == 0 && len(etfs) == 0 {
-		status, err = netstorage.GetTSDBStatusForDate(deadline, date, topN)
+		status, err = netstorage.GetTSDBStatusForDate(deadline, date, topN, *maxTSDBStatusSeries)
 		if err != nil {
 			return fmt.Errorf(`cannot obtain tsdb status for date=%d, topN=%d: %w`, date, topN, err)
 		}
 	} else {
-		status, err = tsdbStatusWithMatches(matches, etfs, date, topN, deadline)
+		status, err = tsdbStatusWithMatches(matches, etfs, date, topN, *maxTSDBStatusSeries, deadline)
 		if err != nil {
 			return fmt.Errorf("cannot obtain tsdb status with matches for date=%d, topN=%d: %w", date, topN, err)
 		}
@@ -729,7 +735,7 @@ func TSDBStatusHandler(startTime time.Time, w http.ResponseWriter, r *http.Reque
 	return nil
 }
 
-func tsdbStatusWithMatches(matches []string, etfs [][]storage.TagFilter, date uint64, topN int, deadline searchutils.Deadline) (*storage.TSDBStatus, error) {
+func tsdbStatusWithMatches(matches []string, etfs [][]storage.TagFilter, date uint64, topN, maxMetrics int, deadline searchutils.Deadline) (*storage.TSDBStatus, error) {
 	tagFilterss, err := getTagFilterssFromMatches(matches)
 	if err != nil {
 		return nil, err
@@ -740,7 +746,7 @@ func tsdbStatusWithMatches(matches []string, etfs [][]storage.TagFilter, date ui
 	}
 	start := int64(date*secsPerDay) * 1000
 	end := int64(date*secsPerDay+secsPerDay) * 1000
-	sq := storage.NewSearchQuery(start, end, tagFilterss)
+	sq := storage.NewSearchQuery(start, end, tagFilterss, maxMetrics)
 	status, err := netstorage.GetTSDBStatusWithFilters(deadline, sq, topN)
 	if err != nil {
 		return nil, err
@@ -835,7 +841,7 @@ func labelsWithMatches(matches []string, etfs [][]storage.TagFilter, start, end
 	if len(tagFilterss) == 0 {
 		logger.Panicf("BUG: tagFilterss must be non-empty")
 	}
-	sq := storage.NewSearchQuery(start, end, tagFilterss)
+	sq := storage.NewSearchQuery(start, end, tagFilterss, *maxSeriesLimit)
 	m := make(map[string]struct{})
 	if end-start > 24*3600*1000 {
 		// It is cheaper to call SearchMetricNames on time ranges exceeding a day.
@@ -933,7 +939,7 @@ func SeriesHandler(startTime time.Time, w http.ResponseWriter, r *http.Request)
 	if start >= end {
 		end = start + defaultStep
 	}
-	sq := storage.NewSearchQuery(start, end, tagFilterss)
+	sq := storage.NewSearchQuery(start, end, tagFilterss, *maxSeriesLimit)
 	if end-start > 24*3600*1000 {
 		// It is cheaper to call SearchMetricNames on time ranges exceeding a day.
 		mns, err := netstorage.SearchMetricNames(sq, deadline)
@@ -1080,6 +1086,7 @@ func QueryHandler(startTime time.Time, w http.ResponseWriter, r *http.Request) e
 		Start: start,
 		End: start,
 		Step: step,
+		MaxSeries: *maxUniqueTimeseries,
 		QuotedRemoteAddr: httpserver.GetQuotedRemoteAddr(r),
 		Deadline: deadline,
 		LookbackDelta: lookbackDelta,
@@ -1170,6 +1177,7 @@ func queryRangeHandler(startTime time.Time, w http.ResponseWriter, query string,
 		Start: start,
 		End: end,
 		Step: step,
+		MaxSeries: *maxUniqueTimeseries,
 		QuotedRemoteAddr: httpserver.GetQuotedRemoteAddr(r),
 		Deadline: deadline,
 		MayCache: mayCache,
@@ -93,6 +93,10 @@ type EvalConfig struct {
 	End int64
 	Step int64
 
+	// MaxSeries is the maximum number of time series, which can be scanned by the query.
+	// Zero means 'no limit'
+	MaxSeries int
+
 	// QuotedRemoteAddr contains quoted remote address.
 	QuotedRemoteAddr string
 
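Pulling the pieces together, the limit now flows from a vmselect flag into the storage search. A sketch assembled from non-contiguous lines of this diff (not a contiguous source excerpt):

```go
// In QueryHandler / queryRangeHandler: the per-endpoint flag lands in the config.
ec := &EvalConfig{
	Start:     start,
	End:       end,
	Step:      step,
	MaxSeries: *maxUniqueTimeseries, // default 300e3, per the flag above
	Deadline:  deadline,
}

// Later, in evalRollupFuncWithMetricExpr: the config's limit rides into the query.
sq := storage.NewSearchQuery(minTimestamp, ec.End, tfss, ec.MaxSeries)
rss, err := netstorage.ProcessSearchQuery(sq, true, ec.Deadline)
```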
@@ -113,12 +117,13 @@ type EvalConfig struct {
 	timestampsOnce sync.Once
 }
 
-// newEvalConfig returns new EvalConfig copy from src.
-func newEvalConfig(src *EvalConfig) *EvalConfig {
+// copyEvalConfig returns src copy.
+func copyEvalConfig(src *EvalConfig) *EvalConfig {
 	var ec EvalConfig
 	ec.Start = src.Start
 	ec.End = src.End
 	ec.Step = src.Step
+	ec.MaxSeries = src.MaxSeries
 	ec.Deadline = src.Deadline
 	ec.MayCache = src.MayCache
 	ec.LookbackDelta = src.LookbackDelta
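The rename also highlights why the added line matters: every `@`-modifier, offset, and subquery evaluation derives its config through this copy, so omitting `ec.MaxSeries` here would silently drop the limit on those paths. A minimal regression check, hypothetical and not part of this commit:

```go
// Hypothetical test fragment: ensure the limit survives the config copies
// used by offset/@/subquery evaluation.
func TestCopyEvalConfigKeepsMaxSeries(t *testing.T) {
	src := &EvalConfig{MaxSeries: 1000}
	if got := copyEvalConfig(src).MaxSeries; got != 1000 {
		t.Fatalf("MaxSeries lost in copy: got %d, want 1000", got)
	}
}
```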
@@ -575,7 +580,7 @@ func evalRollupFunc(ec *EvalConfig, funcName string, rf rollupFunc, expr metrics
 		return nil, fmt.Errorf("`@` modifier must return a single series; it returns %d series instead", len(tssAt))
 	}
 	atTimestamp := int64(tssAt[0].Values[0] * 1000)
-	ecNew := newEvalConfig(ec)
+	ecNew := copyEvalConfig(ec)
 	ecNew.Start = atTimestamp
 	ecNew.End = atTimestamp
 	tss, err := evalRollupFuncWithoutAt(ecNew, funcName, rf, expr, re, iafc)
@@ -602,7 +607,7 @@ func evalRollupFuncWithoutAt(ec *EvalConfig, funcName string, rf rollupFunc, exp
 	var offset int64
 	if re.Offset != nil {
 		offset = re.Offset.Duration(ec.Step)
-		ecNew = newEvalConfig(ecNew)
+		ecNew = copyEvalConfig(ecNew)
 		ecNew.Start -= offset
 		ecNew.End -= offset
 		// There is no need in calling AdjustStartEnd() on ecNew if ecNew.MayCache is set to true,
@@ -615,7 +620,7 @@ func evalRollupFuncWithoutAt(ec *EvalConfig, funcName string, rf rollupFunc, exp
 		// in order to obtain expected OHLC results.
 		// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/309#issuecomment-582113462
 		step := ecNew.Step
-		ecNew = newEvalConfig(ecNew)
+		ecNew = copyEvalConfig(ecNew)
 		ecNew.Start += step
 		ecNew.End += step
 		offset -= step
@@ -679,7 +684,7 @@ func evalRollupFuncWithSubquery(ec *EvalConfig, funcName string, rf rollupFunc,
 	}
 	window := re.Window.Duration(ec.Step)
 
-	ecSQ := newEvalConfig(ec)
+	ecSQ := copyEvalConfig(ec)
 	ecSQ.Start -= window + maxSilenceInterval + step
 	ecSQ.End += step
 	ecSQ.Step = step
@@ -834,7 +839,7 @@ func evalRollupFuncWithMetricExpr(ec *EvalConfig, funcName string, rf rollupFunc
 	} else {
 		minTimestamp -= ec.Step
 	}
-	sq := storage.NewSearchQuery(minTimestamp, ec.End, tfss)
+	sq := storage.NewSearchQuery(minTimestamp, ec.End, tfss, ec.MaxSeries)
 	rss, err := netstorage.ProcessSearchQuery(sq, true, ec.Deadline)
 	if err != nil {
 		return nil, err
@@ -61,6 +61,7 @@ func TestExecSuccess(t *testing.T) {
 		Start: start,
 		End: end,
 		Step: step,
+		MaxSeries: 1000,
 		Deadline: searchutils.NewDeadline(time.Now(), time.Minute, ""),
 		RoundDigits: 100,
 	}
@@ -7496,6 +7497,7 @@ func TestExecError(t *testing.T) {
 		Start: 1000,
 		End: 2000,
 		Step: 100,
+		MaxSeries: 1000,
 		Deadline: searchutils.NewDeadline(time.Now(), time.Minute, ""),
 		RoundDigits: 100,
 	}
@@ -1,12 +1,14 @@
 {
   "files": {
-    "main.css": "./static/css/main.098d452b.css",
-    "main.js": "./static/js/main.523bd341.js",
+    "main.css": "./static/css/main.d8362c27.css",
+    "main.js": "./static/js/main.1c66c512.js",
+    "static/js/362.1990b49e.chunk.js": "./static/js/362.1990b49e.chunk.js",
     "static/js/27.939f971b.chunk.js": "./static/js/27.939f971b.chunk.js",
+    "static/media/README.md": "./static/media/README.a3933343f0099d3929b4.md",
     "index.html": "./index.html"
   },
   "entrypoints": [
-    "static/css/main.098d452b.css",
-    "static/js/main.523bd341.js"
+    "static/css/main.d8362c27.css",
+    "static/js/main.1c66c512.js"
   ]
 }
@@ -1 +1 @@
-<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script defer="defer" src="./static/js/main.523bd341.js"></script><link href="./static/css/main.098d452b.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
+<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="./favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="VM-UI is a metric explorer for Victoria Metrics"/><link rel="apple-touch-icon" href="./apple-touch-icon.png"/><link rel="icon" type="image/png" sizes="32x32" href="./favicon-32x32.png"><link rel="manifest" href="./manifest.json"/><title>VM UI</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><script defer="defer" src="./static/js/main.1c66c512.js"></script><link href="./static/css/main.d8362c27.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div></body></html>
@@ -1 +1 @@
-body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif}code{font-family:source-code-pro,Menlo,Monaco,Consolas,Courier New,monospace}.MuiAccordionSummary-content{margin:0!important}.uplot,.uplot *,.uplot :after,.uplot :before{box-sizing:border-box}.uplot{font-family:system-ui,-apple-system,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji;line-height:1.5;width:-webkit-min-content;width:min-content}.u-title{font-size:18px;font-weight:700;text-align:center}.u-wrap{position:relative;-webkit-user-select:none;-ms-user-select:none;user-select:none}.u-over,.u-under{position:absolute}.u-under{overflow:hidden}.uplot canvas{display:block;height:100%;position:relative;width:100%}.u-axis{position:absolute}.u-legend{font-size:14px;margin:auto;text-align:center}.u-inline{display:block}.u-inline *{display:inline-block}.u-inline tr{margin-right:16px}.u-legend th{font-weight:600}.u-legend th>*{display:inline-block;vertical-align:middle}.u-legend .u-marker{background-clip:padding-box!important;height:1em;margin-right:4px;width:1em}.u-inline.u-live th:after{content:":";vertical-align:middle}.u-inline:not(.u-live) .u-value{display:none}.u-series>*{padding:4px}.u-series th{cursor:pointer}.u-legend .u-off>*{opacity:.3}.u-select{background:rgba(0,0,0,.07)}.u-cursor-x,.u-cursor-y,.u-select{pointer-events:none;position:absolute}.u-cursor-x,.u-cursor-y{left:0;top:0;will-change:transform;z-index:100}.u-hz .u-cursor-x,.u-vt .u-cursor-y{border-right:1px dashed #607d8b;height:100%}.u-hz .u-cursor-y,.u-vt .u-cursor-x{border-bottom:1px dashed #607d8b;width:100%}.u-cursor-pt{background-clip:padding-box!important;border:0 solid;border-radius:50%;left:0;pointer-events:none;position:absolute;top:0;will-change:transform;z-index:100}.u-axis.u-off,.u-cursor-pt.u-off,.u-cursor-x.u-off,.u-cursor-y.u-off,.u-select.u-off,.u-tooltip{display:none}.u-tooltip{grid-gap:12px;word-wrap:break-word;background:rgba(57,57,57,.9);border-radius:4px;color:#fff;font-family:monospace;font-size:10px;font-weight:500;line-height:1.4em;max-width:300px;padding:8px;pointer-events:none;position:absolute;z-index:100}.u-tooltip-data{align-items:center;display:flex;flex-wrap:wrap;font-size:11px;line-height:150%}.u-tooltip-data__value{font-weight:700;padding:4px}.u-tooltip__info{grid-gap:4px;display:grid}.u-tooltip__marker{height:12px;margin-right:4px;width:12px}.legendWrapper{grid-gap:20px;cursor:default;display:grid;grid-template-columns:repeat(auto-fit,minmax(400px,1fr));margin-top:20px;position:relative}.legendGroup{margin-bottom:24px}.legendGroupTitle{align-items:center;display:grid;font-size:11px;grid-template-columns:43px auto;padding:10px}.legendGroupQuery{grid-column:1/3;opacity:.6}.legendGroupLine{margin-right:10px}.legendItem{grid-gap:6px;align-items:start;background-color:#fff;cursor:pointer;display:inline-grid;grid-template-columns:auto auto;justify-content:start;padding:7px 50px 7px 10px;transition:.2s ease}.legendItemHide{opacity:.5;text-decoration:line-through}.legendItem:hover{background-color:rgba(0,0,0,.1)}.legendMarker{border-style:solid;border-width:2px;box-sizing:border-box;height:12px;transition:.2s ease;width:12px}.legendLabel{font-size:11px;font-weight:400;line-height:12px}.legendFreeFields{cursor:pointer;padding:3px}.legendFreeFields:hover{text-decoration:underline}.legendFreeFields:not(:last-child):after{content:","}.legendWrapperHotkey{align-items:center;display:flex;font-size:11px}.legendWrapperHotkey p{margin-right:20px}.legendWrapperHotkey code{word-wrap:break-word;background-color:#f2f2f2;border:1px solid #dedede;border-radius:2px;color:#0a0a0a;display:inline;font-size:10px;font-weight:400;max-width:100%;padding:4px 6px}
+body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif}code{font-family:source-code-pro,Menlo,Monaco,Consolas,Courier New,monospace}.MuiAccordionSummary-content{margin:0!important}.uplot,.uplot *,.uplot :after,.uplot :before{box-sizing:border-box}.uplot{font-family:system-ui,-apple-system,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji;line-height:1.5;width:-webkit-min-content;width:min-content}.u-title{font-size:18px;font-weight:700;text-align:center}.u-wrap{position:relative;-webkit-user-select:none;-ms-user-select:none;user-select:none}.u-over,.u-under{position:absolute}.u-under{overflow:hidden}.uplot canvas{display:block;height:100%;position:relative;width:100%}.u-axis{position:absolute}.u-legend{font-size:14px;margin:auto;text-align:center}.u-inline{display:block}.u-inline *{display:inline-block}.u-inline tr{margin-right:16px}.u-legend th{font-weight:600}.u-legend th>*{display:inline-block;vertical-align:middle}.u-legend .u-marker{background-clip:padding-box!important;height:1em;margin-right:4px;width:1em}.u-inline.u-live th:after{content:":";vertical-align:middle}.u-inline:not(.u-live) .u-value{display:none}.u-series>*{padding:4px}.u-series th{cursor:pointer}.u-legend .u-off>*{opacity:.3}.u-select{background:rgba(0,0,0,.07)}.u-cursor-x,.u-cursor-y,.u-select{pointer-events:none;position:absolute}.u-cursor-x,.u-cursor-y{left:0;top:0;will-change:transform;z-index:100}.u-hz .u-cursor-x,.u-vt .u-cursor-y{border-right:1px dashed #607d8b;height:100%}.u-hz .u-cursor-y,.u-vt .u-cursor-x{border-bottom:1px dashed #607d8b;width:100%}.u-cursor-pt{background-clip:padding-box!important;border:0 solid;border-radius:50%;left:0;pointer-events:none;position:absolute;top:0;will-change:transform;z-index:100}.u-axis.u-off,.u-cursor-pt.u-off,.u-cursor-x.u-off,.u-cursor-y.u-off,.u-select.u-off,.u-tooltip{display:none}.u-tooltip{grid-gap:12px;word-wrap:break-word;background:rgba(57,57,57,.9);border-radius:4px;color:#fff;font-family:monospace;font-size:10px;font-weight:500;line-height:1.4em;max-width:300px;padding:8px;pointer-events:none;position:absolute;z-index:100}.u-tooltip-data{align-items:center;display:flex;flex-wrap:wrap;font-size:11px;line-height:150%}.u-tooltip-data__value{font-weight:700;padding:4px}.u-tooltip__info{grid-gap:4px;display:grid}.u-tooltip__marker{height:12px;margin-right:4px;width:12px}.legendWrapper{cursor:default;display:flex;flex-wrap:wrap;margin-top:20px;position:relative}.legendGroup{margin:0 12px 24px 0}.legendGroupTitle{align-items:center;display:grid;font-size:11px;grid-template-columns:43px auto;padding:10px}.legendGroupQuery{grid-column:1/3;opacity:.6}.legendGroupLine{margin-right:10px}.legendItem{grid-gap:6px;align-items:start;background-color:#fff;cursor:pointer;display:grid;grid-template-columns:auto auto;justify-content:start;padding:7px 50px 7px 10px;transition:.2s ease}.legendItemHide{opacity:.5;text-decoration:line-through}.legendItem:hover{background-color:rgba(0,0,0,.1)}.legendMarker{border-style:solid;border-width:2px;box-sizing:border-box;height:12px;transition:.2s ease;width:12px}.legendLabel{font-size:11px;font-weight:400;line-height:12px}.legendFreeFields{cursor:pointer;padding:3px}.legendFreeFields:hover{text-decoration:underline}.legendFreeFields:not(:last-child):after{content:","}.legendWrapperHotkey{align-items:center;display:flex;font-size:11px}.legendWrapperHotkey p{margin-right:20px}.legendWrapperHotkey code{word-wrap:break-word;background-color:#f2f2f2;border:1px solid #dedede;border-radius:2px;color:#0a0a0a;display:inline;font-size:10px;font-weight:400;max-width:100%;padding:4px 6px}.panelDescription ul{line-height:2.2}.panelDescription a{color:#fff}.panelDescription code{background-color:rgba(0,0,0,.3);border-radius:2px;color:#fff;display:inline;font-size:inherit;font-weight:400;max-width:100%;padding:4px 6px}
app/vmselect/vmui/static/js/362.1990b49e.chunk.js: 1 changed line (new file)
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkvmui=self.webpackChunkvmui||[]).push([[362],{8362:function(e,s,u){e.exports=u.p+"static/media/README.a3933343f0099d3929b4.md"}}]);
app/vmselect/vmui/static/js/main.1c66c512.js: 2 changed lines (new file)
File diff suppressed because one or more lines are too long
@@ -6,7 +6,29 @@
  * @license MIT
  */
 
-/** @license MUI v5.4.4
+/**
+ * React Router DOM v6.2.2
+ *
+ * Copyright (c) Remix Software Inc.
+ *
+ * This source code is licensed under the MIT license found in the
+ * LICENSE.md file in the root directory of this source tree.
+ *
+ * @license MIT
+ */
+
+/**
+ * React Router v6.2.2
+ *
+ * Copyright (c) Remix Software Inc.
+ *
+ * This source code is licensed under the MIT license found in the
+ * LICENSE.md file in the root directory of this source tree.
+ *
+ * @license MIT
+ */
+
+/** @license MUI v5.5.2
  *
  * This source code is licensed under the MIT license found in the
  * LICENSE file in the root directory of this source tree.
File diff suppressed because one or more lines are too long
@@ -0,0 +1,76 @@
+### Configuration options
+
+<br/>
+DashboardSettings:
+
+| Name | Type | Description |
+|:----------|:----------------:|---------------------------:|
+| rows* | `DashboardRow[]` | Sections containing panels |
+| title | `string` | Dashboard title |
+
+<br/>
+DashboardRow:
+
+| Name | Type | Description |
+|:-----------|:-----------------:|---------------------------:|
+| panels* | `PanelSettings[]` | List of panels (charts) |
+| title | `string` | Row title |
+
+<br/>
+PanelSettings:
+
+| Name | Type | Description |
+|:---------------|:----------:|----------------------------------------------------:|
+| expr* | `string[]` | Data source queries |
+| title | `string` | Panel title |
+| description | `string` | Additional information about the panel |
+| unit | `string` | Y-axis unit |
+| showLegend | `boolean` | If `false`, the legend is hidden. Default value - `true` |
+
+---
+
+### Example json
+
+```json
+{
+  "title": "Example",
+  "rows": [
+    {
+      "title": "Performance",
+      "panels": [
+        {
+          "title": "Query duration",
+          "description": "The less time it takes is better.\n* `*` - unsupported query path\n* `/write` - insert into VM\n* `/metrics` - query VM system metrics\n* `/query` - query instant values\n* `/query_range` - query over a range of time\n* `/series` - match a certain label set\n* `/label/{}/values` - query a list of label values (variables mostly)",
+          "unit": "ms",
+          "showLegend": false,
+          "expr": [
+            "max(vm_request_duration_seconds{quantile=~\"(0.5|0.99)\"}) by (path, quantile) > 0"
+          ]
+        },
+        {
+          "title": "Concurrent flushes on disk",
+          "description": "Shows how many ongoing insertions (not API /write calls) on disk are taking place, where:\n* `max` - equal to number of CPUs;\n* `current` - current number of goroutines busy with inserting rows into underlying storage.\n\nEvery successful API /write call results into flush on disk. However, these two actions are separated and controlled via different concurrency limiters. The `max` on this panel can't be changed and always equal to number of CPUs. \n\nWhen `current` hits `max` constantly, it means storage is overloaded and requires more CPU.\n\n",
+          "expr": [
+            "sum(vm_concurrent_addrows_capacity)",
+            "sum(vm_concurrent_addrows_current)"
+          ]
+        }
+      ]
+    },
+    {
+      "title": "Troubleshooting",
+      "panels": [
+        {
+          "title": "Churn rate",
+          "description": "Shows the rate and total number of new series created over last 24h.\n\nHigh churn rate tightly connected with database performance and may result in unexpected OOM's or slow queries. It is recommended to always keep an eye on this metric to avoid unexpected cardinality \"explosions\".\n\nThe higher churn rate is, the more resources required to handle it. Consider to keep the churn rate as low as possible.\n\nGood references to read:\n* https://www.robustperception.io/cardinality-is-key\n* https://www.robustperception.io/using-tsdb-analyze-to-investigate-churn-and-cardinality",
+          "expr": [
+            "sum(rate(vm_new_timeseries_created_total[5m]))",
+            "sum(increase(vm_new_timeseries_created_total[24h]))"
+          ]
+        }
+      ]
+    }
+  ]
+}
+```
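For readers coming from the Go side of the repository, a hypothetical Go mirror of the schema documented above; the real consumer is the TypeScript vmui app, so these types are derived from the tables for illustration only:

```go
// Hypothetical Go rendering of the dashboard schema above; field names and
// required/optional markers follow the tables, not any real code in vmui.
type PanelSettings struct {
	Expr        []string `json:"expr"`                  // required: data source queries
	Title       string   `json:"title,omitempty"`       // panel title
	Description string   `json:"description,omitempty"` // additional information about the panel
	Unit        string   `json:"unit,omitempty"`        // Y-axis unit
	ShowLegend  *bool    `json:"showLegend,omitempty"`  // nil or true: legend shown; false: hidden
}

type DashboardRow struct {
	Panels []PanelSettings `json:"panels"`          // required: list of panels (charts)
	Title  string          `json:"title,omitempty"` // row title
}

type DashboardSettings struct {
	Rows  []DashboardRow `json:"rows"`            // required: sections containing panels
	Title string         `json:"title,omitempty"` // dashboard title
}
```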
@@ -230,17 +230,17 @@ func SearchTagEntries(maxTagKeys, maxTagValues int, deadline uint64) ([]storage.
 }
 
 // GetTSDBStatusForDate returns TSDB status for the given date.
-func GetTSDBStatusForDate(date uint64, topN int, deadline uint64) (*storage.TSDBStatus, error) {
+func GetTSDBStatusForDate(date uint64, topN, maxMetrics int, deadline uint64) (*storage.TSDBStatus, error) {
 	WG.Add(1)
-	status, err := Storage.GetTSDBStatusWithFiltersForDate(nil, date, topN, deadline)
+	status, err := Storage.GetTSDBStatusWithFiltersForDate(nil, date, topN, maxMetrics, deadline)
 	WG.Done()
 	return status, err
 }
 
 // GetTSDBStatusWithFiltersForDate returns TSDB status for given filters on the given date.
-func GetTSDBStatusWithFiltersForDate(tfss []*storage.TagFilters, date uint64, topN int, deadline uint64) (*storage.TSDBStatus, error) {
+func GetTSDBStatusWithFiltersForDate(tfss []*storage.TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*storage.TSDBStatus, error) {
 	WG.Add(1)
-	status, err := Storage.GetTSDBStatusWithFiltersForDate(tfss, date, topN, deadline)
+	status, err := Storage.GetTSDBStatusWithFiltersForDate(tfss, date, topN, maxMetrics, deadline)
 	WG.Done()
 	return status, err
 }
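Note that the unfiltered status call is just the filtered one with nil filters, so both now share the limit. A hedged sketch of the resulting call chain; the helper name and the explicit constant are illustrative, the real caller passes *maxTSDBStatusSeries:

```go
// Sketch assembled from this hunk and the vmselect changes above; the 1e6
// constant mirrors the -search.maxTSDBStatusSeries default.
func tsdbStatusForDate(date uint64, topN int, deadline searchutils.Deadline) (*storage.TSDBStatus, error) {
	// nil tag filters reuse the filtered implementation inside vmstorage.
	status, err := vmstorage.GetTSDBStatusForDate(date, topN, 1e6, deadline.Deadline())
	if err != nil {
		return nil, fmt.Errorf("error during tsdb status request: %w", err)
	}
	return status, nil
}
```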
@@ -684,6 +684,10 @@ func registerStorageMetrics() {
 	metrics.NewGauge(`vm_cache_entries{type="storage/regexps"}`, func() float64 {
 		return float64(storage.RegexpCacheSize())
 	})
+	metrics.NewGauge(`vm_cache_entries{type="storage/regexpPrefixes"}`, func() float64 {
+		return float64(storage.RegexpPrefixesCacheSize())
+	})
+
 	metrics.NewGauge(`vm_cache_entries{type="storage/prefetchedMetricIDs"}`, func() float64 {
 		return float64(m().PrefetchedMetricIDsSize)
 	})
@ -718,6 +722,12 @@ func registerStorageMetrics() {
|
||||||
metrics.NewGauge(`vm_cache_size_bytes{type="indexdb/tagFilters"}`, func() float64 {
|
metrics.NewGauge(`vm_cache_size_bytes{type="indexdb/tagFilters"}`, func() float64 {
|
||||||
return float64(idbm().TagFiltersCacheSizeBytes)
|
return float64(idbm().TagFiltersCacheSizeBytes)
|
||||||
})
|
})
|
||||||
|
metrics.NewGauge(`vm_cache_size_bytes{type="storage/regexps"}`, func() float64 {
|
||||||
|
return float64(storage.RegexpCacheSizeBytes())
|
||||||
|
})
|
||||||
|
metrics.NewGauge(`vm_cache_size_bytes{type="storage/regexpPrefixes"}`, func() float64 {
|
||||||
|
return float64(storage.RegexpPrefixesCacheSizeBytes())
|
||||||
|
})
|
||||||
metrics.NewGauge(`vm_cache_size_bytes{type="storage/prefetchedMetricIDs"}`, func() float64 {
|
metrics.NewGauge(`vm_cache_size_bytes{type="storage/prefetchedMetricIDs"}`, func() float64 {
|
||||||
return float64(m().PrefetchedMetricIDsSizeBytes)
|
return float64(m().PrefetchedMetricIDsSizeBytes)
|
||||||
})
|
})
|
||||||
|
@ -743,6 +753,12 @@ func registerStorageMetrics() {
|
||||||
metrics.NewGauge(`vm_cache_size_max_bytes{type="indexdb/tagFilters"}`, func() float64 {
|
metrics.NewGauge(`vm_cache_size_max_bytes{type="indexdb/tagFilters"}`, func() float64 {
|
||||||
return float64(idbm().TagFiltersCacheSizeMaxBytes)
|
return float64(idbm().TagFiltersCacheSizeMaxBytes)
|
||||||
})
|
})
|
||||||
|
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/regexps"}`, func() float64 {
|
||||||
|
return float64(storage.RegexpCacheMaxSizeBytes())
|
||||||
|
})
|
||||||
|
metrics.NewGauge(`vm_cache_size_max_bytes{type="storage/regexpPrefixes"}`, func() float64 {
|
||||||
|
return float64(storage.RegexpPrefixesCacheMaxSizeBytes())
|
||||||
|
})
|
||||||
|
|
||||||
metrics.NewGauge(`vm_cache_requests_total{type="storage/tsid"}`, func() float64 {
|
metrics.NewGauge(`vm_cache_requests_total{type="storage/tsid"}`, func() float64 {
|
||||||
return float64(m().TSIDCacheRequests)
|
return float64(m().TSIDCacheRequests)
|
||||||
|
@ -768,6 +784,9 @@ func registerStorageMetrics() {
|
||||||
metrics.NewGauge(`vm_cache_requests_total{type="storage/regexps"}`, func() float64 {
|
metrics.NewGauge(`vm_cache_requests_total{type="storage/regexps"}`, func() float64 {
|
||||||
return float64(storage.RegexpCacheRequests())
|
return float64(storage.RegexpCacheRequests())
|
||||||
})
|
})
|
||||||
|
metrics.NewGauge(`vm_cache_requests_total{type="storage/regexpPrefixes"}`, func() float64 {
|
||||||
|
return float64(storage.RegexpPrefixesCacheRequests())
|
||||||
|
})
|
||||||
|
|
||||||
metrics.NewGauge(`vm_cache_misses_total{type="storage/tsid"}`, func() float64 {
|
metrics.NewGauge(`vm_cache_misses_total{type="storage/tsid"}`, func() float64 {
|
||||||
return float64(m().TSIDCacheMisses)
|
return float64(m().TSIDCacheMisses)
|
||||||
|
@ -793,6 +812,9 @@ func registerStorageMetrics() {
|
||||||
metrics.NewGauge(`vm_cache_misses_total{type="storage/regexps"}`, func() float64 {
|
metrics.NewGauge(`vm_cache_misses_total{type="storage/regexps"}`, func() float64 {
|
||||||
return float64(storage.RegexpCacheMisses())
|
return float64(storage.RegexpCacheMisses())
|
||||||
})
|
})
|
||||||
|
metrics.NewGauge(`vm_cache_misses_total{type="storage/regexpPrefixes"}`, func() float64 {
|
||||||
|
return float64(storage.RegexpPrefixesCacheMisses())
|
||||||
|
})
|
||||||
|
|
||||||
metrics.NewGauge(`vm_deleted_metrics_total{type="indexdb"}`, func() float64 {
|
metrics.NewGauge(`vm_deleted_metrics_total{type="indexdb"}`, func() float64 {
|
||||||
return float64(idbm().DeletedMetricsCount)
|
return float64(idbm().DeletedMetricsCount)
|
||||||
|
|
app/vmui/packages/vmui/package-lock.json: 2519 lines changed (generated file; diff suppressed because it is too large).
@@ -17,17 +17,22 @@
     "@types/lodash.debounce": "^4.0.6",
     "@types/lodash.get": "^4.4.6",
     "@types/lodash.throttle": "^4.1.6",
+    "@types/marked": "^4.0.2",
     "@types/node": "^17.0.21",
     "@types/qs": "^6.9.7",
     "@types/react": "^17.0.41",
     "@types/react-dom": "^17.0.14",
     "@types/react-measure": "^2.0.8",
+    "@types/react-router-dom": "^5.3.3",
+    "@types/webpack-env": "^1.16.3",
     "dayjs": "^1.11.0",
     "lodash.debounce": "^4.0.8",
     "lodash.get": "^4.4.2",
     "lodash.throttle": "^4.1.1",
+    "marked": "^4.0.12",
     "preact": "^10.6.6",
     "qs": "^6.10.3",
+    "react-router-dom": "^6.2.1",
     "typescript": "~4.6.2",
     "uplot": "^1.6.19",
     "web-vitals": "^2.1.4"
@@ -1,6 +1,6 @@
 import React, {FC} from "preact/compat";
+import {HashRouter, Route, Routes} from "react-router-dom";
 import {SnackbarProvider} from "./contexts/Snackbar";
-import HomeLayout from "./components/Home/HomeLayout";
 import {StateProvider} from "./state/common/StateContext";
 import {AuthStateProvider} from "./state/auth/AuthStateContext";
 import {GraphStateProvider} from "./state/graph/GraphStateContext";

@@ -9,6 +9,11 @@ import { ThemeProvider, StyledEngineProvider } from "@mui/material/styles";
 import CssBaseline from "@mui/material/CssBaseline";
 import LocalizationProvider from "@mui/lab/LocalizationProvider";
 import DayjsUtils from "@date-io/dayjs";
+import router from "./router/index";
+
+import CustomPanel from "./components/CustomPanel/CustomPanel";
+import HomeLayout from "./components/Home/HomeLayout";
+import DashboardsLayout from "./components/PredefinedPanels/DashboardsLayout";


 const App: FC = () => {

@@ -22,7 +27,14 @@ const App: FC = () => {
 <AuthStateProvider> {/* Auth related info - optionally persisted to Local Storage */}
   <GraphStateProvider> {/* Graph settings */}
     <SnackbarProvider> {/* Display various snackbars */}
-      <HomeLayout/>
+      <HashRouter>
+        <Routes>
+          <Route path={"/"} element={<HomeLayout/>}>
+            <Route path={router.home} element={<CustomPanel/>}/>
+            <Route path={router.dashboards} element={<DashboardsLayout/>}/>
+          </Route>
+        </Routes>
+      </HashRouter>
     </SnackbarProvider>
   </GraphStateProvider>
 </AuthStateProvider>
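Note: the `router` module imported above is not shown in this excerpt; only its `home` and `dashboards` keys are confirmed by the `<Route>` usage. A hypothetical sketch (the concrete path strings are assumptions):

```typescript
// Hypothetical sketch of src/router/index.ts: only the `home` and
// `dashboards` keys are confirmed by the <Route> usage above; the actual
// path values are assumptions.
const router = {
  home: "/",
  dashboards: "/dashboards",
};

export default router;
```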
@@ -3,29 +3,31 @@ import {ChangeEvent} from "react";
 import Box from "@mui/material/Box";
 import FormControlLabel from "@mui/material/FormControlLabel";
 import TextField from "@mui/material/TextField";
-import {useGraphDispatch, useGraphState} from "../../../../state/graph/GraphStateContext";
 import debounce from "lodash.debounce";
 import BasicSwitch from "../../../../theme/switch";
+import {AxisRange, YaxisState} from "../../../../state/graph/reducer";

-const AxesLimitsConfigurator: FC = () => {
-
-  const { yaxis } = useGraphState();
-  const graphDispatch = useGraphDispatch();
+interface AxesLimitsConfiguratorProps {
+  yaxis: YaxisState,
+  setYaxisLimits: (limits: AxisRange) => void,
+  toggleEnableLimits: () => void
+}
+
+const AxesLimitsConfigurator: FC<AxesLimitsConfiguratorProps> = ({yaxis, setYaxisLimits, toggleEnableLimits}) => {
+
   const axes = useMemo(() => Object.keys(yaxis.limits.range), [yaxis.limits.range]);

-  const onChangeYaxisLimits = () => { graphDispatch({type: "TOGGLE_ENABLE_YAXIS_LIMITS"}); };
-
   const onChangeLimit = (e: ChangeEvent<HTMLInputElement | HTMLTextAreaElement>, axis: string, index: number) => {
     const newLimits = yaxis.limits.range;
     newLimits[axis][index] = +e.target.value;
     if (newLimits[axis][0] === newLimits[axis][1] || newLimits[axis][0] > newLimits[axis][1]) return;
-    graphDispatch({type: "SET_YAXIS_LIMITS", payload: newLimits});
+    setYaxisLimits(newLimits);
   };
   const debouncedOnChangeLimit = useCallback(debounce(onChangeLimit, 500), [yaxis.limits.range]);

   return <Box display="grid" alignItems="center" gap={2}>
     <FormControlLabel
-      control={<BasicSwitch checked={yaxis.limits.enable} onChange={onChangeYaxisLimits}/>}
+      control={<BasicSwitch checked={yaxis.limits.enable} onChange={toggleEnableLimits}/>}
       label="Fix the limits for y-axis"
     />
     <Box display="grid" alignItems="center" gap={2}>
@@ -10,6 +10,7 @@ import Typography from "@mui/material/Typography";
 import makeStyles from "@mui/styles/makeStyles";
 import CloseIcon from "@mui/icons-material/Close";
 import ClickAwayListener from "@mui/material/ClickAwayListener";
+import {AxisRange, YaxisState} from "../../../../state/graph/reducer";

 const useStyles = makeStyles({
   popover: {

@@ -35,7 +36,13 @@ const useStyles = makeStyles({

 const title = "Axes Settings";

-const GraphSettings: FC = () => {
+interface GraphSettingsProps {
+  yaxis: YaxisState,
+  setYaxisLimits: (limits: AxisRange) => void,
+  toggleEnableLimits: () => void
+}
+
+const GraphSettings: FC<GraphSettingsProps> = ({yaxis, setYaxisLimits, toggleEnableLimits}) => {
   const [anchorEl, setAnchorEl] = useState<HTMLButtonElement | null>(null);
   const open = Boolean(anchorEl);

@@ -61,7 +68,11 @@ const GraphSettings: FC = () => {
       </IconButton>
     </div>
     <Box className={classes.popoverBody}>
-      <AxesLimitsConfigurator/>
+      <AxesLimitsConfigurator
+        yaxis={yaxis}
+        setYaxisLimits={setYaxisLimits}
+        toggleEnableLimits={toggleEnableLimits}
+      />
     </Box>
   </Paper>
 </ClickAwayListener>
@@ -5,10 +5,14 @@ import {saveToStorage} from "../../../../utils/storage";
 import {useAppDispatch, useAppState} from "../../../../state/common/StateContext";
 import BasicSwitch from "../../../../theme/switch";
 import StepConfigurator from "./StepConfigurator";
+import {useGraphDispatch, useGraphState} from "../../../../state/graph/GraphStateContext";

 const AdditionalSettings: FC = () => {

-  const {queryControls: {autocomplete, nocache}} = useAppState();
+  const {customStep} = useGraphState();
+  const graphDispatch = useGraphDispatch();
+
+  const {queryControls: {autocomplete, nocache}, time: {period: {step}}} = useAppState();
   const dispatch = useAppDispatch();

   const onChangeAutocomplete = () => {

@@ -33,7 +37,13 @@ const AdditionalSettings: FC = () => {
     />
   </Box>
   <Box ml={2}>
-    <StepConfigurator/>
+    <StepConfigurator defaultStep={step} customStepEnable={customStep.enable}
+      setStep={(value) => {
+        graphDispatch({type: "SET_CUSTOM_STEP", payload: value});
+      }}
+      toggleEnableStep={() => {
+        graphDispatch({type: "TOGGLE_CUSTOM_STEP"});
+      }}/>
   </Box>
 </Box>;
};
@@ -0,0 +1,60 @@
+import React, {FC, useEffect, useState} from "preact/compat";
+import {ChangeEvent} from "react";
+import Box from "@mui/material/Box";
+import FormControlLabel from "@mui/material/FormControlLabel";
+import TextField from "@mui/material/TextField";
+import BasicSwitch from "../../../../theme/switch";
+
+interface StepConfiguratorProps {
+  defaultStep?: number,
+  customStepEnable: boolean,
+  setStep: (step: number) => void,
+  toggleEnableStep: () => void
+}
+
+const StepConfigurator: FC<StepConfiguratorProps> = ({
+  defaultStep, customStepEnable, setStep, toggleEnableStep
+}) => {
+
+  const [customStep, setCustomStep] = useState(defaultStep);
+  const [error, setError] = useState(false);
+
+  useEffect(() => {
+    setStep(customStep || 1);
+  }, [customStep]);
+
+  const onChangeStep = (e: ChangeEvent<HTMLInputElement | HTMLTextAreaElement>) => {
+    if (!customStepEnable) return;
+    const value = +e.target.value;
+    if (value > 0) {
+      setCustomStep(value);
+      setError(false);
+    } else {
+      setError(true);
+    }
+  };
+
+  const onChangeEnableStep = () => {
+    setError(false);
+    toggleEnableStep();
+  };
+
+  return <Box display="grid" gridTemplateColumns="auto 120px" alignItems="center">
+    <FormControlLabel
+      control={<BasicSwitch checked={customStepEnable} onChange={onChangeEnableStep}/>}
+      label="Override step value"
+    />
+    <TextField
+      label="Step value"
+      type="number"
+      size="small"
+      variant="outlined"
+      value={customStep}
+      disabled={!customStepEnable}
+      error={error}
+      helperText={error ? "step is out of allowed range" : " "}
+      onChange={onChangeStep}/>
+  </Box>;
+};
+
+export default StepConfigurator;
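Note: the new StepConfigurator is fully controlled: it owns only the text-field value and its validation flag, while the enable switch state and the effective step live in the parent. A condensed usage sketch, mirroring how the AdditionalSettings diff above wires it to the graph reducer (the `StepControl` wrapper name is hypothetical):

```tsx
// Condensed from the AdditionalSettings diff above; the wrapper component
// name is hypothetical. The parent owns the custom-step state and passes
// plain callbacks down to the controlled StepConfigurator.
import React, {FC} from "preact/compat";
import StepConfigurator from "./StepConfigurator";
import {useGraphDispatch, useGraphState} from "../../../../state/graph/GraphStateContext";
import {useAppState} from "../../../../state/common/StateContext";

const StepControl: FC = () => {
  const {customStep} = useGraphState();
  const graphDispatch = useGraphDispatch();
  const {time: {period: {step}}} = useAppState();

  // defaultStep comes from the selected time period; enabling the override
  // and updating the value are both delegated to the graph reducer.
  return <StepConfigurator
    defaultStep={step}
    customStepEnable={customStep.enable}
    setStep={(value) => graphDispatch({type: "SET_CUSTOM_STEP", payload: value})}
    toggleEnableStep={() => graphDispatch({type: "TOGGLE_CUSTOM_STEP"})}
  />;
};

export default StepControl;
```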
@@ -10,6 +10,7 @@ import KeyboardArrowDownIcon from "@mui/icons-material/KeyboardArrowDown";
 import List from "@mui/material/List";
 import ListItem from "@mui/material/ListItem";
 import ListItemText from "@mui/material/ListItemText";
+import {useLocation} from "react-router-dom";

 interface AutoRefreshOption {
   seconds: number

@@ -36,6 +37,12 @@ export const ExecutionControls: FC = () => {
   const dispatch = useAppDispatch();
   const {queryControls: {autoRefresh}} = useAppState();

+  const location = useLocation();
+
+  useEffect(() => {
+    if (autoRefresh) dispatch({type: "TOGGLE_AUTOREFRESH"});
+  }, [location]);
+
   const [selectedDelay, setSelectedDelay] = useState<AutoRefreshOption>(delayOptions[0]);

   const handleChange = (d: AutoRefreshOption) => {
@@ -0,0 +1,68 @@
+import React, {FC} from "preact/compat";
+import Alert from "@mui/material/Alert";
+import Box from "@mui/material/Box";
+import GraphView from "./Views/GraphView";
+import TableView from "./Views/TableView";
+import {useAppDispatch, useAppState} from "../../state/common/StateContext";
+import QueryConfigurator from "./Configurator/Query/QueryConfigurator";
+import {useFetchQuery} from "../../hooks/useFetchQuery";
+import JsonView from "./Views/JsonView";
+import {DisplayTypeSwitch} from "./Configurator/DisplayTypeSwitch";
+import GraphSettings from "./Configurator/Graph/GraphSettings";
+import {useGraphDispatch, useGraphState} from "../../state/graph/GraphStateContext";
+import {AxisRange} from "../../state/graph/reducer";
+import Spinner from "../common/Spinner";
+
+const CustomPanel: FC = () => {
+
+  const {displayType, time: {period}, query} = useAppState();
+  const { customStep, yaxis } = useGraphState();
+
+  const dispatch = useAppDispatch();
+  const graphDispatch = useGraphDispatch();
+
+  const setYaxisLimits = (limits: AxisRange) => {
+    graphDispatch({type: "SET_YAXIS_LIMITS", payload: limits});
+  };
+
+  const toggleEnableLimits = () => {
+    graphDispatch({type: "TOGGLE_ENABLE_YAXIS_LIMITS"});
+  };
+
+  const setPeriod = ({from, to}: {from: Date, to: Date}) => {
+    dispatch({type: "SET_PERIOD", payload: {from, to}});
+  };
+
+  const {isLoading, liveData, graphData, error, queryOptions} = useFetchQuery({
+    visible: true,
+    customStep
+  });
+
+  return (
+    <Box p={4} display="grid" gridTemplateRows="auto 1fr" style={{minHeight: "calc(100vh - 64px)"}}>
+      <QueryConfigurator error={error} queryOptions={queryOptions}/>
+      <Box height="100%">
+        {isLoading && <Spinner isLoading={isLoading} height={"500px"}/>}
+        {<Box height={"100%"} bgcolor={"#fff"}>
+          <Box display="grid" gridTemplateColumns="1fr auto" alignItems="center" mx={-4} px={4} mb={2}
+            borderBottom={1} borderColor="divider">
+            <DisplayTypeSwitch/>
+            {displayType === "chart" && <GraphSettings
+              yaxis={yaxis}
+              setYaxisLimits={setYaxisLimits}
+              toggleEnableLimits={toggleEnableLimits}
+            />}
+          </Box>
+          {error && <Alert color="error" severity="error" sx={{whiteSpace: "pre-wrap", mt: 2}}>{error}</Alert>}
+          {graphData && period && (displayType === "chart") &&
+            <GraphView data={graphData} period={period} customStep={customStep} query={query} yaxis={yaxis}
+              setYaxisLimits={setYaxisLimits} setPeriod={setPeriod}/>}
+          {liveData && (displayType === "code") && <JsonView data={liveData}/>}
+          {liveData && (displayType === "table") && <TableView data={liveData}/>}
+        </Box>}
+      </Box>
+    </Box>
+  );
+};
+
+export default CustomPanel;
@@ -3,14 +3,23 @@ import {MetricResult} from "../../../api/types";
 import LineChart from "../../LineChart/LineChart";
 import {AlignedData as uPlotData, Series as uPlotSeries} from "uplot";
 import Legend from "../../Legend/Legend";
-import {useGraphDispatch, useGraphState} from "../../../state/graph/GraphStateContext";
 import {getHideSeries, getLegendItem, getSeriesItem} from "../../../utils/uplot/series";
 import {getLimitsYAxis, getTimeSeries} from "../../../utils/uplot/axes";
 import {LegendItem} from "../../../utils/uplot/types";
-import {useAppState} from "../../../state/common/StateContext";
+import {TimeParams} from "../../../types";
+import {AxisRange, CustomStep, YaxisState} from "../../../state/graph/reducer";
+import Alert from "@mui/material/Alert";

 export interface GraphViewProps {
   data?: MetricResult[];
+  period: TimeParams;
+  customStep: CustomStep;
+  query: string[];
+  yaxis: YaxisState;
+  unit?: string;
+  showLegend?: boolean;
+  setYaxisLimits: (val: AxisRange) => void
+  setPeriod: ({from, to}: {from: Date, to: Date}) => void
 }

 const promValueToNumber = (s: string): number => {

@@ -28,10 +37,17 @@ const promValueToNumber = (s: string): number => {
   }
 };

-const GraphView: FC<GraphViewProps> = ({data = []}) => {
-  const graphDispatch = useGraphDispatch();
-  const {time: {period}} = useAppState();
-  const { customStep } = useGraphState();
+const GraphView: FC<GraphViewProps> = ({
+  data = [],
+  period,
+  customStep,
+  query,
+  yaxis,
+  unit,
+  showLegend= true,
+  setYaxisLimits,
+  setPeriod
+}) => {
   const currentStep = useMemo(() => customStep.enable ? customStep.value : period.step || 1, [period.step, customStep]);

   const [dataChart, setDataChart] = useState<uPlotData>([[]]);

@@ -41,7 +57,7 @@ const GraphView: FC<GraphViewProps> = ({data = []}) => {

   const setLimitsYaxis = (values: {[key: string]: number[]}) => {
     const limits = getLimitsYAxis(values);
-    graphDispatch({type: "SET_YAXIS_LIMITS", payload: limits});
+    setYaxisLimits(limits);
   };

   const onChangeLegend = (legend: LegendItem, metaKey: boolean) => {

@@ -113,10 +129,10 @@ const GraphView: FC<GraphViewProps> = ({data = []}) => {
   return <>
     {(data.length > 0)
       ? <div>
-        <LineChart data={dataChart} series={series} metrics={data}/>
-        <Legend labels={legend} onChange={onChangeLegend}/>
+        <LineChart data={dataChart} series={series} metrics={data} period={period} yaxis={yaxis} unit={unit} setPeriod={setPeriod}/>
+        {showLegend && <Legend labels={legend} query={query} onChange={onChangeLegend}/>}
       </div>
-      : <div style={{textAlign: "center"}}>No data to show</div>}
+      : <Alert color="warning" severity="warning" sx={{mt: 2}}>No data to show</Alert>}
   </>;
 };
@@ -10,6 +10,7 @@ import TableRow from "@mui/material/TableRow";
 import TableSortLabel from "@mui/material/TableSortLabel";
 import makeStyles from "@mui/styles/makeStyles";
 import {useSortedCategories} from "../../../hooks/useSortedCategories";
+import Alert from "@mui/material/Alert";

 export interface GraphViewProps {
   data: InstantMetricResult[];

@@ -98,7 +99,7 @@ const TableView: FC<GraphViewProps> = ({data}) => {
       </TableBody>
     </Table>
   </TableContainer>
-    : <div style={{textAlign: "center"}}>No data to show</div>}
+    : <Alert color="warning" severity="warning" sx={{mt: 2}}>No data to show</Alert>}
   </>
 );
};
@@ -1,15 +1,19 @@
-import React, {FC} from "preact/compat";
+import React, {FC, useState} from "preact/compat";
 import AppBar from "@mui/material/AppBar";
 import Box from "@mui/material/Box";
 import Link from "@mui/material/Link";
 import Toolbar from "@mui/material/Toolbar";
 import Typography from "@mui/material/Typography";
-import {ExecutionControls} from "../Home/Configurator/Time/ExecutionControls";
+import {ExecutionControls} from "../CustomPanel/Configurator/Time/ExecutionControls";
 import Logo from "../common/Logo";
 import makeStyles from "@mui/styles/makeStyles";
 import {setQueryStringWithoutPageReload} from "../../utils/query-string";
-import {TimeSelector} from "../Home/Configurator/Time/TimeSelector";
-import GlobalSettings from "../Home/Configurator/Settings/GlobalSettings";
+import {TimeSelector} from "../CustomPanel/Configurator/Time/TimeSelector";
+import GlobalSettings from "../CustomPanel/Configurator/Settings/GlobalSettings";
+import {Link as RouterLink, useLocation, useNavigate} from "react-router-dom";
+import Tabs from "@mui/material/Tabs";
+import Tab from "@mui/material/Tab";
+import router from "../../router/index";

 const useStyles = makeStyles({
   logo: {

@@ -32,18 +36,41 @@ const useStyles = makeStyles({
     "&:hover": {
       opacity: ".8",
     }
+  },
+  menuLink: {
+    display: "block",
+    padding: "16px 8px",
+    color: "white",
+    fontSize: "11px",
+    textDecoration: "none",
+    cursor: "pointer",
+    textTransform: "uppercase",
+    borderRadius: "4px",
+    transition: ".2s background",
+    "&:hover": {
+      boxShadow: "rgba(0, 0, 0, 0.15) 0px 2px 8px"
+    }
   }
 });

 const Header: FC = () => {

   const classes = useStyles();
+  const {search, pathname} = useLocation();
+  const navigate = useNavigate();
+
+  const [activeMenu, setActiveMenu] = useState(pathname);
+
   const onClickLogo = () => {
+    navigateHandler(router.home);
     setQueryStringWithoutPageReload("");
     window.location.reload();
   };
+
+  const navigateHandler = (pathname: string) => {
+    navigate({pathname, search: search});
+  };
+
   return <AppBar position="static" sx={{px: 1, boxShadow: "none"}}>
     <Toolbar>
       <Box display="grid" alignItems="center" justifyContent="center">

@@ -59,6 +86,13 @@ const Header: FC = () => {
       create an issue
     </Link>
   </Box>
+  <Box sx={{ml: 8}}>
+    <Tabs value={activeMenu} textColor="inherit" TabIndicatorProps={{style: {background: "white"}}}
+      onChange={(e, val) => setActiveMenu(val)}>
+      <Tab label="Custom panel" value={router.home} component={RouterLink} to={`${router.home}${search}`}/>
+      <Tab label="Dashboards" value={router.dashboards} component={RouterLink} to={`${router.dashboards}${search}`}/>
+    </Tabs>
+  </Box>
   <Box display="grid" gridTemplateColumns="repeat(3, auto)" gap={1} alignItems="center" ml="auto" mr={0}>
     <TimeSelector/>
     <ExecutionControls/>
@@ -1,53 +0,0 @@
-import React, {FC, useCallback, useEffect, useState} from "preact/compat";
-import {ChangeEvent} from "react";
-import Box from "@mui/material/Box";
-import FormControlLabel from "@mui/material/FormControlLabel";
-import TextField from "@mui/material/TextField";
-import BasicSwitch from "../../../../theme/switch";
-import {useGraphDispatch, useGraphState} from "../../../../state/graph/GraphStateContext";
-import {useAppState} from "../../../../state/common/StateContext";
-import debounce from "lodash.debounce";
-
-const StepConfigurator: FC = () => {
-  const {customStep} = useGraphState();
-  const graphDispatch = useGraphDispatch();
-  const [error, setError] = useState(false);
-  const {time: {period: {step}}} = useAppState();
-
-  const onChangeStep = (e: ChangeEvent<HTMLInputElement | HTMLTextAreaElement>) => {
-    const value = +e.target.value;
-    if (value > 0) {
-      graphDispatch({type: "SET_CUSTOM_STEP", payload: value});
-      setError(false);
-    } else {
-      setError(true);
-    }
-  };
-
-  const debouncedOnChangeStep = useCallback(debounce(onChangeStep, 500), [customStep.value]);
-
-  const onChangeEnableStep = () => {
-    setError(false);
-    graphDispatch({type: "TOGGLE_CUSTOM_STEP"});
-  };
-
-  useEffect(() => {
-    if (!customStep.enable) graphDispatch({type: "SET_CUSTOM_STEP", payload: step || 1});
-  }, [step]);
-
-  return <Box display="grid" gridTemplateColumns="auto 120px" alignItems="center">
-    <FormControlLabel
-      control={<BasicSwitch checked={customStep.enable} onChange={onChangeEnableStep}/>}
-      label="Override step value"
-    />
-    {customStep.enable &&
-      <TextField label="Step value" type="number" size="small" variant="outlined"
-        defaultValue={customStep.value}
-        error={error}
-        helperText={error ? "step is out of allowed range" : " "}
-        onChange={debouncedOnChangeStep}/>
-    }
-  </Box>;
-};
-
-export default StepConfigurator;
@@ -1,62 +1,13 @@
-import React, {FC} from "preact/compat";
-import Alert from "@mui/material/Alert";
-import Box from "@mui/material/Box";
-import CircularProgress from "@mui/material/CircularProgress";
-import Fade from "@mui/material/Fade";
-import GraphView from "./Views/GraphView";
-import TableView from "./Views/TableView";
-import {useAppState} from "../../state/common/StateContext";
-import QueryConfigurator from "./Configurator/Query/QueryConfigurator";
-import {useFetchQuery} from "./Configurator/Query/useFetchQuery";
-import JsonView from "./Views/JsonView";
 import Header from "../Header/Header";
-import {DisplayTypeSwitch} from "./Configurator/DisplayTypeSwitch";
-import GraphSettings from "./Configurator/Graph/GraphSettings";
+import React, {FC} from "preact/compat";
+import Box from "@mui/material/Box";
+import { Outlet } from "react-router-dom";

 const HomeLayout: FC = () => {
-
-  const {displayType, time: {period}} = useAppState();
-
-  const {isLoading, liveData, graphData, error, queryOptions} = useFetchQuery();
-
-  return (
-    <Box id="homeLayout">
-      <Header/>
-      <Box p={4} display="grid" gridTemplateRows="auto 1fr" style={{minHeight: "calc(100vh - 64px)"}}>
-        <QueryConfigurator error={error} queryOptions={queryOptions}/>
-        <Box height="100%">
-          {isLoading && <Fade in={isLoading} style={{
-            transitionDelay: isLoading ? "300ms" : "0ms",
-          }}>
-            <Box alignItems="center" justifyContent="center" flexDirection="column" display="flex"
-              style={{
-                width: "100%",
-                maxWidth: "calc(100vw - 64px)",
-                position: "absolute",
-                height: "50%",
-                background: "linear-gradient(rgba(255,255,255,.7), rgba(255,255,255,.7), rgba(255,255,255,0))"
-              }}>
-              <CircularProgress/>
-            </Box>
-          </Fade>}
-          {<Box height={"100%"} bgcolor={"#fff"}>
-            <Box display="grid" gridTemplateColumns="1fr auto" alignItems="center" mx={-4} px={4} mb={2}
-              borderBottom={1} borderColor="divider">
-              <DisplayTypeSwitch/>
-              {displayType === "chart" && <GraphSettings/>}
-            </Box>
-            {error && <Alert color="error" severity="error"
-              style={{fontSize: "14px", whiteSpace: "pre-wrap", marginTop: "20px"}}>
-              {error}
-            </Alert>}
-            {graphData && period && (displayType === "chart") && <GraphView data={graphData}/>}
-            {liveData && (displayType === "code") && <JsonView data={liveData}/>}
-            {liveData && (displayType === "table") && <TableView data={liveData}/>}
-          </Box>}
-        </Box>
-      </Box>
-    </Box>
-  );
+  return <Box id="homeLayout">
+    <Header/>
+    <Outlet/>
+  </Box>;
 };

 export default HomeLayout;
@@ -1,6 +1,5 @@
 import React, {FC, useMemo, useState} from "preact/compat";
 import {hexToRGB} from "../../utils/color";
-import {useAppState} from "../../state/common/StateContext";
 import {LegendItem} from "../../utils/uplot/types";
 import "./legend.css";
 import {getDashLine} from "../../utils/uplot/helpers";

@@ -8,12 +7,11 @@ import Tooltip from "@mui/material/Tooltip";

 export interface LegendProps {
   labels: LegendItem[];
+  query: string[];
   onChange: (item: LegendItem, metaKey: boolean) => void;
 }

-const Legend: FC<LegendProps> = ({labels, onChange}) => {
-  const {query} = useAppState();
-
+const Legend: FC<LegendProps> = ({labels, query, onChange}) => {
   const [copiedValue, setCopiedValue] = useState("");

   const groups = useMemo(() => {
@@ -1,14 +1,13 @@
 .legendWrapper {
   position: relative;
-  display: grid;
-  grid-template-columns: repeat(auto-fit, minmax(400px, 1fr));
-  grid-gap: 20px;
+  display: flex;
+  flex-wrap: wrap;
   margin-top: 20px;
   cursor: default;
 }

 .legendGroup {
-  margin-bottom: 24px;
+  margin: 0 12px 24px 0;
 }

 .legendGroupTitle {

@@ -29,7 +28,7 @@
 }

 .legendItem {
-  display: inline-grid;
+  display: grid;
   grid-template-columns: auto auto;
   grid-gap: 6px;
   align-items: start;
@@ -1,7 +1,5 @@
 import React, {FC, useCallback, useEffect, useRef, useState} from "preact/compat";
-import {useAppDispatch, useAppState} from "../../state/common/StateContext";
 import uPlot, {AlignedData as uPlotData, Options as uPlotOptions, Series as uPlotSeries, Range, Scales, Scale} from "uplot";
-import {useGraphState} from "../../state/graph/GraphStateContext";
 import {defaultOptions} from "../../utils/uplot/helpers";
 import {dragChart} from "../../utils/uplot/events";
 import {getAxes, getMinMaxBuffer} from "../../utils/uplot/axes";

@@ -12,18 +10,22 @@ import throttle from "lodash.throttle";
 import "uplot/dist/uPlot.min.css";
 import "./tooltip.css";
 import useResize from "../../hooks/useResize";
+import {TimeParams} from "../../types";
+import {YaxisState} from "../../state/graph/reducer";

 export interface LineChartProps {
   metrics: MetricResult[];
   data: uPlotData;
-  series: uPlotSeries[];
+  period: TimeParams;
+  yaxis: YaxisState;
+  series: uPlotSeries[];
+  unit?: string;
+  setPeriod: ({from, to}: {from: Date, to: Date}) => void;
 }
 enum typeChartUpdate {xRange = "xRange", yRange = "yRange", data = "data"}

-const LineChart: FC<LineChartProps> = ({data, series, metrics = []}) => {
-  const dispatch = useAppDispatch();
-  const {time: {period}} = useAppState();
-  const {yaxis} = useGraphState();
+const LineChart: FC<LineChartProps> = ({data, series, metrics = [],
+  period, yaxis, unit, setPeriod}) => {
   const uPlotRef = useRef<HTMLDivElement>(null);
   const [isPanning, setPanning] = useState(false);
   const [xRange, setXRange] = useState({min: period.start, max: period.end});

@@ -36,7 +38,7 @@ const LineChart: FC<LineChartProps> = ({data, series, metrics = []}) => {
   const tooltipOffset = {left: 0, top: 0};

   const setScale = ({min, max}: { min: number, max: number }): void => {
-    dispatch({type: "SET_PERIOD", payload: {from: new Date(min * 1000), to: new Date(max * 1000)}});
+    setPeriod({from: new Date(min * 1000), to: new Date(max * 1000)});
   };
   const throttledSetScale = useCallback(throttle(setScale, 500), []);
   const setPlotScale = ({u, min, max}: { u: uPlot, min: number, max: number }) => {

@@ -73,7 +75,7 @@ const LineChart: FC<LineChartProps> = ({data, series, metrics = []}) => {
     if (tooltipIdx.dataIdx === u.cursor.idx) return;
     tooltipIdx.dataIdx = u.cursor.idx || 0;
     if (tooltipIdx.seriesIdx !== null && tooltipIdx.dataIdx !== undefined) {
-      setTooltip({u, tooltipIdx, metrics, series, tooltip, tooltipOffset});
+      setTooltip({u, tooltipIdx, metrics, series, tooltip, tooltipOffset, unit});
     }
   };

@@ -81,7 +83,7 @@ const LineChart: FC<LineChartProps> = ({data, series, metrics = []}) => {
     if (tooltipIdx.seriesIdx === sidx) return;
     tooltipIdx.seriesIdx = sidx;
     sidx && tooltipIdx.dataIdx !== undefined
-      ? setTooltip({u, tooltipIdx, metrics, series, tooltip, tooltipOffset})
+      ? setTooltip({u, tooltipIdx, metrics, series, tooltip, tooltipOffset, unit})
       : tooltip.style.display = "none";
   };
   const getRangeX = (): Range.MinMax => [xRange.min, xRange.max];

@@ -101,7 +103,7 @@ const LineChart: FC<LineChartProps> = ({data, series, metrics = []}) => {
   const options: uPlotOptions = {
     ...defaultOptions,
     series,
-    axes: getAxes(series),
+    axes: getAxes(series, unit),
     scales: {...getScales()},
     width: layoutSize.width ? layoutSize.width - 64 : 400,
     plugins: [{hooks: {ready: onReadyChart, setCursor, setSeries: seriesFocus}}],

@@ -123,7 +125,7 @@ const LineChart: FC<LineChartProps> = ({data, series, metrics = []}) => {
       uPlotInst.setData(data);
       break;
   }
-  uPlotInst.redraw();
+  if (!isPanning) uPlotInst.redraw();
 };

 useEffect(() => setXRange({min: period.start, max: period.end}), [period]);
@@ -0,0 +1,53 @@
+import React, {FC, useEffect, useMemo, useState} from "preact/compat";
+import getDashboardSettings from "./getDashboardSettings";
+import {DashboardRow, DashboardSettings} from "../../types";
+import Box from "@mui/material/Box";
+import Alert from "@mui/material/Alert";
+import Tabs from "@mui/material/Tabs";
+import Tab from "@mui/material/Tab";
+import PredefinedDashboard from "./PredefinedDashboard";
+import get from "lodash.get";
+
+const DashboardLayout: FC = () => {
+
+  const [dashboards, setDashboards] = useState<DashboardSettings[]>();
+  const [tab, setTab] = useState(0);
+
+  const filename = useMemo(() => get(dashboards, [tab, "filename"], ""), [dashboards, tab]);
+
+  const rows = useMemo(() => {
+    return get(dashboards, [tab, "rows"], []) as DashboardRow[];
+  }, [dashboards, tab]);
+
+  useEffect(() => {
+    getDashboardSettings().then(d => d.length && setDashboards(d));
+  }, []);
+
+  return <>
+    {!dashboards && <Alert color="info" severity="info" sx={{m: 4}}>Dashboards not found</Alert>}
+    {dashboards && <>
+      <Box sx={{ borderBottom: 1, borderColor: "divider" }}>
+        <Tabs value={tab} onChange={(e, val) => setTab(val)} aria-label="dashboard-tabs">
+          {dashboards && dashboards.map((d, i) =>
+            <Tab key={i} label={d.title || d.filename} id={`tab-${i}`} aria-controls={`tabpanel-${i}`}/>
+          )}
+        </Tabs>
+      </Box>
+      <Box>
+        {Array.isArray(rows) && !!rows.length
+          ? rows.map((r,i) =>
+            <PredefinedDashboard
+              key={`${tab}_${i}`}
+              index={i}
+              filename={filename}
+              title={r.title}
+              panels={r.panels}/>)
+          : <Alert color="error" severity="error" sx={{m: 4}}>
+            <code>"rows"</code> not found. Check the configuration file <b>{filename}</b>.
+          </Alert>}
+      </Box>
+    </>}
+  </>;
+};
+
+export default DashboardLayout;
@@ -0,0 +1,48 @@
+import React, {FC} from "preact/compat";
+import {DashboardRow} from "../../types";
+import Box from "@mui/material/Box";
+import Accordion from "@mui/material/Accordion";
+import AccordionSummary from "@mui/material/AccordionSummary";
+import AccordionDetails from "@mui/material/AccordionDetails";
+import ExpandMoreIcon from "@mui/icons-material/ExpandMore";
+import Typography from "@mui/material/Typography";
+import PredefinedPanels from "./PredefinedPanels";
+import Alert from "@mui/material/Alert";
+
+export interface PredefinedDashboardProps extends DashboardRow {
+  filename: string;
+  index: number;
+}
+
+const PredefinedDashboard: FC<PredefinedDashboardProps> = ({index, title, panels, filename}) => {
+
+  return <Accordion defaultExpanded={!index} sx={{boxShadow: "none"}}>
+    <AccordionSummary
+      sx={{px: 3, bgcolor: "rgba(227, 242, 253, 0.6)"}}
+      aria-controls={`panel${index}-content`}
+      id={`panel${index}-header`}
+      expandIcon={<ExpandMoreIcon />}
+    >
+      <Box display="flex" alignItems="center" width={"100%"}>
+        {title && <Typography variant="h6" fontWeight="bold" sx={{mr: 2}}>{title}</Typography>}
+        {panels && <Typography variant="body2" fontStyle="italic">({panels.length} panels)</Typography>}
+      </Box>
+    </AccordionSummary>
+    <AccordionDetails sx={{display: "grid", gridGap: "10px"}}>
+      {Array.isArray(panels) && !!panels.length
+        ? panels.map((p, i) => <PredefinedPanels key={i}
+          title={p.title}
+          description={p.description}
+          unit={p.unit}
+          expr={p.expr}
+          filename={filename}
+          showLegend={p.showLegend}/>)
+        : <Alert color="error" severity="error" sx={{m: 4}}>
+          <code>"panels"</code> not found. Check the configuration file <b>{filename}</b>.
+        </Alert>
+      }
+    </AccordionDetails>
+  </Accordion>;
+};
+
+export default PredefinedDashboard;
@@ -0,0 +1,139 @@
+import React, {FC, useEffect, useMemo, useRef, useState} from "preact/compat";
+import Box from "@mui/material/Box";
+import {PanelSettings} from "../../types";
+import Tooltip from "@mui/material/Tooltip";
+import InfoIcon from "@mui/icons-material/Info";
+import Typography from "@mui/material/Typography";
+import {useAppDispatch, useAppState} from "../../state/common/StateContext";
+import {AxisRange, YaxisState} from "../../state/graph/reducer";
+import GraphView from "../CustomPanel/Views/GraphView";
+import Alert from "@mui/material/Alert";
+import {useFetchQuery} from "../../hooks/useFetchQuery";
+import Spinner from "../common/Spinner";
+import StepConfigurator from "../CustomPanel/Configurator/Query/StepConfigurator";
+import GraphSettings from "../CustomPanel/Configurator/Graph/GraphSettings";
+import {CustomStep} from "../../state/graph/reducer";
+import {marked} from "marked";
+import "./dashboard.css";
+
+export interface PredefinedPanelsProps extends PanelSettings {
+  filename: string;
+}
+
+const PredefinedPanels: FC<PredefinedPanelsProps> = ({
+  title,
+  description,
+  unit,
+  expr,
+  showLegend,
+  filename
+}) => {
+
+  const {time: {period}} = useAppState();
+
+  const dispatch = useAppDispatch();
+
+  const containerRef = useRef<HTMLDivElement>(null);
+  const [visible, setVisible] = useState(true);
+  const [customStep, setCustomStep] = useState<CustomStep>({enable: false, value: period.step || 1});
+  const [yaxis, setYaxis] = useState<YaxisState>({
+    limits: {
+      enable: false,
+      range: {"1": [0, 0]}
+    }
+  });
+
+  const validExpr = useMemo(() => Array.isArray(expr) && expr.every(q => typeof q === "string"), [expr]);
+
+  const {isLoading, graphData, error} = useFetchQuery({
+    predefinedQuery: validExpr ? expr : [],
+    display: "chart",
+    visible,
+    customStep,
+  });
+
+  const setYaxisLimits = (limits: AxisRange) => {
+    const tempYaxis = {...yaxis};
+    tempYaxis.limits.range = limits;
+    setYaxis(tempYaxis);
+  };
+
+  const toggleEnableLimits = () => {
+    const tempYaxis = {...yaxis};
+    tempYaxis.limits.enable = !tempYaxis.limits.enable;
+    setYaxis(tempYaxis);
+  };
+
+  const setPeriod = ({from, to}: {from: Date, to: Date}) => {
+    dispatch({type: "SET_PERIOD", payload: {from, to}});
+  };
+
+  useEffect(() => {
+    const observer = new IntersectionObserver((entries) => {
+      entries.forEach(entry => setVisible(entry.isIntersecting));
+    }, { threshold: 0.1 });
+    if (containerRef.current) observer.observe(containerRef.current);
+    return () => {
+      if (containerRef.current) observer.unobserve(containerRef.current);
+    };
+  }, []);
+
+  if (!validExpr) return <Alert color="error" severity="error" sx={{m: 4}}>
+    <code>"expr"</code> not found. Check the configuration file <b>{filename}</b>.
+  </Alert>;
+
+  return <Box border="1px solid" borderRadius="2px" borderColor="divider" ref={containerRef}>
+    <Box px={2} py={1} display="grid" gap={1} gridTemplateColumns="18px 1fr auto"
+      alignItems="center" justifyContent="space-between" borderBottom={"1px solid"} borderColor={"divider"}>
+      <Tooltip arrow componentsProps={{
+        tooltip: {
+          sx: {maxWidth: "100%"}
+        }
+      }}
+      title={<Box sx={{p: 1}}>
+        {description && <Box mb={2}>
+          <Typography fontWeight={"500"} sx={{mb: 0.5, textDecoration: "underline"}}>Description:</Typography>
+          <div className="panelDescription" dangerouslySetInnerHTML={{__html: marked.parse(description)}}/>
+        </Box>}
+        <Box>
+          <Typography fontWeight={"500"} sx={{mb: 0.5, textDecoration: "underline"}}>Queries:</Typography>
+          <div>
+            {expr.map((e, i) => <Box key={`${i}_${e}`} mb={0.5}>{e}</Box>)}
+          </div>
+        </Box>
+      </Box>}>
+        <InfoIcon color="info"/>
+      </Tooltip>
+      <Typography variant="subtitle1" gridColumn={2} textAlign={"left"} width={"100%"} fontWeight={500}>
+        {title || ""}
+      </Typography>
+      <Box display={"grid"} gridTemplateColumns={"repeat(2, auto)"} gap={2} alignItems={"center"}>
+        <StepConfigurator defaultStep={period.step} customStepEnable={customStep.enable}
+          setStep={(value) => {
+            setCustomStep({...customStep, value: value});
+          }}
+          toggleEnableStep={() => {
+            setCustomStep({...customStep, enable: !customStep.enable});
+          }}/>
+        <GraphSettings yaxis={yaxis} setYaxisLimits={setYaxisLimits} toggleEnableLimits={toggleEnableLimits}/>
+      </Box>
+    </Box>
+    <Box px={2} pb={2}>
+      {isLoading && <Spinner isLoading={true} height={"500px"}/>}
+      {error && <Alert color="error" severity="error" sx={{whiteSpace: "pre-wrap", mt: 2}}>{error}</Alert>}
+      {graphData && <GraphView
+        data={graphData}
+        period={period}
+        customStep={customStep}
+        query={expr}
+        yaxis={yaxis}
+        unit={unit}
+        showLegend={showLegend}
+        setYaxisLimits={setYaxisLimits}
+        setPeriod={setPeriod}/>
+      }
+    </Box>
+  </Box>;
+};
+
+export default PredefinedPanels;
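Note: the IntersectionObserver block above is what keeps off-screen panels cheap: `visible` feeds into `useFetchQuery`, so a panel only queries the backend while at least 10% of it is inside the viewport. The same pattern can be factored into a reusable hook; a minimal sketch (the hook itself is hypothetical, not part of this commit):

```tsx
// Hypothetical helper, not part of this commit: extracts the visibility
// tracking pattern used by PredefinedPanels above into a reusable hook.
import {useEffect, useState} from "preact/compat";

const useIntersection = (ref: {current: HTMLElement | null}, threshold = 0.1): boolean => {
  const [visible, setVisible] = useState(false);

  useEffect(() => {
    const node = ref.current;
    if (!node) return;
    // Flip `visible` whenever at least `threshold` of the node enters or
    // leaves the viewport.
    const observer = new IntersectionObserver((entries) => {
      entries.forEach(entry => setVisible(entry.isIntersecting));
    }, {threshold});
    observer.observe(node);
    return () => observer.unobserve(node);
  }, [threshold]);

  return visible;
};

export default useIntersection;
```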
@@ -0,0 +1,18 @@
+.panelDescription ul {
+  line-height: 2.2;
+}
+
+.panelDescription a {
+  color: #FFFFFF;
+}
+
+.panelDescription code {
+  display: inline;
+  max-width: 100%;
+  padding: 4px 6px;
+  background-color: rgba(0, 0, 0, 0.3);
+  border-radius: 2px;
+  font-weight: 400;
+  font-size: inherit;
+  color: #FFFFFF;
+}
@ -0,0 +1,14 @@
|
||||||
|
import {DashboardSettings} from "../../types";
|
||||||
|
|
||||||
|
const importModule = async (filename: string) => {
|
||||||
|
const module = await import(`../../dashboards/${filename}`);
|
||||||
|
module.default.filename = filename;
|
||||||
|
return module.default as DashboardSettings;
|
||||||
|
};
|
||||||
|
|
||||||
|
export default async () => {
|
||||||
|
const context = require.context("../../dashboards", true, /\.json$/);
|
||||||
|
const filenames = context.keys().map(r => r.replace("./", ""));
|
||||||
|
return await Promise.all(filenames.map(async f => importModule(f)));
|
||||||
|
};
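This loader discovers every `*.json` file under `src/dashboards` at build time via webpack's `require.context`, so adding a dashboard is just a matter of dropping a JSON file into that folder. A minimal consumption sketch (the `useDashboards` hook name and the relative import path are illustrative assumptions, not part of this change):

```ts
import {useEffect, useState} from "preact/compat";
// Assumed filename for the loader module shown above:
import getDashboardSettings from "./getDashboardSettings";
import {DashboardSettings} from "../../types";

// Loads all bundled dashboard configs once on mount.
const useDashboards = (): DashboardSettings[] => {
  const [dashboards, setDashboards] = useState<DashboardSettings[]>([]);
  useEffect(() => {
    getDashboardSettings().then(setDashboards).catch(console.error);
  }, []);
  return dashboards;
};
```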
30
app/vmui/packages/vmui/src/components/common/Spinner.tsx
Normal file
@@ -0,0 +1,30 @@
import React, {FC} from "preact/compat";
import Fade from "@mui/material/Fade";
import Box from "@mui/material/Box";
import CircularProgress from "@mui/material/CircularProgress";

interface SpinnerProps {
  isLoading: boolean;
  height?: string;
}

const Spinner: FC<SpinnerProps> = ({isLoading, height}) => {
  return <Fade in={isLoading} style={{
    transitionDelay: isLoading ? "300ms" : "0ms",
  }}>
    <Box alignItems="center" justifyContent="center" flexDirection="column" display="flex"
      style={{
        width: "100%",
        maxWidth: "calc(100vw - 64px)",
        position: "absolute",
        height: height ?? "50%",
        background: "rgba(255, 255, 255, 0.7)",
        pointerEvents: "none",
        zIndex: 2,
      }}>
      <CircularProgress/>
    </Box>
  </Fade>;
};

export default Spinner;
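The 300ms `transitionDelay` means the overlay only becomes visible for requests that take noticeably long, so fast responses don't produce a flash of spinner. Usage mirrors the call in `PredefinedPanels` above:

```tsx
// Overlay a translucent spinner on a 500px-tall panel while data loads:
{isLoading && <Spinner isLoading={true} height={"500px"}/>}
```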
76
app/vmui/packages/vmui/src/dashboards/README.md
Normal file
@@ -0,0 +1,76 @@
### Configuration options

<br/>
DashboardSettings:

| Name  |       Type       |                Description |
|:------|:----------------:|---------------------------:|
| rows* | `DashboardRow[]` | Sections containing panels |
| title |     `string`     |            Dashboard title |

<br/>
DashboardRow:

| Name    |       Type        |             Description |
|:--------|:-----------------:|------------------------:|
| panels* | `PanelSettings[]` | List of panels (charts) |
| title   |     `string`      |               Row title |

<br/>
PanelSettings:

| Name        |    Type    |                                          Description |
|:------------|:----------:|------------------------------------------------------:|
| expr*       | `string[]` |                                    Data source queries |
| title       |  `string`  |                                            Panel title |
| description |  `string`  |                 Additional information about the panel |
| unit        |  `string`  |                                            Y-axis unit |
| showLegend  | `boolean`  | If `false`, the legend is hidden. Default value is `true` |

---

### Example json

```json
{
  "title": "Example",
  "rows": [
    {
      "title": "Performance",
      "panels": [
        {
          "title": "Query duration",
          "description": "The less time it takes is better.\n* `*` - unsupported query path\n* `/write` - insert into VM\n* `/metrics` - query VM system metrics\n* `/query` - query instant values\n* `/query_range` - query over a range of time\n* `/series` - match a certain label set\n* `/label/{}/values` - query a list of label values (variables mostly)",
          "unit": "ms",
          "showLegend": false,
          "expr": [
            "max(vm_request_duration_seconds{quantile=~\"(0.5|0.99)\"}) by (path, quantile) > 0"
          ]
        },
        {
          "title": "Concurrent flushes on disk",
          "description": "Shows how many ongoing insertions (not API /write calls) on disk are taking place, where:\n* `max` - equal to number of CPUs;\n* `current` - current number of goroutines busy with inserting rows into underlying storage.\n\nEvery successful API /write call results into flush on disk. However, these two actions are separated and controlled via different concurrency limiters. The `max` on this panel can't be changed and always equal to number of CPUs. \n\nWhen `current` hits `max` constantly, it means storage is overloaded and requires more CPU.\n\n",
          "expr": [
            "sum(vm_concurrent_addrows_capacity)",
            "sum(vm_concurrent_addrows_current)"
          ]
        }
      ]
    },
    {
      "title": "Troubleshooting",
      "panels": [
        {
          "title": "Churn rate",
          "description": "Shows the rate and total number of new series created over last 24h.\n\nHigh churn rate tightly connected with database performance and may result in unexpected OOM's or slow queries. It is recommended to always keep an eye on this metric to avoid unexpected cardinality \"explosions\".\n\nThe higher churn rate is, the more resources required to handle it. Consider to keep the churn rate as low as possible.\n\nGood references to read:\n* https://www.robustperception.io/cardinality-is-key\n* https://www.robustperception.io/using-tsdb-analyze-to-investigate-churn-and-cardinality",
          "expr": [
            "sum(rate(vm_new_timeseries_created_total[5m]))",
            "sum(increase(vm_new_timeseries_created_total[24h]))"
          ]
        }
      ]
    }
  ]
}
```
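For reference, these tables map one-to-one onto the TypeScript interfaces added to `src/types` in this change (`filename` is filled in by the dashboard loader rather than by the JSON author):

```ts
export interface PanelSettings {
  title?: string;
  description?: string;
  unit?: string;
  expr: string[];
  showLegend?: boolean;
}

export interface DashboardRow {
  title?: string;
  panels: PanelSettings[];
}

export interface DashboardSettings {
  title?: string;
  filename: string; // set by the loader, not by the JSON author
  rows: DashboardRow[];
}
```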
25
app/vmui/packages/vmui/src/dashboards/perJobUsage.json
Normal file
@@ -0,0 +1,25 @@
{
  "title": "per-job resource usage",
  "rows": [
    {
      "panels": [
        {
          "title": "Per-job CPU usage",
          "expr": ["sum(rate(process_cpu_seconds_total)) by (job)"]
        },
        {
          "title": "Per-job RSS usage",
          "expr": ["sum(process_resident_memory_bytes) by (job)"]
        },
        {
          "title": "Per-job disk read",
          "expr": ["sum(rate(process_io_storage_read_bytes_total)) by (job)"]
        },
        {
          "title": "Per-job disk write",
          "expr": ["sum(rate(process_io_storage_written_bytes_total)) by (job)"]
        }
      ]
    }
  ]
}
@@ -1,18 +1,25 @@
import {useEffect, useMemo, useCallback, useState} from "preact/compat";
import {getQueryOptions, getQueryRangeUrl, getQueryUrl} from "../api/query-range";
import {useAppState} from "../state/common/StateContext";
import {InstantMetricResult, MetricBase, MetricResult} from "../api/types";
import {isValidHttpUrl} from "../utils/url";
import {ErrorTypes} from "../types";
import {getAppModeEnable, getAppModeParams} from "../utils/app-mode";
import throttle from "lodash.throttle";
import {DisplayType} from "../components/CustomPanel/Configurator/DisplayTypeSwitch";
import {CustomStep} from "../state/graph/reducer";

interface FetchQueryParams {
  predefinedQuery?: string[]
  visible: boolean
  display?: DisplayType,
  customStep: CustomStep,
}

const appModeEnable = getAppModeEnable();
const {serverURL: appServerUrl} = getAppModeParams();

export const useFetchQuery = ({predefinedQuery, visible, display, customStep}: FetchQueryParams): {
  fetchUrl?: string[],
  isLoading: boolean,
  graphData?: MetricResult[],
@@ -22,8 +29,6 @@ export const useFetchQuery = (): {
} => {
  const {query, displayType, serverUrl, time: {period}, queryControls: {nocache}} = useAppState();

  const [queryOptions, setQueryOptions] = useState([]);
  const [isLoading, setIsLoading] = useState(false);
  const [graphData, setGraphData] = useState<MetricResult[]>();
@@ -67,11 +72,10 @@ export const useFetchQuery = (): {
        setError(`${e.name}: ${e.message}`);
      }
    }
    setIsLoading(false);
  };

  const throttledFetchData = useCallback(throttle(fetchData, 1000), []);

  const fetchOptions = async () => {
    const server = appModeEnable ? appServerUrl : serverUrl;
@@ -91,16 +95,19 @@ export const useFetchQuery = (): {

  const fetchUrl = useMemo(() => {
    const server = appModeEnable ? appServerUrl : serverUrl;
    const expr = predefinedQuery ?? query;
    const displayChart = (display || displayType) === "chart";
    if (!period) return;
    if (!server) {
      setError(ErrorTypes.emptyServer);
    } else if (expr.every(q => !q.trim())) {
      setError(ErrorTypes.validQuery);
    } else if (isValidHttpUrl(server)) {
      const updatedPeriod = {...period};
      if (customStep.enable) updatedPeriod.step = customStep.value;
      return expr.filter(q => q.trim()).map(q => displayChart
        ? getQueryRangeUrl(server, q, updatedPeriod, nocache)
        : getQueryUrl(server, q, updatedPeriod));
    } else {
      setError(ErrorTypes.validServer);
    }
@@ -111,10 +118,10 @@ export const useFetchQuery = (): {
    fetchOptions();
  }, [serverUrl]);

  useEffect(() => {
    if (!visible) return;
    throttledFetchData(fetchUrl, fetchQueue, (display || displayType));
  }, [fetchUrl, visible]);

  useEffect(() => {
    const fetchPast = fetchQueue.slice(0, -1);
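With the new `FetchQueryParams` signature, the same hook now serves both the custom panel (no `predefinedQuery`, so it falls back to the query from app state) and the predefined panels. A minimal call-site sketch (variable names are illustrative, not from the source):

```ts
const {isLoading, graphData, error} = useFetchQuery({
  predefinedQuery: expr,   // queries from the panel's JSON config
  display: "chart",        // force chart mode regardless of app state
  visible,                 // skip fetching while the panel is off-screen
  customStep,
});
```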
4
app/vmui/packages/vmui/src/router/index.ts
Normal file
@@ -0,0 +1,4 @@
export default {
  home: "/",
  dashboards: "/dashboards"
};
@@ -1,5 +1,5 @@
/* eslint max-lines: 0 */
import {DisplayType} from "../../components/CustomPanel/Configurator/DisplayTypeSwitch";
import {TimeParams, TimePeriod} from "../../types";
import {
  dateFromSeconds,
@@ -99,6 +99,7 @@ const THEME = createTheme({
    MuiAlert: {
      styleOverrides: {
        root: {
          fontSize: "14px",
          boxShadow: "rgba(0, 0, 0, 0.08) 0px 4px 12px"
        }
      }
@@ -34,3 +34,22 @@ export enum ErrorTypes {
  validServer = "Please provide a valid Server URL",
  validQuery = "Please enter a valid Query and execute it"
}

export interface PanelSettings {
  title?: string;
  description?: string;
  unit?: string;
  expr: string[];
  showLegend?: boolean;
}

export interface DashboardRow {
  title?: string;
  panels: PanelSettings[];
}

export interface DashboardSettings {
  title?: string;
  filename: string;
  rows: DashboardRow[];
}
@@ -31,7 +31,7 @@ const stateToUrlParams = {
export const setQueryStringWithoutPageReload = (qsValue: string): void => {
  const w = window;
  if (w) {
    const newurl = `${w.location.protocol}//${w.location.host}${w.location.pathname}?${qsValue}${w.location.hash}`;
    w.history.pushState({ path: newurl }, "", newurl);
  }
};
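The only functional change is appending `w.location.hash`, so in-page anchors now survive query-string rewrites. A hypothetical before/after:

```ts
// Page opened at /vmui/?g0.expr=up#panel-1 (hypothetical URL):
setQueryStringWithoutPageReload("g0.expr=vm_rows");
// pushed history entry: /vmui/?g0.expr=vm_rows#panel-1 (the hash is preserved)
```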
@@ -1,12 +1,18 @@
import uPlot, {Axis, Series} from "uplot";
import {getMaxFromArray, getMinFromArray} from "../math";
import {roundToMilliseconds} from "../time";
import {AxisRange} from "../../state/graph/reducer";
import {formatTicks, sizeAxis} from "./helpers";
import {TimeParams} from "../../types";

export const getAxes = (series: Series[], unit?: string): Axis[] => Array.from(new Set(series.map(s => s.scale))).map(a => {
  const axis = {
    scale: a,
    show: true,
    size: sizeAxis,
    font: "10px Arial",
    values: (u: uPlot, ticks: number[]) => formatTicks(u, ticks, unit)
  };
  if (!a) return {space: 80};
  if (!(Number(a) % 2)) return {...axis, side: 1};
  return axis;
@@ -1,4 +1,4 @@
import uPlot, {Axis} from "uplot";
import {getColorFromString} from "../color";

export const defaultOptions = {
@@ -28,16 +28,40 @@ export const defaultOptions = {
  },
};

export const formatTicks = (u: uPlot, ticks: number[], unit = ""): string[] => {
  return ticks.map(v => {
    const n = Math.abs(v);
    return `${n > 1e-3 && n < 1e4 ? v.toString() : v.toExponential(1)} ${unit}`;
  });
};

interface AxisExtend extends Axis {
  _size?: number;
}

const getTextWidth = (val: string, font: string): number => {
  const span = document.createElement("span");
  span.innerText = val;
  span.style.cssText = `position: absolute; z-index: -1; pointer-events: none; opacity: 0; font: ${font}`;
  document.body.appendChild(span);
  const width = span.offsetWidth;
  span.remove();
  return width;
};

export const sizeAxis = (u: uPlot, values: string[], axisIdx: number, cycleNum: number): number => {
  const axis = u.axes[axisIdx] as AxisExtend;

  if (cycleNum > 1) return axis._size || 60;

  let axisSize = 6 + (axis?.ticks?.size || 0) + (axis.gap || 0);

  const longestVal = (values ?? []).reduce((acc, val) => val.length > acc.length ? val : acc, "");
  if (longestVal != "") axisSize += getTextWidth(longestVal, u.ctx.font);

  return Math.ceil(axisSize);
};

export const getColorLine = (scale: number, label: string): string => getColorFromString(`${scale}${label}`);

export const getDashLine = (group: number): number[] => group <= 1 ? [] : [group*4, group*1.2];
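A quick worked example of the new tick formatter: values outside the `(1e-3, 1e4)` range switch to exponential notation, and the unit is appended to every label. With hypothetical ticks on a `ms` axis:

```ts
formatTicks(u, [0.0005, 0.5, 250, 12000], "ms");
// -> ["5.0e-4 ms", "0.5 ms", "250 ms", "1.2e+4 ms"]
```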
@@ -2,7 +2,7 @@ import dayjs from "dayjs";
import {SetupTooltip} from "./types";
import {getColorLine} from "./helpers";

export const setTooltip = ({u, tooltipIdx, metrics, series, tooltip, tooltipOffset, unit = ""}: SetupTooltip): void => {
  const {seriesIdx, dataIdx} = tooltipIdx;
  if (seriesIdx === null || dataIdx === undefined) return;
  const dataSeries = u.data[seriesIdx][dataIdx];
@@ -25,7 +25,7 @@ export const setTooltip = ({u, tooltipIdx, metrics, series, tooltip, tooltipOffs
  const marker = `<div class="u-tooltip__marker" style="background: ${color}"></div>`;
  tooltip.innerHTML = `<div>${date}</div>
    <div class="u-tooltip-data">
      ${marker}${metric.__name__ || ""}: <b class="u-tooltip-data__value">${dataSeries}</b> ${unit}
    </div>
    <div class="u-tooltip__info">${info}</div>`;
};
@@ -6,6 +6,7 @@ export interface SetupTooltip {
  metrics: MetricResult[],
  series: Series[],
  tooltip: HTMLDivElement,
  unit?: string,
  tooltipOffset: {
    left: number,
    top: number
@@ -2,8 +2,8 @@

DOCKER_NAMESPACE := victoriametrics

ROOT_IMAGE ?= alpine:3.15.2
CERTS_IMAGE := alpine:3.15.2
GO_BUILDER_IMAGE := golang:1.18.0-alpine
BUILDER_IMAGE := local/builder:2.0.0-$(shell echo $(GO_BUILDER_IMAGE) | tr :/ __)-1
BASE_IMAGE := local/base:1.1.3-$(shell echo $(ROOT_IMAGE) | tr :/ __)-$(shell echo $(CERTS_IMAGE) | tr :/ __)
@@ -1,5 +1,5 @@
---
sort: 17
---

# Articles
@@ -1,5 +1,5 @@
---
sort: 20
---

# VictoriaMetrics best practices
@@ -1,5 +1,5 @@
---
sort: 16
---

# CHANGELOG
@@ -15,6 +15,19 @@ The following tip changes can be tested by building VictoriaMetrics components f

## tip

* FEATURE: [vmui](https://docs.victoriametrics.com/#vmui): add pre-defined dashboards for per-job CPU usage, memory usage and disk IO usage. See [this pull request](https://github.com/VictoriaMetrics/VictoriaMetrics/pull/2243) for details.
* FEATURE: add the following command-line flags, which can be used for fine-grained limiting of CPU and memory usage during various API calls:

  * `-search.maxFederateSeries` for limiting the number of time series, which can be returned from [/federate](https://docs.victoriametrics.com/#federation).
  * `-search.maxExportSeries` for limiting the number of time series, which can be returned from [/api/v1/export](https://docs.victoriametrics.com/#how-to-export-time-series).
  * `-search.maxSeries` for limiting the number of time series, which can be returned from [/api/v1/series](https://docs.victoriametrics.com/url-examples.html#apiv1series).
  * `-search.maxTSDBStatusSeries` for limiting the number of time series, which can be scanned during the request to [/api/v1/status/tsdb](https://docs.victoriametrics.com/#tsdb-stats).
  * `-search.maxGraphiteSeries` for limiting the number of time series, which can be scanned during the request to [Graphite Render API](https://docs.victoriametrics.com/#graphite-render-api-usage).

  Previously the `-search.maxUniqueTimeseries` command-line flag was used as a global limit for all these APIs. Now `-search.maxUniqueTimeseries` is used only for limiting the number of time series, which can be scanned during requests to [/api/v1/query](https://docs.victoriametrics.com/url-examples.html#apiv1query) and [/api/v1/query_range](https://docs.victoriametrics.com/url-examples.html#apiv1query_range).

  When using the [cluster version of VictoriaMetrics](https://docs.victoriametrics.com/Cluster-VictoriaMetrics.html), these command-line flags (including `-search.maxUniqueTimeseries`) must be passed to `vmselect` instead of `vmstorage`.
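  For example, relaunching a single-node instance with `-search.maxExportSeries=2000000` (a hypothetical value) doubles the default export limit while leaving the other per-API limits untouched.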

* BUGFIX: return `Content-Type: text/html` response header when requesting `/` HTTP path at VictoriaMetrics components. Previously `text/plain` response header was returned, which could lead to broken page formatting. See [this issue](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2323).

## [v1.75.0](https://github.com/VictoriaMetrics/VictoriaMetrics/releases/tag/v1.75.0)
@@ -1,5 +1,5 @@
---
sort: 12
---

# Case studies and talks
@@ -1,5 +1,5 @@
---
sort: 15
---

# FAQ
@@ -1,5 +1,5 @@
---
sort: 14
---

# MetricsQL
@@ -1,5 +1,5 @@
---
sort: 19
---

# VictoriaMetrics Cluster Per Tenant Statistic
@@ -1,5 +1,5 @@
---
sort: 13
---

# Quick Start
@@ -820,13 +820,13 @@ Send a request to `http://<victoriametrics-addr>:8428/api/v1/export/native?match
where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time series.

On large databases you may experience problems with the limit on the number of time series, which can be exported. In this case you need to adjust the `-search.maxExportSeries` command-line flag:

```bash
# count unique timeseries in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'

# relaunch victoriametrics with search.maxExportSeries more than value from previous command
```

Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
@@ -1835,6 +1835,12 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
  The maximum number of concurrent search requests. It shouldn't be high, since a single request can saturate all the CPU cores. See also -search.maxQueueDuration (default 8)
-search.maxExportDuration duration
  The maximum duration for /api/v1/export call (default 720h0m0s)
-search.maxExportSeries int
  The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage (default 1000000)
-search.maxFederateSeries int
  The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage (default 300000)
-search.maxGraphiteSeries int
  The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage (default 300000)
-search.maxLookback duration
  Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxPointsPerTimeseries int
@@ -1850,12 +1856,16 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
  The maximum number of raw samples a single query can process across all time series. This protects from heavy queries, which select unexpectedly high number of raw samples. See also -search.maxSamplesPerSeries (default 1000000000)
-search.maxSamplesPerSeries int
  The maximum number of raw samples a single query can scan per each time series. This option allows limiting memory usage (default 30000000)
-search.maxSeries int
  The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage (default 10000)
-search.maxStalenessInterval duration
  The maximum interval for staleness calculations. By default it is automatically calculated from the median interval between samples. This flag could be useful for tuning Prometheus data model closer to Influx-style data model. See https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness for details. See also '-search.maxLookback' flag, which has the same meaning due to historical reasons
-search.maxStatusRequestDuration duration
  The maximum duration for /api/v1/status/* requests (default 5m0s)
-search.maxStepForPointsAdjustment duration
  The maximum step when /api/v1/query_range handler adjusts points with timestamps closer than -search.latencyOffset to the current time. The adjustment is needed because such points may contain incomplete data (default 1m0s)
-search.maxTSDBStatusSeries int
  The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage (default 1000000)
-search.maxTagKeys int
  The maximum number of tag keys returned from /api/v1/labels (default 100000)
-search.maxTagValueSuffixesPerSearch int
@@ -1863,7 +1873,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.maxTagValues int
  The maximum number of tag values returned from /api/v1/label/<label_name>/values (default 100000)
-search.maxUniqueTimeseries int
  The maximum number of unique time series, which can be selected during /api/v1/query and /api/v1/query_range queries. This option allows limiting memory usage (default 300000)
-search.minStalenessInterval duration
  The minimum interval for staleness calculations. This flag could be useful for removing gaps on graphs generated from time series with irregular intervals between samples. See also '-search.maxStalenessInterval'
-search.noStaleMarkers
@@ -1,5 +1,5 @@
---
sort: 18
---

# Release process guidance
@@ -824,13 +824,13 @@ Send a request to `http://<victoriametrics-addr>:8428/api/v1/export/native?match
where `<timeseries_selector_for_export>` may contain any [time series selector](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-series-selectors)
for metrics to export. Use `{__name__=~".*"}` selector for fetching all the time series.

On large databases you may experience problems with the limit on the number of time series, which can be exported. In this case you need to adjust the `-search.maxExportSeries` command-line flag:

```bash
# count unique timeseries in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'

# relaunch victoriametrics with search.maxExportSeries more than value from previous command
```

Optional `start` and `end` args may be added to the request in order to limit the time frame for the exported data. These args may contain either
@@ -1839,6 +1839,12 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
  The maximum number of concurrent search requests. It shouldn't be high, since a single request can saturate all the CPU cores. See also -search.maxQueueDuration (default 8)
-search.maxExportDuration duration
  The maximum duration for /api/v1/export call (default 720h0m0s)
-search.maxExportSeries int
  The maximum number of time series, which can be returned from /api/v1/export* APIs. This option allows limiting memory usage (default 1000000)
-search.maxFederateSeries int
  The maximum number of time series, which can be returned from /federate. This option allows limiting memory usage (default 300000)
-search.maxGraphiteSeries int
  The maximum number of time series, which can be scanned during queries to Graphite Render API. See https://docs.victoriametrics.com/#graphite-render-api-usage (default 300000)
-search.maxLookback duration
  Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
-search.maxPointsPerTimeseries int
@@ -1854,12 +1860,16 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
  The maximum number of raw samples a single query can process across all time series. This protects from heavy queries, which select unexpectedly high number of raw samples. See also -search.maxSamplesPerSeries (default 1000000000)
-search.maxSamplesPerSeries int
  The maximum number of raw samples a single query can scan per each time series. This option allows limiting memory usage (default 30000000)
-search.maxSeries int
  The maximum number of time series, which can be returned from /api/v1/series. This option allows limiting memory usage (default 10000)
-search.maxStalenessInterval duration
  The maximum interval for staleness calculations. By default it is automatically calculated from the median interval between samples. This flag could be useful for tuning Prometheus data model closer to Influx-style data model. See https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness for details. See also '-search.maxLookback' flag, which has the same meaning due to historical reasons
-search.maxStatusRequestDuration duration
  The maximum duration for /api/v1/status/* requests (default 5m0s)
-search.maxStepForPointsAdjustment duration
  The maximum step when /api/v1/query_range handler adjusts points with timestamps closer than -search.latencyOffset to the current time. The adjustment is needed because such points may contain incomplete data (default 1m0s)
-search.maxTSDBStatusSeries int
  The maximum number of time series, which can be processed during the call to /api/v1/status/tsdb. This option allows limiting memory usage (default 1000000)
-search.maxTagKeys int
  The maximum number of tag keys returned from /api/v1/labels (default 100000)
-search.maxTagValueSuffixesPerSearch int
@@ -1867,7 +1877,7 @@ Pass `-help` to VictoriaMetrics in order to see the list of supported command-li
-search.maxTagValues int
  The maximum number of tag values returned from /api/v1/label/<label_name>/values (default 100000)
-search.maxUniqueTimeseries int
  The maximum number of unique time series, which can be selected during /api/v1/query and /api/v1/query_range queries. This option allows limiting memory usage (default 300000)
-search.minStalenessInterval duration
  The minimum interval for staleness calculations. This flag could be useful for removing gaps on graphs generated from time series with irregular intervals between samples. See also '-search.maxStalenessInterval'
-search.noStaleMarkers
@@ -1,5 +1,5 @@
---
sort: 22
---

# Guides
@@ -1,5 +1,5 @@
---
sort: 23
---

# VictoriaMetrics Operator
@@ -1,5 +1,5 @@
---
sort: 21
---

# VictoriaMetrics API examples
@@ -1,5 +1,5 @@
---
sort: 11
---

# vmanomaly
10
go.mod
@@ -11,7 +11,7 @@ require (
	github.com/VictoriaMetrics/fasthttp v1.1.0
	github.com/VictoriaMetrics/metrics v1.18.1
	github.com/VictoriaMetrics/metricsql v0.40.0
	github.com/aws/aws-sdk-go v1.43.26
	github.com/cespare/xxhash/v2 v2.1.2
	github.com/cheggaaa/pb/v3 v3.0.8
	github.com/cpuguy83/go-md2man/v2 v2.0.1 // indirect
@@ -31,9 +31,9 @@ require (
	github.com/valyala/fasttemplate v1.2.1
	github.com/valyala/gozstd v1.16.0
	github.com/valyala/quicktemplate v1.7.0
	golang.org/x/net v0.0.0-20220325170049-de3da57026de
	golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a
	golang.org/x/sys v0.0.0-20220325203850-36772127a21f
	google.golang.org/api v0.73.0
	gopkg.in/yaml.v2 v2.4.0
)
@@ -68,8 +68,8 @@ require (
	golang.org/x/text v0.3.7 // indirect
	golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
	google.golang.org/appengine v1.6.7 // indirect
	google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb // indirect
	google.golang.org/grpc v1.45.0 // indirect
	google.golang.org/protobuf v1.28.0 // indirect
	gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
)
18
go.sum
@@ -165,8 +165,8 @@ github.com/aws/aws-sdk-go v1.30.12/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZve
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/aws/aws-sdk-go v1.35.31/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.40.45/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
github.com/aws/aws-sdk-go v1.43.26 h1:/ABcm/2xp+Vu+iUx8+TmlwXMGjO7fmZqJMoZjml4y/4=
github.com/aws/aws-sdk-go v1.43.26/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/aws/aws-sdk-go-v2 v1.9.1/go.mod h1:cK/D0BBs0b/oWPIcX/Z/obahJK1TT7IPVjy53i/mX/4=
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.8.1/go.mod h1:CM+19rL1+4dFWnOQKwDc7H1KwXTz+h61oUSHyhV0b3o=
@@ -1178,8 +1178,9 @@ golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qx
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210917221730-978cfadd31cf/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220325170049-de3da57026de h1:pZB1TWnKi+o4bENlbzAgLrEbY4RMYmUIRobMcSmfeYc=
golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1316,8 +1317,8 @@ golang.org/x/sys v0.0.0-20220204135822-1c1b9b1eba6a/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220325203850-36772127a21f h1:TrmogKRsSOxRMJbLYGrB4SBbW+LJcEllYBLME5Zk5pU=
golang.org/x/sys v0.0.0-20220325203850-36772127a21f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -1566,8 +1567,8 @@ google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2
google.golang.org/genproto v0.0.0-20220222213610-43724f9ea8cf/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220310185008-1973136f34c6/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb h1:0m9wktIpOxGw+SSKmydXWB3Z3GTfcPP6+q75HCQa6HI=
google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb/go.mod h1:hAL49I2IFola2sVEjAn7MEwsja0xp51I0tlGAf9hz4E=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
@@ -1617,8 +1618,9 @@ google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGj
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscLw=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -133,20 +133,20 @@ func TestCacheConcurrentAccess(t *testing.T) {
	var wg sync.WaitGroup
	wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func(worker int) {
			defer wg.Done()
			testCacheSetGet(c, worker)
		}(i)
	}
	wg.Wait()
}

func testCacheSetGet(c *Cache, worker int) {
	for i := 0; i < 1000; i++ {
		part := (interface{})(i)
		b := testBlock{}
		k := Key{
			Offset: uint64(worker*1000 + i),
			Part:   part,
		}
		c.PutBlock(k, &b)
327
lib/lrucache/lrucache.go
Normal file
@@ -0,0 +1,327 @@
package lrucache

import (
	"container/heap"
	"sync"
	"sync/atomic"
	"time"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
	"github.com/VictoriaMetrics/VictoriaMetrics/lib/fasttime"
	xxhash "github.com/cespare/xxhash/v2"
)

// Cache caches Entry entries.
//
// Call NewCache() for creating new Cache.
type Cache struct {
	shards []*cache

	cleanerMustStopCh chan struct{}
	cleanerStoppedCh  chan struct{}
}

// NewCache creates new cache.
//
// Cache size in bytes is limited by the value returned by getMaxSizeBytes() callback.
// Call MustStop() in order to free up resources occupied by Cache.
func NewCache(getMaxSizeBytes func() int) *Cache {
	cpusCount := cgroup.AvailableCPUs()
	shardsCount := cgroup.AvailableCPUs()
	// Increase the number of shards with the increased number of available CPU cores.
	// This should reduce contention on per-shard mutexes.
	multiplier := cpusCount
	if multiplier > 16 {
		multiplier = 16
	}
	shardsCount *= multiplier
|
||||||
|
shards := make([]*cache, shardsCount)
|
||||||
|
getMaxShardBytes := func() int {
|
||||||
|
n := getMaxSizeBytes()
|
||||||
|
return n / shardsCount
|
||||||
|
}
|
||||||
|
for i := range shards {
|
||||||
|
shards[i] = newCache(getMaxShardBytes)
|
||||||
|
}
|
||||||
|
c := &Cache{
|
||||||
|
shards: shards,
|
||||||
|
cleanerMustStopCh: make(chan struct{}),
|
||||||
|
cleanerStoppedCh: make(chan struct{}),
|
||||||
|
}
|
||||||
|
go c.cleaner()
|
||||||
|
return c
|
||||||
|
}
|
||||||
|
|
||||||
|
// MustStop frees up resources occupied by c.
|
||||||
|
func (c *Cache) MustStop() {
|
||||||
|
close(c.cleanerMustStopCh)
|
||||||
|
<-c.cleanerStoppedCh
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetEntry returns an Entry for the given key k from c.
|
||||||
|
func (c *Cache) GetEntry(k string) Entry {
|
||||||
|
idx := uint64(0)
|
||||||
|
if len(c.shards) > 1 {
|
||||||
|
h := hashUint64(k)
|
||||||
|
idx = h % uint64(len(c.shards))
|
||||||
|
}
|
||||||
|
shard := c.shards[idx]
|
||||||
|
return shard.GetEntry(k)
|
||||||
|
}
|
||||||
|
|
||||||
|
// PutEntry puts the given Entry e under the given key k into c.
|
||||||
|
func (c *Cache) PutEntry(k string, e Entry) {
|
||||||
|
idx := uint64(0)
|
||||||
|
if len(c.shards) > 1 {
|
||||||
|
h := hashUint64(k)
|
||||||
|
idx = h % uint64(len(c.shards))
|
||||||
|
}
|
||||||
|
shard := c.shards[idx]
|
||||||
|
shard.PutEntry(k, e)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Len returns the number of blocks in the cache c.
|
||||||
|
func (c *Cache) Len() int {
|
||||||
|
n := 0
|
||||||
|
for _, shard := range c.shards {
|
||||||
|
n += shard.Len()
|
||||||
|
}
|
||||||
|
return n
|
||||||
|
}
|
||||||
|
|
||||||
|
// SizeBytes returns an approximate size in bytes of all the blocks stored in the cache c.
|
||||||
|
func (c *Cache) SizeBytes() int {
|
||||||
|
n := 0
|
||||||
|
for _, shard := range c.shards {
|
||||||
|
n += shard.SizeBytes()
|
||||||
|
}
|
||||||
|
return n
|
||||||
|
}
|
||||||
|
|
||||||
|
// SizeMaxBytes returns the max allowed size in bytes for c.
|
||||||
|
func (c *Cache) SizeMaxBytes() int {
|
||||||
|
n := 0
|
||||||
|
for _, shard := range c.shards {
|
||||||
|
n += shard.SizeMaxBytes()
|
||||||
|
}
|
||||||
|
return n
|
||||||
|
}
|
||||||
|
|
||||||
|
// Requests returns the number of requests served by c.
|
||||||
|
func (c *Cache) Requests() uint64 {
|
||||||
|
n := uint64(0)
|
||||||
|
for _, shard := range c.shards {
|
||||||
|
n += shard.Requests()
|
||||||
|
}
|
||||||
|
return n
|
||||||
|
}
|
||||||
|
|
||||||
|
// Misses returns the number of cache misses for c.
|
||||||
|
func (c *Cache) Misses() uint64 {
|
||||||
|
n := uint64(0)
|
||||||
|
for _, shard := range c.shards {
|
||||||
|
n += shard.Misses()
|
||||||
|
}
|
||||||
|
return n
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *Cache) cleaner() {
|
||||||
|
ticker := time.NewTicker(53 * time.Second)
|
||||||
|
defer ticker.Stop()
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case <-c.cleanerMustStopCh:
|
||||||
|
close(c.cleanerStoppedCh)
|
||||||
|
return
|
||||||
|
case <-ticker.C:
|
||||||
|
c.cleanByTimeout()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *Cache) cleanByTimeout() {
|
||||||
|
for _, shard := range c.shards {
|
||||||
|
shard.cleanByTimeout()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
type cache struct {
|
||||||
|
// Atomically updated fields must go first in the struct, so they are properly
|
||||||
|
// aligned to 8 bytes on 32-bit architectures.
|
||||||
|
// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/212
|
||||||
|
requests uint64
|
||||||
|
misses uint64
|
||||||
|
|
||||||
|
// sizeBytes contains an approximate size for all the blocks stored in the cache.
|
||||||
|
sizeBytes int64
|
||||||
|
|
||||||
|
// getMaxSizeBytes() is a callback, which returns the maximum allowed cache size in bytes.
|
||||||
|
getMaxSizeBytes func() int
|
||||||
|
|
||||||
|
// mu protects all the fields below.
|
||||||
|
mu sync.Mutex
|
||||||
|
|
||||||
|
// m contains cached entries
|
||||||
|
m map[string]*cacheEntry
|
||||||
|
|
||||||
|
// The heap for removing the least recently used entries from m.
|
||||||
|
lah lastAccessHeap
|
||||||
|
}
|
||||||
|
|
||||||
|
func hashUint64(s string) uint64 {
|
||||||
|
b := bytesutil.ToUnsafeBytes(s)
|
||||||
|
return xxhash.Sum64(b)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Entry is an item, which may be cached in the Cache.
|
||||||
|
type Entry interface {
|
||||||
|
// SizeBytes must return the approximate size of the given entry in bytes
|
||||||
|
SizeBytes() int
|
||||||
|
}
|
||||||
|
|
||||||
|
type cacheEntry struct {
|
||||||
|
// The timestamp in seconds for the last access to the given entry.
|
||||||
|
lastAccessTime uint64
|
||||||
|
|
||||||
|
// heapIdx is the index for the entry in lastAccessHeap.
|
||||||
|
heapIdx int
|
||||||
|
|
||||||
|
// k contains the associated key for the given entry.
|
||||||
|
k string
|
||||||
|
|
||||||
|
// e contains the cached entry.
|
||||||
|
e Entry
|
||||||
|
}
|
||||||
|
|
||||||
|
func newCache(getMaxSizeBytes func() int) *cache {
|
||||||
|
var c cache
|
||||||
|
c.getMaxSizeBytes = getMaxSizeBytes
|
||||||
|
c.m = make(map[string]*cacheEntry)
|
||||||
|
return &c
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) updateSizeBytes(n int) {
|
||||||
|
atomic.AddInt64(&c.sizeBytes, int64(n))
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) cleanByTimeout() {
|
||||||
|
// Delete items accessed more than three minutes ago.
|
||||||
|
// This time should be enough for repeated queries.
|
||||||
|
lastAccessTime := fasttime.UnixTimestamp() - 3*60
|
||||||
|
c.mu.Lock()
|
||||||
|
defer c.mu.Unlock()
|
||||||
|
|
||||||
|
for len(c.lah) > 0 {
|
||||||
|
if lastAccessTime < c.lah[0].lastAccessTime {
|
||||||
|
break
|
||||||
|
}
|
||||||
|
c.removeLeastRecentlyAccessedItem()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) GetEntry(k string) Entry {
|
||||||
|
atomic.AddUint64(&c.requests, 1)
|
||||||
|
c.mu.Lock()
|
||||||
|
defer c.mu.Unlock()
|
||||||
|
|
||||||
|
ce := c.m[k]
|
||||||
|
if ce == nil {
|
||||||
|
atomic.AddUint64(&c.misses, 1)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
currentTime := fasttime.UnixTimestamp()
|
||||||
|
if ce.lastAccessTime != currentTime {
|
||||||
|
ce.lastAccessTime = currentTime
|
||||||
|
heap.Fix(&c.lah, ce.heapIdx)
|
||||||
|
}
|
||||||
|
return ce.e
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) PutEntry(k string, e Entry) {
|
||||||
|
c.mu.Lock()
|
||||||
|
defer c.mu.Unlock()
|
||||||
|
|
||||||
|
ce := c.m[k]
|
||||||
|
if ce != nil {
|
||||||
|
// The entry has been already registered by concurrent goroutine.
|
||||||
|
return
|
||||||
|
}
|
||||||
|
ce = &cacheEntry{
|
||||||
|
lastAccessTime: fasttime.UnixTimestamp(),
|
||||||
|
k: k,
|
||||||
|
e: e,
|
||||||
|
}
|
||||||
|
heap.Push(&c.lah, ce)
|
||||||
|
c.m[k] = ce
|
||||||
|
c.updateSizeBytes(e.SizeBytes())
|
||||||
|
maxSizeBytes := c.getMaxSizeBytes()
|
||||||
|
for c.SizeBytes() > maxSizeBytes && len(c.lah) > 0 {
|
||||||
|
c.removeLeastRecentlyAccessedItem()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) removeLeastRecentlyAccessedItem() {
|
||||||
|
ce := c.lah[0]
|
||||||
|
c.updateSizeBytes(-ce.e.SizeBytes())
|
||||||
|
delete(c.m, ce.k)
|
||||||
|
heap.Pop(&c.lah)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) Len() int {
|
||||||
|
c.mu.Lock()
|
||||||
|
defer c.mu.Unlock()
|
||||||
|
return len(c.m)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) SizeBytes() int {
|
||||||
|
return int(atomic.LoadInt64(&c.sizeBytes))
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) SizeMaxBytes() int {
|
||||||
|
return c.getMaxSizeBytes()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) Requests() uint64 {
|
||||||
|
return atomic.LoadUint64(&c.requests)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *cache) Misses() uint64 {
|
||||||
|
return atomic.LoadUint64(&c.misses)
|
||||||
|
}
|
||||||
|
|
||||||
|
// lastAccessHeap implements heap.Interface
|
||||||
|
type lastAccessHeap []*cacheEntry
|
||||||
|
|
||||||
|
func (lah *lastAccessHeap) Len() int {
|
||||||
|
return len(*lah)
|
||||||
|
}
|
||||||
|
func (lah *lastAccessHeap) Swap(i, j int) {
|
||||||
|
h := *lah
|
||||||
|
a := h[i]
|
||||||
|
b := h[j]
|
||||||
|
a.heapIdx = j
|
||||||
|
b.heapIdx = i
|
||||||
|
h[i] = b
|
||||||
|
h[j] = a
|
||||||
|
}
|
||||||
|
func (lah *lastAccessHeap) Less(i, j int) bool {
|
||||||
|
h := *lah
|
||||||
|
return h[i].lastAccessTime < h[j].lastAccessTime
|
||||||
|
}
|
||||||
|
func (lah *lastAccessHeap) Push(x interface{}) {
|
||||||
|
e := x.(*cacheEntry)
|
||||||
|
h := *lah
|
||||||
|
e.heapIdx = len(h)
|
||||||
|
*lah = append(h, e)
|
||||||
|
}
|
||||||
|
func (lah *lastAccessHeap) Pop() interface{} {
|
||||||
|
h := *lah
|
||||||
|
e := h[len(h)-1]
|
||||||
|
|
||||||
|
// Remove the reference to deleted entry, so Go GC could free up memory occupied by the deleted entry.
|
||||||
|
h[len(h)-1] = nil
|
||||||
|
|
||||||
|
*lah = h[:len(h)-1]
|
||||||
|
return e
|
||||||
|
}
|
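Since the new lrucache package is the centerpiece of this commit, a minimal usage sketch may help readers of the diff. Everything below is illustrative and not part of the commit: the sizedString type and the fixed 1 MiB limit are assumptions; only the Cache/Entry API comes from the file above.

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/lrucache"
)

// sizedString is a hypothetical Entry implementation used only for illustration.
type sizedString string

// SizeBytes implements the lrucache.Entry interface.
func (s sizedString) SizeBytes() int {
	return len(s)
}

func main() {
	// The size limit is read through a callback, so it may change at runtime.
	c := lrucache.NewCache(func() int { return 1 << 20 }) // 1 MiB, arbitrary
	defer c.MustStop()                                    // stops the background cleaner goroutine

	c.PutEntry("greeting", sizedString("hello"))
	if e := c.GetEntry("greeting"); e != nil {
		fmt.Println(e.(sizedString)) // type-assert back to the concrete type
	}
	fmt.Println(c.Len(), c.SizeBytes(), c.Requests(), c.Misses())
}
```

Note the design choice visible in the file: entries older than three minutes are evicted by a periodic cleaner, and LRU eviction on insert only kicks in once SizeBytes() exceeds the callback's limit.
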
126 lib/lrucache/lrucache_test.go Normal file
@@ -0,0 +1,126 @@
package lrucache

import (
	"fmt"
	"sync"
	"testing"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/cgroup"
)

func TestCache(t *testing.T) {
	sizeMaxBytes := 64 * 1024
	// Multiply sizeMaxBytes by the square of available CPU cores
	// in order to get proper distribution of sizes between cache shards.
	// See https://github.com/VictoriaMetrics/VictoriaMetrics/issues/2204
	cpus := cgroup.AvailableCPUs()
	sizeMaxBytes *= cpus * cpus
	getMaxSize := func() int {
		return sizeMaxBytes
	}
	c := NewCache(getMaxSize)
	defer c.MustStop()
	if n := c.SizeBytes(); n != 0 {
		t.Fatalf("unexpected SizeBytes(); got %d; want %d", n, 0)
	}
	if n := c.SizeMaxBytes(); n != sizeMaxBytes {
		t.Fatalf("unexpected SizeMaxBytes(); got %d; want %d", n, sizeMaxBytes)
	}
	k := "foobar"
	var e testEntry
	entrySize := e.SizeBytes()
	// Put a single entry into cache
	c.PutEntry(k, &e)
	if n := c.Len(); n != 1 {
		t.Fatalf("unexpected number of items in the cache; got %d; want %d", n, 1)
	}
	if n := c.SizeBytes(); n != entrySize {
		t.Fatalf("unexpected SizeBytes(); got %d; want %d", n, entrySize)
	}
	if n := c.Requests(); n != 0 {
		t.Fatalf("unexpected number of requests; got %d; want %d", n, 0)
	}
	if n := c.Misses(); n != 0 {
		t.Fatalf("unexpected number of misses; got %d; want %d", n, 0)
	}
	// Obtain this entry from the cache
	if e1 := c.GetEntry(k); e1 != &e {
		t.Fatalf("unexpected entry obtained; got %v; want %v", e1, &e)
	}
	if n := c.Requests(); n != 1 {
		t.Fatalf("unexpected number of requests; got %d; want %d", n, 1)
	}
	if n := c.Misses(); n != 0 {
		t.Fatalf("unexpected number of misses; got %d; want %d", n, 0)
	}
	// Obtain non-existing entry from the cache
	if e1 := c.GetEntry("non-existing-key"); e1 != nil {
		t.Fatalf("unexpected non-nil block obtained for non-existing key: %v", e1)
	}
	if n := c.Requests(); n != 2 {
		t.Fatalf("unexpected number of requests; got %d; want %d", n, 2)
	}
	if n := c.Misses(); n != 1 {
		t.Fatalf("unexpected number of misses; got %d; want %d", n, 1)
	}
	// Store the entry again.
	c.PutEntry(k, &e)
	if n := c.SizeBytes(); n != entrySize {
		t.Fatalf("unexpected SizeBytes(); got %d; want %d", n, entrySize)
	}
	if e1 := c.GetEntry(k); e1 != &e {
		t.Fatalf("unexpected entry obtained; got %v; want %v", e1, &e)
	}
	if n := c.Requests(); n != 3 {
		t.Fatalf("unexpected number of requests; got %d; want %d", n, 3)
	}
	if n := c.Misses(); n != 1 {
		t.Fatalf("unexpected number of misses; got %d; want %d", n, 1)
	}

	// Manually clean the cache. The entry shouldn't be deleted because it was recently accessed.
	c.cleanByTimeout()
	if n := c.SizeBytes(); n != entrySize {
		t.Fatalf("unexpected SizeBytes(); got %d; want %d", n, entrySize)
	}
}

func TestCacheConcurrentAccess(t *testing.T) {
	const sizeMaxBytes = 16 * 1024 * 1024
	getMaxSize := func() int {
		return sizeMaxBytes
	}
	c := NewCache(getMaxSize)
	defer c.MustStop()

	workers := 5
	var wg sync.WaitGroup
	wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func(worker int) {
			defer wg.Done()
			testCacheSetGet(c, worker)
		}(i)
	}
	wg.Wait()
}

func testCacheSetGet(c *Cache, worker int) {
	for i := 0; i < 1000; i++ {
		e := testEntry{}
		k := fmt.Sprintf("key_%d_%d", worker, i)
		c.PutEntry(k, &e)
		if e1 := c.GetEntry(k); e1 != &e {
			panic(fmt.Errorf("unexpected entry obtained; got %v; want %v", e1, &e))
		}
		if e1 := c.GetEntry("non-existing-key"); e1 != nil {
			panic(fmt.Errorf("unexpected non-nil entry obtained: %v", e1))
		}
	}
}

type testEntry struct{}

func (tb *testEntry) SizeBytes() int {
	return 42
}

@@ -1315,9 +1315,9 @@ func (is *indexSearch) getSeriesCount() (uint64, error) {
 }
 
 // GetTSDBStatusWithFiltersForDate returns topN entries for tsdb status for the given tfss and the given date.
-func (db *indexDB) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN int, deadline uint64) (*TSDBStatus, error) {
+func (db *indexDB) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*TSDBStatus, error) {
 	is := db.getIndexSearch(deadline)
-	status, err := is.getTSDBStatusWithFiltersForDate(tfss, date, topN)
+	status, err := is.getTSDBStatusWithFiltersForDate(tfss, date, topN, maxMetrics)
 	db.putIndexSearch(is)
 	if err != nil {
 		return nil, err
@@ -1327,7 +1327,7 @@ func (db *indexDB) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint
 	}
 	ok := db.doExtDB(func(extDB *indexDB) {
 		is := extDB.getIndexSearch(deadline)
-		status, err = is.getTSDBStatusWithFiltersForDate(tfss, date, topN)
+		status, err = is.getTSDBStatusWithFiltersForDate(tfss, date, topN, maxMetrics)
 		extDB.putIndexSearch(is)
 	})
 	if ok && err != nil {
@@ -1337,14 +1337,14 @@ func (db *indexDB) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint
 	}
 
 // getTSDBStatusWithFiltersForDate returns topN entries for tsdb status for the given tfss and the given date.
-func (is *indexSearch) getTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN int) (*TSDBStatus, error) {
+func (is *indexSearch) getTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN, maxMetrics int) (*TSDBStatus, error) {
 	var filter *uint64set.Set
 	if len(tfss) > 0 {
 		tr := TimeRange{
 			MinTimestamp: int64(date) * msecPerDay,
 			MaxTimestamp: int64(date+1) * msecPerDay,
 		}
-		metricIDs, err := is.searchMetricIDsInternal(tfss, tr, 2e9)
+		metricIDs, err := is.searchMetricIDsInternal(tfss, tr, maxMetrics)
 		if err != nil {
 			return nil, err
 		}

@@ -1766,7 +1766,7 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
 	}
 
 	// Check GetTSDBStatusWithFiltersForDate with nil filters.
-	status, err := db.GetTSDBStatusWithFiltersForDate(nil, baseDate, 5, noDeadline)
+	status, err := db.GetTSDBStatusWithFiltersForDate(nil, baseDate, 5, 1e6, noDeadline)
 	if err != nil {
 		t.Fatalf("error in GetTSDBStatusWithFiltersForDate with nil filters: %s", err)
 	}
@@ -1834,7 +1834,7 @@ func TestSearchTSIDWithTimeRange(t *testing.T) {
 	if err := tfs.Add([]byte("day"), []byte("0"), false, false); err != nil {
 		t.Fatalf("cannot add filter: %s", err)
 	}
-	status, err = db.GetTSDBStatusWithFiltersForDate([]*TagFilters{tfs}, baseDate, 5, noDeadline)
+	status, err = db.GetTSDBStatusWithFiltersForDate([]*TagFilters{tfs}, baseDate, 5, 1e6, noDeadline)
 	if err != nil {
 		t.Fatalf("error in GetTSDBStatusWithFiltersForDate: %s", err)
 	}

@@ -226,17 +226,27 @@ func (s *Search) NextMetricBlock() bool {
 
 // SearchQuery is used for sending search queries from vmselect to vmstorage.
 type SearchQuery struct {
+	// The time range for searching time series
 	MinTimestamp int64
 	MaxTimestamp int64
-	TagFilterss [][]TagFilter
+
+	// Tag filters for the search query
+	TagFilterss [][]TagFilter
+
+	// The maximum number of time series the search query can return.
+	MaxMetrics int
 }
 
 // NewSearchQuery creates new search query for the given args.
-func NewSearchQuery(start, end int64, tagFilterss [][]TagFilter) *SearchQuery {
+func NewSearchQuery(start, end int64, tagFilterss [][]TagFilter, maxMetrics int) *SearchQuery {
+	if maxMetrics <= 0 {
+		maxMetrics = 2e9
+	}
 	return &SearchQuery{
 		MinTimestamp: start,
 		MaxTimestamp: end,
 		TagFilterss:  tagFilterss,
+		MaxMetrics:   maxMetrics,
 	}
 }

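For context, here is a hedged sketch of how a caller might build a SearchQuery with the new limit. The tag filter literal and the 300000 value are illustrative assumptions; only NewSearchQuery, MaxMetrics, and the 2e9 fallback come from the hunk above, and the TagFilter field names follow this repository's storage package.

```go
package main

import (
	"fmt"

	"github.com/VictoriaMetrics/VictoriaMetrics/lib/storage"
)

func main() {
	// Select series by metric name; field names follow storage.TagFilter.
	tfss := [][]storage.TagFilter{{
		{Key: []byte("__name__"), Value: []byte("http_requests_total")},
	}}
	// The last argument caps how many series the query may return;
	// values <= 0 fall back to the effectively unlimited 2e9.
	sq := storage.NewSearchQuery(0, 1e12, tfss, 300000)
	fmt.Println(sq.MaxMetrics) // 300000
}
```

This is the plumbing behind the -search.maxExportSeries, -search.maxFederateSeries and -search.maxGraphiteSeries flags: each endpoint passes its own cap down to vmstorage instead of the previous hard-coded limit.
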
@@ -1468,8 +1468,8 @@ func (s *Storage) GetSeriesCount(deadline uint64) (uint64, error) {
 }
 
 // GetTSDBStatusWithFiltersForDate returns TSDB status data for /api/v1/status/tsdb with match[] filters.
-func (s *Storage) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN int, deadline uint64) (*TSDBStatus, error) {
-	return s.idb().GetTSDBStatusWithFiltersForDate(tfss, date, topN, deadline)
+func (s *Storage) GetTSDBStatusWithFiltersForDate(tfss []*TagFilters, date uint64, topN, maxMetrics int, deadline uint64) (*TSDBStatus, error) {
+	return s.idb().GetTSDBStatusWithFiltersForDate(tfss, date, topN, maxMetrics, deadline)
 }
 
 // MetricRow is a metric to insert into storage.

@@ -9,8 +9,11 @@ import (
 	"strings"
 	"sync"
 	"sync/atomic"
+	"unsafe"
 
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/bytesutil"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/logger"
+	"github.com/VictoriaMetrics/VictoriaMetrics/lib/lrucache"
 	"github.com/VictoriaMetrics/VictoriaMetrics/lib/memory"
 )
@@ -471,42 +474,42 @@ func (tf *tagFilter) matchSuffix(b []byte) (bool, error) {
 
 // RegexpCacheSize returns the number of cached regexps for tag filters.
 func RegexpCacheSize() int {
-	regexpCacheLock.RLock()
-	n := len(regexpCacheMap)
-	regexpCacheLock.RUnlock()
-	return n
+	return regexpCache.Len()
 }
 
-// RegexpCacheRequests returns the number of requests to regexp cache.
+// RegexpCacheSizeBytes returns an approximate size in bytes for the cached regexps for tag filters.
+func RegexpCacheSizeBytes() int {
+	return regexpCache.SizeBytes()
+}
+
+// RegexpCacheMaxSizeBytes returns the maximum size in bytes for the cached regexps for tag filters.
+func RegexpCacheMaxSizeBytes() int {
+	return regexpCache.SizeMaxBytes()
+}
+
+// RegexpCacheRequests returns the number of requests to regexp cache for tag filters.
 func RegexpCacheRequests() uint64 {
-	return atomic.LoadUint64(&regexpCacheRequests)
+	return regexpCache.Requests()
 }
 
-// RegexpCacheMisses returns the number of cache misses for regexp cache.
+// RegexpCacheMisses returns the number of cache misses for regexp cache for tag filters.
 func RegexpCacheMisses() uint64 {
-	return atomic.LoadUint64(&regexpCacheMisses)
+	return regexpCache.Misses()
 }
 
-func getRegexpFromCache(expr []byte) (regexpCacheValue, error) {
-	atomic.AddUint64(&regexpCacheRequests, 1)
-	regexpCacheLock.RLock()
-	rcv, ok := regexpCacheMap[string(expr)]
-	regexpCacheLock.RUnlock()
-	if ok {
+func getRegexpFromCache(expr []byte) (*regexpCacheValue, error) {
+	if rcv := regexpCache.GetEntry(bytesutil.ToUnsafeString(expr)); rcv != nil {
 		// Fast path - the regexp found in the cache.
-		return rcv, nil
+		return rcv.(*regexpCacheValue), nil
 	}
 
 	// Slow path - build the regexp.
-	atomic.AddUint64(&regexpCacheMisses, 1)
 	exprOrig := string(expr)
 
 	expr = []byte(tagCharsRegexpEscaper.Replace(exprOrig))
 	exprStr := fmt.Sprintf("^(%s)$", expr)
 	re, err := regexp.Compile(exprStr)
 	if err != nil {
-		return rcv, fmt.Errorf("invalid regexp %q: %w", exprStr, err)
+		return nil, fmt.Errorf("invalid regexp %q: %w", exprStr, err)
 	}
 
 	sExpr := string(expr)
@@ -521,26 +524,16 @@ func getRegexpFromCache(expr []byte) (regexpCacheValue, error) {
 	}
 
 	// Put the reMatch in the cache.
+	var rcv regexpCacheValue
 	rcv.orValues = orValues
 	rcv.reMatch = reMatch
 	rcv.reCost = reCost
 	rcv.literalSuffix = literalSuffix
+	// heuristic for rcv in-memory size
+	rcv.sizeBytes = 8*len(exprOrig) + len(literalSuffix)
+	regexpCache.PutEntry(exprOrig, &rcv)
 
-	regexpCacheLock.Lock()
-	if overflow := len(regexpCacheMap) - getMaxRegexpCacheSize(); overflow > 0 {
-		overflow = int(float64(len(regexpCacheMap)) * 0.1)
-		for k := range regexpCacheMap {
-			delete(regexpCacheMap, k)
-			overflow--
-			if overflow <= 0 {
-				break
-			}
-		}
-	}
-	regexpCacheMap[exprOrig] = rcv
-	regexpCacheLock.Unlock()
-
-	return rcv, nil
+	return &rcv, nil
 }
 
 func newMatchFuncForOrSuffixes(orValues []string) (reMatch func(b []byte) bool, reCost uint64) {
@@ -888,11 +881,7 @@ var tagCharsReverseRegexpEscaper = strings.NewReplacer(
 
 func getMaxRegexpCacheSize() int {
 	maxRegexpCacheSizeOnce.Do(func() {
-		n := memory.Allowed() / 1024 / 1024
-		if n < 100 {
-			n = 100
-		}
-		maxRegexpCacheSize = n
+		maxRegexpCacheSize = int(0.05 * float64(memory.Allowed()))
 	})
 	return maxRegexpCacheSize
 }
@@ -903,11 +892,7 @@ var (
 )
 
 var (
-	regexpCacheMap  = make(map[string]regexpCacheValue)
-	regexpCacheLock sync.RWMutex
-
-	regexpCacheRequests uint64
-	regexpCacheMisses   uint64
+	regexpCache = lrucache.NewCache(getMaxRegexpCacheSize)
 )
 
 type regexpCacheValue struct {
@@ -915,15 +900,18 @@ type regexpCacheValue struct {
 	reMatch func(b []byte) bool
 	reCost uint64
 	literalSuffix string
+	sizeBytes int
+}
+
+// SizeBytes implements lrucache.Entry interface
+func (rcv *regexpCacheValue) SizeBytes() int {
+	return rcv.sizeBytes
 }
 
 func getRegexpPrefix(b []byte) ([]byte, []byte) {
 	// Fast path - search the prefix in the cache.
-	prefixesCacheLock.RLock()
-	ps, ok := prefixesCacheMap[string(b)]
-	prefixesCacheLock.RUnlock()
-	if ok {
+	if ps := prefixesCache.GetEntry(bytesutil.ToUnsafeString(b)); ps != nil {
+		ps := ps.(*prefixSuffix)
 		return ps.prefix, ps.suffix
 	}
 
@@ -931,33 +919,18 @@ func getRegexpPrefix(b []byte) ([]byte, []byte) {
 	prefix, suffix := extractRegexpPrefix(b)
 
 	// Put the prefix and the suffix to the cache.
-	prefixesCacheLock.Lock()
-	if overflow := len(prefixesCacheMap) - getMaxPrefixesCacheSize(); overflow > 0 {
-		overflow = int(float64(len(prefixesCacheMap)) * 0.1)
-		for k := range prefixesCacheMap {
-			delete(prefixesCacheMap, k)
-			overflow--
-			if overflow <= 0 {
-				break
-			}
-		}
-	}
-	prefixesCacheMap[string(b)] = prefixSuffix{
+	ps := &prefixSuffix{
 		prefix: prefix,
 		suffix: suffix,
 	}
-	prefixesCacheLock.Unlock()
+	prefixesCache.PutEntry(string(b), ps)
 
 	return prefix, suffix
 }
 
 func getMaxPrefixesCacheSize() int {
 	maxPrefixesCacheSizeOnce.Do(func() {
-		n := memory.Allowed() / 1024 / 1024
-		if n < 100 {
-			n = 100
-		}
-		maxPrefixesCacheSize = n
+		maxPrefixesCacheSize = int(0.05 * float64(memory.Allowed()))
 	})
 	return maxPrefixesCacheSize
 }
@@ -968,15 +941,44 @@ var (
 )
 
 var (
-	prefixesCacheMap  = make(map[string]prefixSuffix)
-	prefixesCacheLock sync.RWMutex
+	prefixesCache = lrucache.NewCache(getMaxPrefixesCacheSize)
 )
 
+// RegexpPrefixesCacheSize returns the number of cached regexp prefixes for tag filters.
+func RegexpPrefixesCacheSize() int {
+	return prefixesCache.Len()
+}
+
+// RegexpPrefixesCacheSizeBytes returns an approximate size in bytes for cached regexp prefixes for tag filters.
+func RegexpPrefixesCacheSizeBytes() int {
+	return prefixesCache.SizeBytes()
+}
+
+// RegexpPrefixesCacheMaxSizeBytes returns the maximum size in bytes for cached regexp prefixes for tag filters.
+func RegexpPrefixesCacheMaxSizeBytes() int {
+	return prefixesCache.SizeMaxBytes()
+}
+
+// RegexpPrefixesCacheRequests returns the number of requests to regexp prefixes cache.
+func RegexpPrefixesCacheRequests() uint64 {
+	return prefixesCache.Requests()
+}
+
+// RegexpPrefixesCacheMisses returns the number of cache misses for regexp prefixes cache.
+func RegexpPrefixesCacheMisses() uint64 {
+	return prefixesCache.Misses()
+}
+
 type prefixSuffix struct {
 	prefix []byte
 	suffix []byte
 }
 
+// SizeBytes implements lrucache.Entry interface
+func (ps *prefixSuffix) SizeBytes() int {
+	return cap(ps.prefix) + cap(ps.suffix) + int(unsafe.Sizeof(*ps))
+}
+
 func extractRegexpPrefix(b []byte) ([]byte, []byte) {
 	sre, err := syntax.Parse(string(b), syntax.Perl)
 	if err != nil {

50 vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go generated vendored
@@ -6481,6 +6481,9 @@ var awsPartition = partition{
 			endpointKey{
 				Region: "ap-southeast-2",
 			}: endpoint{},
+			endpointKey{
+				Region: "ap-southeast-3",
+			}: endpoint{},
 			endpointKey{
 				Region: "ca-central-1",
 			}: endpoint{},
@@ -9563,6 +9566,13 @@ var awsPartition = partition{
 			}: endpoint{},
 		},
 	},
+	"gamesparks": service{
+		Endpoints: serviceEndpoints{
+			endpointKey{
+				Region: "us-east-1",
+			}: endpoint{},
+		},
+	},
 	"glacier": service{
 		Defaults: endpointDefaults{
 			defaultKey{}: endpoint{
@@ -26425,6 +26435,46 @@ var awsusgovPartition = partition{
 		},
 	},
+	"meetings-chime": service{
+		Endpoints: serviceEndpoints{
+			endpointKey{
+				Region: "us-gov-east-1",
+			}: endpoint{},
+			endpointKey{
+				Region:  "us-gov-east-1",
+				Variant: fipsVariant,
+			}: endpoint{
+				Hostname: "meetings-chime-fips.us-gov-east-1.amazonaws.com",
+			},
+			endpointKey{
+				Region: "us-gov-east-1-fips",
+			}: endpoint{
+				Hostname: "meetings-chime-fips.us-gov-east-1.amazonaws.com",
+				CredentialScope: credentialScope{
+					Region: "us-gov-east-1",
+				},
+				Deprecated: boxedTrue,
+			},
+			endpointKey{
+				Region: "us-gov-west-1",
+			}: endpoint{},
+			endpointKey{
+				Region:  "us-gov-west-1",
+				Variant: fipsVariant,
+			}: endpoint{
+				Hostname: "meetings-chime-fips.us-gov-west-1.amazonaws.com",
+			},
+			endpointKey{
+				Region: "us-gov-west-1-fips",
+			}: endpoint{
+				Hostname: "meetings-chime-fips.us-gov-west-1.amazonaws.com",
+				CredentialScope: credentialScope{
+					Region: "us-gov-west-1",
+				},
+				Deprecated: boxedTrue,
+			},
+		},
+	},
 	"metering.marketplace": service{
 		Defaults: endpointDefaults{
 			defaultKey{}: endpoint{

2 vendor/github.com/aws/aws-sdk-go/aws/version.go generated vendored
@@ -5,4 +5,4 @@ package aws
 const SDKName = "aws-sdk-go"
 
 // SDKVersion is the version of this SDK
-const SDKVersion = "1.43.21"
+const SDKVersion = "1.43.26"

1 vendor/golang.org/x/sys/unix/mkerrors.sh generated vendored
@@ -603,6 +603,7 @@ ccflags="$@"
 		$2 ~ /^ITIMER_/ ||
 		$2 !~ "WMESGLEN" &&
 		$2 ~ /^W[A-Z0-9]+$/ ||
+		$2 ~ /^P_/ ||
 		$2 ~/^PPPIOC/ ||
 		$2 ~ /^FAN_|FANOTIFY_/ ||
 		$2 == "HID_MAX_DESCRIPTOR_SIZE" ||

3 vendor/golang.org/x/sys/unix/syscall_linux.go generated vendored
@@ -366,6 +366,8 @@ func Wait4(pid int, wstatus *WaitStatus, options int, rusage *Rusage) (wpid int,
 	return
 }
 
+//sys	Waitid(idType int, id int, info *Siginfo, options int, rusage *Rusage) (err error)
+
 func Mkfifo(path string, mode uint32) error {
 	return Mknod(path, mode|S_IFIFO, 0)
 }
@@ -2446,5 +2448,4 @@ func Setitimer(which ItimerWhich, it Itimerval) (Itimerval, error) {
 // Vfork
 // Vhangup
 // Vserver
-// Waitid
 // _Sysctl

124 vendor/golang.org/x/sys/unix/syscall_solaris.go generated vendored
@@ -737,8 +737,20 @@ type fileObjCookie struct {
 type EventPort struct {
 	port  int
 	mu    sync.Mutex
-	fds   map[uintptr]interface{}
+	fds   map[uintptr]*fileObjCookie
 	paths map[string]*fileObjCookie
+	// The user cookie presents an interesting challenge from a memory management perspective.
+	// There are two paths by which we can discover that it is no longer in use:
+	// 1. The user calls port_dissociate before any events fire
+	// 2. An event fires and we return it to the user
+	// The tricky situation is if the event has fired in the kernel but
+	// the user hasn't requested/received it yet.
+	// If the user wants to port_dissociate before the event has been processed,
+	// we should handle things gracefully. To do so, we need to keep an extra
+	// reference to the cookie around until the event is processed
+	// thus the otherwise seemingly extraneous "cookies" map
+	// The key of this map is a pointer to the corresponding &fCookie.cookie
+	cookies map[*interface{}]*fileObjCookie
 }
 
 // PortEvent is an abstraction of the port_event C struct.
@@ -762,9 +774,10 @@ func NewEventPort() (*EventPort, error) {
 		return nil, err
 	}
 	e := &EventPort{
 		port:  port,
-		fds:   make(map[uintptr]interface{}),
+		fds:   make(map[uintptr]*fileObjCookie),
 		paths: make(map[string]*fileObjCookie),
+		cookies: make(map[*interface{}]*fileObjCookie),
 	}
 	return e, nil
 }
@@ -779,9 +792,13 @@ func NewEventPort() (*EventPort, error) {
 func (e *EventPort) Close() error {
 	e.mu.Lock()
 	defer e.mu.Unlock()
+	err := Close(e.port)
+	if err != nil {
+		return err
+	}
 	e.fds = nil
 	e.paths = nil
-	return Close(e.port)
+	return nil
 }
 
 // PathIsWatched checks to see if path is associated with this EventPort.
@@ -818,6 +835,7 @@ func (e *EventPort) AssociatePath(path string, stat os.FileInfo, events int, coo
 		return err
 	}
 	e.paths[path] = fCookie
+	e.cookies[&fCookie.cookie] = fCookie
 	return nil
 }
@@ -830,11 +848,19 @@ func (e *EventPort) DissociatePath(path string) error {
 		return fmt.Errorf("%v is not associated with this Event Port", path)
 	}
 	_, err := port_dissociate(e.port, PORT_SOURCE_FILE, uintptr(unsafe.Pointer(f.fobj)))
-	if err != nil {
+	// If the path is no longer associated with this event port (ENOENT)
+	// we should delete it from our map. We can still return ENOENT to the caller.
+	// But we need to save the cookie
+	if err != nil && err != ENOENT {
 		return err
 	}
+	if err == nil {
+		// dissociate was successful, safe to delete the cookie
+		fCookie := e.paths[path]
+		delete(e.cookies, &fCookie.cookie)
+	}
 	delete(e.paths, path)
-	return nil
+	return err
 }
 
 // AssociateFd wraps calls to port_associate(3c) on file descriptors.
@@ -844,12 +870,13 @@ func (e *EventPort) AssociateFd(fd uintptr, events int, cookie interface{}) erro
 	if _, found := e.fds[fd]; found {
 		return fmt.Errorf("%v is already associated with this Event Port", fd)
 	}
-	pcookie := &cookie
-	_, err := port_associate(e.port, PORT_SOURCE_FD, fd, events, (*byte)(unsafe.Pointer(pcookie)))
+	fCookie := &fileObjCookie{nil, cookie}
+	_, err := port_associate(e.port, PORT_SOURCE_FD, fd, events, (*byte)(unsafe.Pointer(&fCookie.cookie)))
 	if err != nil {
 		return err
 	}
-	e.fds[fd] = pcookie
+	e.fds[fd] = fCookie
+	e.cookies[&fCookie.cookie] = fCookie
 	return nil
 }
@@ -862,11 +889,16 @@ func (e *EventPort) DissociateFd(fd uintptr) error {
 		return fmt.Errorf("%v is not associated with this Event Port", fd)
 	}
 	_, err := port_dissociate(e.port, PORT_SOURCE_FD, fd)
-	if err != nil {
+	if err != nil && err != ENOENT {
 		return err
 	}
+	if err == nil {
+		// dissociate was successful, safe to delete the cookie
+		fCookie := e.fds[fd]
+		delete(e.cookies, &fCookie.cookie)
+	}
 	delete(e.fds, fd)
-	return nil
+	return err
 }
 
 func createFileObj(name string, stat os.FileInfo) (*fileObj, error) {
@@ -894,26 +926,48 @@ func (e *EventPort) GetOne(t *Timespec) (*PortEvent, error) {
 		return nil, err
 	}
 	p := new(PortEvent)
-	p.Events = pe.Events
-	p.Source = pe.Source
 	e.mu.Lock()
 	defer e.mu.Unlock()
-	switch pe.Source {
-	case PORT_SOURCE_FD:
-		p.Fd = uintptr(pe.Object)
-		cookie := (*interface{})(unsafe.Pointer(pe.User))
-		p.Cookie = *cookie
-		delete(e.fds, p.Fd)
-	case PORT_SOURCE_FILE:
-		p.fobj = (*fileObj)(unsafe.Pointer(uintptr(pe.Object)))
-		p.Path = BytePtrToString((*byte)(unsafe.Pointer(p.fobj.Name)))
-		cookie := (*interface{})(unsafe.Pointer(pe.User))
-		p.Cookie = *cookie
-		delete(e.paths, p.Path)
-	}
+	e.peIntToExt(pe, p)
 	return p, nil
 }
 
+// peIntToExt converts a cgo portEvent struct into the friendlier PortEvent
+// NOTE: Always call this function while holding the e.mu mutex
+func (e *EventPort) peIntToExt(peInt *portEvent, peExt *PortEvent) {
+	peExt.Events = peInt.Events
+	peExt.Source = peInt.Source
+	cookie := (*interface{})(unsafe.Pointer(peInt.User))
+	peExt.Cookie = *cookie
+	switch peInt.Source {
+	case PORT_SOURCE_FD:
+		delete(e.cookies, cookie)
+		peExt.Fd = uintptr(peInt.Object)
+		// Only remove the fds entry if it exists and this cookie matches
+		if fobj, ok := e.fds[peExt.Fd]; ok {
+			if &fobj.cookie == cookie {
+				delete(e.fds, peExt.Fd)
+			}
+		}
+	case PORT_SOURCE_FILE:
+		if fCookie, ok := e.cookies[cookie]; ok && uintptr(unsafe.Pointer(fCookie.fobj)) == uintptr(peInt.Object) {
+			// Use our stashed reference rather than using unsafe on what we got back
+			// the unsafe version would be (*fileObj)(unsafe.Pointer(uintptr(peInt.Object)))
+			peExt.fobj = fCookie.fobj
+		} else {
+			panic("mismanaged memory")
+		}
+		delete(e.cookies, cookie)
+		peExt.Path = BytePtrToString((*byte)(unsafe.Pointer(peExt.fobj.Name)))
+		// Only remove the paths entry if it exists and this cookie matches
+		if fobj, ok := e.paths[peExt.Path]; ok {
+			if &fobj.cookie == cookie {
+				delete(e.paths, peExt.Path)
+			}
+		}
+	}
+}
+
 // Pending wraps port_getn(3c) and returns how many events are pending.
 func (e *EventPort) Pending() (int, error) {
 	var n uint32 = 0
@@ -944,21 +998,7 @@ func (e *EventPort) Get(s []PortEvent, min int, timeout *Timespec) (int, error)
 	e.mu.Lock()
 	defer e.mu.Unlock()
 	for i := 0; i < int(got); i++ {
-		s[i].Events = ps[i].Events
-		s[i].Source = ps[i].Source
-		switch ps[i].Source {
-		case PORT_SOURCE_FD:
-			s[i].Fd = uintptr(ps[i].Object)
-			cookie := (*interface{})(unsafe.Pointer(ps[i].User))
-			s[i].Cookie = *cookie
-			delete(e.fds, s[i].Fd)
-		case PORT_SOURCE_FILE:
-			s[i].fobj = (*fileObj)(unsafe.Pointer(uintptr(ps[i].Object)))
-			s[i].Path = BytePtrToString((*byte)(unsafe.Pointer(s[i].fobj.Name)))
-			cookie := (*interface{})(unsafe.Pointer(ps[i].User))
-			s[i].Cookie = *cookie
-			delete(e.paths, s[i].Path)
-		}
+		e.peIntToExt(&ps[i], &s[i])
 	}
 	return int(got), err
 }

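The cookie bookkeeping above is easiest to see from the caller's side. Below is a hypothetical Solaris-only sketch, not code from this diff: the path, the FILE_MODIFIED event mask, and the cookie value are all assumptions for illustration. The point it shows is that dissociating a path, even when the kernel may already have fired the event, no longer corrupts the maps; an ENOENT from port_dissociate is now passed through to the caller.

```go
//go:build solaris

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	ep, err := unix.NewEventPort()
	if err != nil {
		panic(err)
	}
	defer ep.Close()

	st, err := os.Stat("/tmp")
	if err != nil {
		panic(err)
	}
	// The cookie is any user value; the port keeps an internal reference to it
	// until the event is consumed or the association is torn down.
	if err := ep.AssociatePath("/tmp", st, unix.FILE_MODIFIED, "my-cookie"); err != nil {
		panic(err)
	}
	// Safe regardless of whether the event already fired in the kernel.
	fmt.Println(ep.DissociatePath("/tmp"))
}
```
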
4 vendor/golang.org/x/sys/unix/zerrors_linux.go generated vendored
@@ -2135,6 +2135,10 @@ const (
 	PTRACE_SYSCALL_INFO_NONE    = 0x0
 	PTRACE_SYSCALL_INFO_SECCOMP = 0x3
 	PTRACE_TRACEME              = 0x0
+	P_ALL                       = 0x0
+	P_PGID                      = 0x2
+	P_PID                       = 0x1
+	P_PIDFD                     = 0x3
 	QNX4_SUPER_MAGIC            = 0x2f
 	QNX6_SUPER_MAGIC            = 0x68191122
 	RAMFS_MAGIC                 = 0x858458f6

10 vendor/golang.org/x/sys/unix/zsyscall_linux.go generated vendored
@@ -231,6 +231,16 @@ func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err
 
 // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
 
+func Waitid(idType int, id int, info *Siginfo, options int, rusage *Rusage) (err error) {
+	_, _, e1 := Syscall6(SYS_WAITID, uintptr(idType), uintptr(id), uintptr(unsafe.Pointer(info)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0)
+	if e1 != 0 {
+		err = errnoErr(e1)
+	}
+	return
+}
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
 func KeyctlInt(cmd int, arg2 int, arg3 int, arg4 int, arg5 int) (ret int, err error) {
 	r0, _, e1 := Syscall6(SYS_KEYCTL, uintptr(cmd), uintptr(arg2), uintptr(arg3), uintptr(arg4), uintptr(arg5), 0)
 	ret = int(r0)

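The newly exposed Waitid wrapper pairs with the P_* constants added in zerrors_linux.go above. A minimal Linux-only sketch follows; spawning `true` via exec.Command is an assumption for illustration, while Waitid, Siginfo, P_PID and WEXITED come from the vendored package.

```go
//go:build linux

package main

import (
	"fmt"
	"os/exec"

	"golang.org/x/sys/unix"
)

func main() {
	// Start a short-lived child process to reap.
	cmd := exec.Command("true")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	var info unix.Siginfo
	// Reap the child by PID and wait for its exit (WEXITED).
	err := unix.Waitid(unix.P_PID, cmd.Process.Pid, &info, unix.WEXITED, nil)
	fmt.Println(err) // nil once the child has exited
}
```
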
19 vendor/google.golang.org/protobuf/encoding/protowire/wire.go generated vendored
@@ -21,10 +21,11 @@ import (
 type Number int32
 
 const (
 	MinValidNumber      Number = 1
 	FirstReservedNumber Number = 19000
 	LastReservedNumber  Number = 19999
 	MaxValidNumber      Number = 1<<29 - 1
+	DefaultRecursionLimit      = 10000
 )
 
 // IsValid reports whether the field number is semantically valid.
@@ -55,6 +56,7 @@ const (
 	errCodeOverflow
 	errCodeReserved
 	errCodeEndGroup
+	errCodeRecursionDepth
 )
 
 var (
@@ -112,6 +114,10 @@ func ConsumeField(b []byte) (Number, Type, int) {
 // When parsing a group, the length includes the end group marker and
 // the end group is verified to match the starting field number.
 func ConsumeFieldValue(num Number, typ Type, b []byte) (n int) {
+	return consumeFieldValueD(num, typ, b, DefaultRecursionLimit)
+}
+
+func consumeFieldValueD(num Number, typ Type, b []byte, depth int) (n int) {
 	switch typ {
 	case VarintType:
 		_, n = ConsumeVarint(b)
@@ -126,6 +132,9 @@ func ConsumeFieldValue(num Number, typ Type, b []byte) (n int) {
 		_, n = ConsumeBytes(b)
 		return n
 	case StartGroupType:
+		if depth < 0 {
+			return errCodeRecursionDepth
+		}
 		n0 := len(b)
 		for {
 			num2, typ2, n := ConsumeTag(b)
@@ -140,7 +149,7 @@ func ConsumeFieldValue(num Number, typ Type, b []byte) (n int) {
 				return n0 - len(b)
 			}
 
-			n = ConsumeFieldValue(num2, typ2, b)
+			n = consumeFieldValueD(num2, typ2, b, depth-1)
 			if n < 0 {
 				return n // forward error code
 			}

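This vendored change bounds group nesting while parsing wire data. A sketch of the failure mode it guards against follows; the depth of 20000 is an arbitrary illustration chosen to exceed the new DefaultRecursionLimit of 10000.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	// Build wire data consisting of deeply nested start-group tags for
	// field 1, with no matching end-group tags.
	var b []byte
	const depth = 20000
	for i := 0; i < depth; i++ {
		b = protowire.AppendTag(b, 1, protowire.StartGroupType)
	}
	// Skip the first one-byte tag and ask the parser to consume the group body.
	// With the recursion limit, parsing fails fast with a negative error code
	// instead of recursing once per nesting level until the input runs out.
	n := protowire.ConsumeFieldValue(1, protowire.StartGroupType, b[1:])
	fmt.Println(n < 0) // true
}
```
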
2 vendor/google.golang.org/protobuf/internal/encoding/text/decode.go generated vendored
@@ -381,7 +381,7 @@ func (d *Decoder) currentOpenKind() (Kind, byte) {
 	case '[':
 		return ListOpen, ']'
 	}
-	panic(fmt.Sprintf("Decoder: openStack contains invalid byte %s", string(openCh)))
+	panic(fmt.Sprintf("Decoder: openStack contains invalid byte %c", openCh))
 }
 
 func (d *Decoder) pushOpenStack(ch byte) {

1 vendor/google.golang.org/protobuf/internal/errors/is_go112.go generated vendored
@@ -2,6 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
+//go:build !go1.13
 // +build !go1.13
 
 package errors

1 vendor/google.golang.org/protobuf/internal/errors/is_go113.go generated vendored
@@ -2,6 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
+//go:build go1.13
 // +build go1.13
 
 package errors

1 vendor/google.golang.org/protobuf/internal/flags/proto_legacy_disable.go generated vendored
@@ -2,6 +2,7 @@
 // Use of this source code is governed by a BSD-style
 // license that can be found in the LICENSE file.
 
+//go:build !protolegacy
 // +build !protolegacy
 
 package flags

Some files were not shown because too many files have changed in this diff.