Kubernetes monitoring on the VictoriaMetrics stack. Includes the VictoriaMetrics Operator, Grafana dashboards, ServiceScrapes and VMRules.
- Overview
- Configuration
- Prerequisites
- Dependencies
- Quick Start
- Uninstall
- Version Upgrade
- Troubleshooting
- Values
Overview
This chart is an all-in-one solution for monitoring a Kubernetes cluster. It installs multiple dependency charts such as grafana, node-exporter, kube-state-metrics and victoria-metrics-operator, and also creates Custom Resources such as VMSingle, VMCluster, VMAgent and VMAlert.
By default, the operator converts all existing prometheus-operator API objects into corresponding VictoriaMetrics Operator objects.
To enable metrics collection for Kubernetes, this chart installs multiple scrape configurations for Kubernetes components such as kubelet and kube-proxy. Metrics collection is done by VMAgent, so if you want to ship metrics to an external VictoriaMetrics database, you can disable the VMSingle installation by setting vmsingle.enabled to false and pointing vmagent.spec.remoteWrite.url at your external VictoriaMetrics database.
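As a minimal sketch, the external-database setup described above might look like this in values.yaml (the remote-write URL is a placeholder, replace it with your own endpoint):

```yaml
# Disable the local single-node VictoriaMetrics instance
vmsingle:
  enabled: false
# Point VMAgent at an external VictoriaMetrics database
# (placeholder URL - replace with your own endpoint)
vmagent:
  spec:
    remoteWrite:
      - url: "https://vm.example.com/api/v1/write"
```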
This chart also installs a set of dashboards and recording rules from the kube-prometheus project.
Configuration
Configuration of this chart is done through helm values.
Dependencies
Dependencies can be enabled or disabled by setting enabled to true or false in the values.yaml file.
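For example, a sketch that disables two of the bundled dependency charts (the keys are taken from this chart's values):

```yaml
# Disable bundled exporters, e.g. when they are
# already installed in the cluster by another release
prometheus-node-exporter:
  enabled: false
kube-state-metrics:
  enabled: false
```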
Important: for dependency charts, anything you can find in the dependency chart's values.yaml can be configured in this chart under the key for that dependency. For example, if you want to configure grafana, you can find all possible configuration options in grafana's values.yaml and set them under the grafana: key in this chart's values. For example, to configure grafana.persistence.enabled you would set it in values.yaml like this:
#################################################
### dependencies #####
#################################################
# Grafana dependency chart configuration. For possible values refer to https://github.com/grafana/helm-charts/tree/main/charts/grafana#configuration
grafana:
enabled: true
persistence:
type: pvc
enabled: false
VictoriaMetrics components
This chart installs multiple VictoriaMetrics components using Custom Resources that are managed by victoria-metrics-operator. Each resource can be configured via the spec field of that resource, described in the victoria-metrics-operator API docs. For example, if you want to configure VMAgent, you can find all possible configuration options in the API docs and set them under the vmagent.spec key in this chart's values. For example, to configure remoteWrite.url you would set it in values.yaml like this:
vmagent:
spec:
remoteWrite:
- url: "https://insert.vmcluster.domain.com/insert/0/prometheus/api/v1/write"
ArgoCD issues
Operator self-signed certificates
When deploying the K8s stack using ArgoCD without Cert Manager (.Values.victoria-metrics-operator.admissionWebhooks.certManager.enabled: false), ArgoCD will re-render the operator's webhook certificates on each sync, since the Helm lookup function is not respected by ArgoCD. To prevent this, update your K8s stack Application spec.syncPolicy and spec.ignoreDifferences with the following:
apiVersion: argoproj.io/v1alpha1
kind: Application
...
spec:
...
syncPolicy:
syncOptions:
# https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#respect-ignore-difference-configs
# argocd must also ignore difference during apply stage
# otherwise it ll silently override changes and cause a problem
- RespectIgnoreDifferences=true
ignoreDifferences:
- group: ""
kind: Secret
name: <fullname>-validation
namespace: kube-system
jsonPointers:
- /data
- group: admissionregistration.k8s.io
kind: ValidatingWebhookConfiguration
name: <fullname>-admission
jqPathExpressions:
- '.webhooks[]?.clientConfig.caBundle'
where <fullname> is the output of {{ include "vm-operator.fullname" }} for your setup.
metadata.annotations: Too long: must have at most 262144 bytes on dashboards
If one of the dashboard ConfigMaps fails with the error Too long: must have at most 262144 bytes, please make sure you've added the argocd.argoproj.io/sync-options: ServerSideApply=true annotation to your dashboards:
grafana:
  sidecar:
    dashboards:
      additionalDashboardAnnotations:
        argocd.argoproj.io/sync-options: ServerSideApply=true
Rules and dashboards
This chart installs multiple dashboards and recording rules from kube-prometheus by default. You can disable the dashboards with defaultDashboardsEnabled: false and experimentalDashboardsEnabled: false, and rules can be configured under defaultRules.
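As a sketch, disabling the bundled dashboards and turning off a single default rules group (the etcd key appears in the values table below) might look like this:

```yaml
defaultDashboardsEnabled: false
experimentalDashboardsEnabled: false
defaultRules:
  groups:
    etcd:
      create: false
```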
Prometheus scrape configs
This chart installs multiple scrape configurations for Kubernetes monitoring. They are configured under the ServiceMonitors section in the values.yaml file. For example, if you want to configure the scrape config for kubelet you would set it in values.yaml like this:
kubelet:
enabled: true
# spec for VMNodeScrape crd
# https://docs.victoriametrics.com/operator/api#vmnodescrapespec
spec:
interval: "30s"
Using externally managed Grafana
If you want to use an externally managed Grafana instance but still use the dashboards provided by this chart, set grafana.enabled to false and defaultDashboardsEnabled to true. This will install the dashboards but will not install Grafana.
For example:
defaultDashboardsEnabled: true
grafana:
enabled: false
This will create ConfigMaps with dashboards to be imported into Grafana.
If additional labels or annotations are needed in order to import the dashboards into an existing Grafana, you can set grafana.sidecar.dashboards.additionalDashboardLabels or grafana.sidecar.dashboards.additionalDashboardAnnotations in values.yaml:
For example:
defaultDashboardsEnabled: true
grafana:
enabled: false
sidecar:
dashboards:
additionalDashboardLabels:
key: value
additionalDashboardAnnotations:
key: value
Prerequisites
- Install the following packages: git, kubectl, helm, helm-docs. See this tutorial.
- Add the dependency chart repositories:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
- PV support on underlying infrastructure.
How to install
Access a Kubernetes cluster.
Setup chart repository (can be omitted for OCI repositories)
Add the chart helm repository with the following commands:
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo update
List the versions of the vm/victoria-metrics-k8s-stack chart available for installation:
helm search repo vm/victoria-metrics-k8s-stack -l
Install the victoria-metrics-k8s-stack chart
Export the default values of the victoria-metrics-k8s-stack chart to the file values.yaml:
- For HTTPS repository:
helm show values vm/victoria-metrics-k8s-stack > values.yaml
- For OCI repository:
helm show values oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-k8s-stack > values.yaml
Change the values in the values.yaml file according to the needs of your environment.
Test the installation with this command:
- For HTTPS repository:
helm install vmks vm/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE --debug --dry-run
- For OCI repository:
helm install vmks oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE --debug --dry-run
Install the chart with this command:
- For HTTPS repository:
helm install vmks vm/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE
- For OCI repository:
helm install vmks oci://ghcr.io/victoriametrics/helm-charts/victoria-metrics-k8s-stack -f values.yaml -n NAMESPACE
Get the pods list by running this command:
kubectl get pods -A | grep 'vmks'
Get the application by running this command:
helm list -f vmks -n NAMESPACE
See the version history of the vmks application with this command:
helm history vmks -n NAMESPACE
Install locally (Minikube)
To run VictoriaMetrics stack locally it's possible to use Minikube. To avoid dashboards and alert rules issues please follow the steps below:
Run Minikube cluster
minikube start --container-runtime=containerd --extra-config=scheduler.bind-address=0.0.0.0 --extra-config=controller-manager.bind-address=0.0.0.0
Install helm chart
helm install [RELEASE_NAME] vm/victoria-metrics-k8s-stack -f values.yaml -f values.minikube.yaml -n NAMESPACE --debug --dry-run
How to uninstall
Remove the application with this command:
helm uninstall vmks -n NAMESPACE
CRDs created by this chart are not removed by default and should be manually cleaned up:
kubectl get crd | grep victoriametrics.com | awk '{print $1 }' | xargs -i kubectl delete crd {}
Troubleshooting
- If you cannot install the helm chart with the error configmap already exist: this can happen because of name collisions when the release name is too long. Kubernetes by default allows only 63 characters in resource names, and all resource names are trimmed by helm to 63 characters. To mitigate this, use a shorter name for the helm chart release, like:
# stack - is short enough
helm upgrade -i stack vm/victoria-metrics-k8s-stack
Or use an override for the helm chart release name:
helm upgrade -i some-very-long-name vm/victoria-metrics-k8s-stack --set fullnameOverride=stack
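The same override can also be set persistently in values.yaml instead of on the command line:

```yaml
# Keep generated resource names short to stay under the 63-character limit
fullnameOverride: stack
```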
Upgrade guide
Usually, helm upgrade doesn't require manual actions. Just execute the command:
$ helm upgrade [RELEASE_NAME] vm/victoria-metrics-k8s-stack
But a release with a CRD update can only be patched manually with kubectl. Since helm does not perform CRD updates, we recommend that you always do this when updating the helm-charts version:
# 1. check the changes in CRD
$ helm show crds vm/victoria-metrics-k8s-stack --version [YOUR_CHART_VERSION] | kubectl diff -f -
# 2. apply the changes (update CRD)
$ helm show crds vm/victoria-metrics-k8s-stack --version [YOUR_CHART_VERSION] | kubectl apply -f - --server-side
All other upgrades requiring manual actions are listed below:
Upgrade to 0.13.0
- node-exporter starting from version 4.0.0 uses the Kubernetes recommended labels. Therefore you have to delete the daemonset before you upgrade.
kubectl delete daemonset -l app=prometheus-node-exporter
- Scrape configuration for Kubernetes components was moved from the vmServiceScrape.spec section to the spec section. If you previously modified the scrape configuration, you need to update your values.yaml.
- grafana.defaultDashboardsEnabled was renamed to defaultDashboardsEnabled (moved to the top level). You may need to update it in your values.yaml.
Upgrade to 0.6.0
All CRDs must be updated to the latest version with this command:
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/helm-charts/master/charts/victoria-metrics-k8s-stack/crds/crd.yaml
Upgrade to 0.4.0
All CRDs must be updated to the v1 version with this command:
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/helm-charts/master/charts/victoria-metrics-k8s-stack/crds/crd.yaml
Upgrade from 0.2.8 to 0.2.9
Update the VMAgent CRD with this command:
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.16.0/config/crd/bases/operator.victoriametrics.com_vmagents.yaml
Upgrade from 0.2.5 to 0.2.6
New CRDs, VMUser and VMAuth, were added to the operator, and new fields were added to existing CRDs.
Manual commands:
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmusers.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmauths.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmalerts.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmagents.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmsingles.yaml
kubectl apply -f https://raw.githubusercontent.com/VictoriaMetrics/operator/v0.15.0/config/crd/bases/operator.victoriametrics.com_vmclusters.yaml
Documentation of Helm Chart
Install helm-docs
following the instructions in this tutorial.
Generate the docs with the helm-docs command:
cd charts/victoria-metrics-k8s-stack
helm-docs
The markdown generation is entirely go template driven. The tool parses metadata from charts and generates a number of sub-templates that can be referenced in a template file (by default README.md.gotmpl
). If no template file is provided, the tool has a default internal template that will generate a reasonably formatted README.
Parameters
The following table lists the configurable parameters of the chart and their default values.
Change the values in the victoria-metrics-k8s-stack/values.yaml file according to the needs of your environment.
Key | Type | Default | Description |
---|---|---|---|
additionalVictoriaMetricsMap | string | null |
|
alertmanager.annotations | object | {}
|
|
alertmanager.config | object | receivers:
- name: blackhole
route:
receiver: blackhole
templates:
- /etc/vm/configs/**/*.tmpl
|
alertmanager configuration |
alertmanager.enabled | bool | true |
|
alertmanager.ingress | object | annotations: {}
enabled: false
extraPaths: []
hosts:
- alertmanager.domain.com
labels: {}
path: '{{ .Values.alertmanager.spec.routePrefix | default "/" }}'
pathType: Prefix
tls: []
|
alertmanager ingress configuration |
alertmanager.monzoTemplate.enabled | bool | true |
|
alertmanager.spec | object | configSecret: ""
externalURL: ""
image:
tag: v0.25.0
port: "9093"
routePrefix: /
selectAllByDefault: true
|
full spec for VMAlertmanager CRD. Allowed values described here |
alertmanager.spec.configSecret | string | "" |
if defined, it will be used for the alertmanager configuration and the config parameter will be ignored |
alertmanager.templateFiles | object | {}
|
extra alert templates |
argocdReleaseOverride | string | "" |
To work correctly you need to set the value ‘argocdReleaseOverride=$ARGOCD_APP_NAME’ |
coreDns.enabled | bool | true |
|
coreDns.service.enabled | bool | true |
|
coreDns.service.port | int | 9153 |
|
coreDns.service.selector.k8s-app | string | kube-dns |
|
coreDns.service.targetPort | int | 9153 |
|
coreDns.vmScrape | object | spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics
jobLabel: jobLabel
namespaceSelector:
matchNames:
- kube-system
|
spec for VMServiceScrape crd https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec |
crds.enabled | bool | true |
|
dashboards | object | node-exporter-full: true
operator: false
vmalert: false
|
Enable dashboards even if their dependency is not installed |
dashboards.node-exporter-full | bool | true |
In ArgoCD with client-side apply this dashboard exceeds the annotation size limit and causes k8s issues unless server-side apply is used. See this issue |
defaultDashboardsEnabled | bool | true |
Create default dashboards |
defaultRules | object | alerting:
spec:
annotations: {}
labels: {}
annotations: {}
create: true
group:
spec:
params: {}
groups:
alertmanager:
create: true
rules: {}
etcd:
create: true
rules: {}
general:
create: true
rules: {}
k8sContainerCpuUsageSecondsTotal:
create: true
rules: {}
k8sContainerMemoryCache:
create: true
rules: {}
k8sContainerMemoryRss:
create: true
rules: {}
k8sContainerMemorySwap:
create: true
rules: {}
k8sContainerMemoryWorkingSetBytes:
create: true
rules: {}
k8sContainerResource:
create: true
rules: {}
k8sPodOwner:
create: true
rules: {}
kubeApiserver:
create: true
rules: {}
kubeApiserverAvailability:
create: true
rules: {}
kubeApiserverBurnrate:
create: true
rules: {}
kubeApiserverHistogram:
create: true
rules: {}
kubeApiserverSlos:
create: true
rules: {}
kubePrometheusGeneral:
create: true
rules: {}
kubePrometheusNodeRecording:
create: true
rules: {}
kubeScheduler:
create: true
rules: {}
kubeStateMetrics:
create: true
rules: {}
kubelet:
create: true
rules: {}
kubernetesApps:
create: true
rules: {}
targetNamespace: .*
kubernetesResources:
create: true
rules: {}
kubernetesStorage:
create: true
rules: {}
targetNamespace: .*
kubernetesSystem:
create: true
rules: {}
kubernetesSystemApiserver:
create: true
rules: {}
kubernetesSystemControllerManager:
create: true
rules: {}
kubernetesSystemKubelet:
create: true
rules: {}
kubernetesSystemScheduler:
create: true
rules: {}
node:
create: true
rules: {}
nodeNetwork:
create: true
rules: {}
vmHealth:
create: true
rules: {}
vmagent:
create: true
rules: {}
vmcluster:
create: true
rules: {}
vmoperator:
create: true
rules: {}
vmsingle:
create: true
rules: {}
labels: {}
recording:
spec:
annotations: {}
labels: {}
rule:
spec:
annotations: {}
labels: {}
rules: {}
runbookUrl: https://runbooks.prometheus-operator.dev/runbooks
|
Create default rules for monitoring the cluster |
defaultRules.alerting | object | spec:
annotations: {}
labels: {}
|
Common properties for VMRules alerts |
defaultRules.alerting.spec.annotations | object | {}
|
Additional annotations for VMRule alerts |
defaultRules.alerting.spec.labels | object | {}
|
Additional labels for VMRule alerts |
defaultRules.annotations | object | {}
|
Annotations for default rules |
defaultRules.group | object | spec:
params: {}
|
Common properties for VMRule groups |
defaultRules.group.spec.params | object | {}
|
Optional HTTP URL parameters added to each rule request |
defaultRules.groups.etcd.rules | object | {}
|
Common properties for all rules in a group |
defaultRules.labels | object | {}
|
Labels for default rules |
defaultRules.recording | object | spec:
annotations: {}
labels: {}
|
Common properties for VMRules recording rules |
defaultRules.recording.spec.annotations | object | {}
|
Additional annotations for VMRule recording rules |
defaultRules.recording.spec.labels | object | {}
|
Additional labels for VMRule recording rules |
defaultRules.rule | object | spec:
annotations: {}
labels: {}
|
Common properties for all VMRules |
defaultRules.rule.spec.annotations | object | {}
|
Additional annotations for all VMRules |
defaultRules.rule.spec.labels | object | {}
|
Additional labels for all VMRules |
defaultRules.rules | object | {}
|
Per rule properties |
defaultRules.runbookUrl | string | https://runbooks.prometheus-operator.dev/runbooks |
Runbook url prefix for default rules |
experimentalDashboardsEnabled | bool | true |
Create experimental dashboards |
externalVM.read.url | string | "" |
|
externalVM.write.url | string | "" |
|
extraObjects | list | []
|
Add extra objects dynamically to this chart |
fullnameOverride | string | "" |
|
global.clusterLabel | string | cluster |
|
global.license.key | string | "" |
|
global.license.keyRef | object | {}
|
|
grafana.additionalDataSources | list | []
|
|
grafana.defaultDashboardsTimezone | string | utc |
|
grafana.defaultDatasourceType | string | prometheus |
|
grafana.enabled | bool | true |
|
grafana.forceDeployDatasource | bool | false |
|
grafana.ingress.annotations | object | {}
|
|
grafana.ingress.enabled | bool | false |
|
grafana.ingress.extraPaths | list | []
|
|
grafana.ingress.hosts[0] | string | grafana.domain.com |
|
grafana.ingress.labels | object | {}
|
|
grafana.ingress.path | string | / |
|
grafana.ingress.pathType | string | Prefix |
|
grafana.ingress.tls | list | []
|
|
grafana.sidecar.dashboards.additionalDashboardAnnotations | object | {}
|
|
grafana.sidecar.dashboards.additionalDashboardLabels | object | {}
|
|
grafana.sidecar.dashboards.defaultFolderName | string | default |
|
grafana.sidecar.dashboards.enabled | bool | true |
|
grafana.sidecar.dashboards.folder | string | /var/lib/grafana/dashboards |
|
grafana.sidecar.dashboards.multicluster | bool | false |
|
grafana.sidecar.dashboards.provider.name | string | default |
|
grafana.sidecar.dashboards.provider.orgid | int | 1 |
|
grafana.sidecar.datasources.createVMReplicasDatasources | bool | false |
|
grafana.sidecar.datasources.default | list | - isDefault: true
name: VictoriaMetrics
- isDefault: false
name: VictoriaMetrics (DS)
type: victoriametrics-datasource
|
list of default prometheus compatible datasource configurations. VM |
grafana.sidecar.datasources.enabled | bool | true |
|
grafana.sidecar.datasources.initDatasources | bool | true |
|
grafana.vmScrape | object | enabled: true
spec:
endpoints:
- port: '{{ .Values.grafana.service.portName }}'
selector:
matchLabels:
app.kubernetes.io/name: '{{ include "grafana.name" .Subcharts.grafana }}'
|
grafana VM scrape config |
grafana.vmScrape.spec | object | endpoints:
- port: '{{ .Values.grafana.service.portName }}'
selector:
matchLabels:
app.kubernetes.io/name: '{{ include "grafana.name" .Subcharts.grafana }}'
|
Scrape configuration for Grafana |
grafanaOperatorDashboardsFormat | object | allowCrossNamespaceImport: false
enabled: false
instanceSelector:
matchLabels:
dashboards: grafana
|
Create dashboards as CRDs (requires grafana-operator to be installed) |
kube-state-metrics.enabled | bool | true |
|
kube-state-metrics.vmScrape | object | enabled: true
spec:
endpoints:
- honorLabels: true
metricRelabelConfigs:
- action: labeldrop
regex: (uid|container_id|image_id)
port: http
jobLabel: app.kubernetes.io/name
selector:
matchLabels:
app.kubernetes.io/instance: '{{ include "vm.release" . }}'
app.kubernetes.io/name: '{{ include "kube-state-metrics.name" (index .Subcharts "kube-state-metrics") }}'
|
Scrape configuration for Kube State Metrics |
kubeApiServer.enabled | bool | true |
|
kubeApiServer.vmScrape | object | spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: https
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
serverName: kubernetes
jobLabel: component
namespaceSelector:
matchNames:
- default
selector:
matchLabels:
component: apiserver
provider: kubernetes
|
spec for VMServiceScrape crd https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec |
kubeControllerManager.enabled | bool | true |
|
kubeControllerManager.endpoints | list | []
|
|
kubeControllerManager.service.enabled | bool | true |
|
kubeControllerManager.service.port | int | 10257 |
|
kubeControllerManager.service.selector.component | string | kube-controller-manager |
|
kubeControllerManager.service.targetPort | int | 10257 |
|
kubeControllerManager.vmScrape | object | spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
serverName: kubernetes
jobLabel: jobLabel
namespaceSelector:
matchNames:
- kube-system
|
spec for VMServiceScrape crd https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec |
kubeDns.enabled | bool | false |
|
kubeDns.service.enabled | bool | false |
|
kubeDns.service.ports.dnsmasq.port | int | 10054 |
|
kubeDns.service.ports.dnsmasq.targetPort | int | 10054 |
|
kubeDns.service.ports.skydns.port | int | 10055 |
|
kubeDns.service.ports.skydns.targetPort | int | 10055 |
|
kubeDns.service.selector.k8s-app | string | kube-dns |
|
kubeDns.vmScrape | object | spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics-dnsmasq
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics-skydns
jobLabel: jobLabel
namespaceSelector:
matchNames:
- kube-system
|
spec for VMServiceScrape crd https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec |
kubeEtcd.enabled | bool | true |
|
kubeEtcd.endpoints | list | []
|
|
kubeEtcd.service.enabled | bool | true |
|
kubeEtcd.service.port | int | 2379 |
|
kubeEtcd.service.selector.component | string | etcd |
|
kubeEtcd.service.targetPort | int | 2379 |
|
kubeEtcd.vmScrape | object | spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
jobLabel: jobLabel
namespaceSelector:
matchNames:
- kube-system
|
spec for VMServiceScrape crd https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec |
kubeProxy.enabled | bool | false |
|
kubeProxy.endpoints | list | []
|
|
kubeProxy.service.enabled | bool | true |
|
kubeProxy.service.port | int | 10249 |
|
kubeProxy.service.selector.k8s-app | string | kube-proxy |
|
kubeProxy.service.targetPort | int | 10249 |
|
kubeProxy.vmScrape | object | spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
jobLabel: jobLabel
namespaceSelector:
matchNames:
- kube-system
|
spec for VMServiceScrape crd https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec |
kubeScheduler.enabled | bool | true |
|
kubeScheduler.endpoints | list | []
|
|
kubeScheduler.service.enabled | bool | true |
|
kubeScheduler.service.port | int | 10259 |
|
kubeScheduler.service.selector.component | string | kube-scheduler |
|
kubeScheduler.service.targetPort | int | 10259 |
|
kubeScheduler.vmScrape | object | spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
jobLabel: jobLabel
namespaceSelector:
matchNames:
- kube-system
|
spec for VMServiceScrape crd https://docs.victoriametrics.com/operator/api.html#vmservicescrapespec |
kubelet.enabled | bool | true |
|
kubelet.vmScrape | object | kind: VMNodeScrape
spec:
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
honorTimestamps: false
interval: 30s
metricRelabelConfigs:
- action: labeldrop
regex: (uid)
- action: labeldrop
regex: (id|name)
- action: drop
regex: (rest_client_request_duration_seconds_bucket|rest_client_request_duration_seconds_sum|rest_client_request_duration_seconds_count)
source_labels:
- __name__
relabelConfigs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- sourceLabels:
- __metrics_path__
targetLabel: metrics_path
- replacement: kubelet
targetLabel: job
scheme: https
scrapeTimeout: 5s
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
|
spec for VMNodeScrape crd https://docs.victoriametrics.com/operator/api.html#vmnodescrapespec |
kubelet.vmScrapes.cadvisor | object | enabled: true
spec:
path: /metrics/cadvisor
|
Enable scraping /metrics/cadvisor from kubelet’s service |
kubelet.vmScrapes.kubelet.spec | object | {}
|
|
kubelet.vmScrapes.probes | object | enabled: true
spec:
path: /metrics/probes
|
Enable scraping /metrics/probes from kubelet’s service |
nameOverride | string | "" |
|
prometheus-node-exporter.enabled | bool | true |
|
prometheus-node-exporter.extraArgs[0] | string | --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/) |
|
prometheus-node-exporter.extraArgs[1] | string | --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ |
|
prometheus-node-exporter.service.labels.jobLabel | string | node-exporter |
|
prometheus-node-exporter.vmScrape | object | enabled: true
spec:
endpoints:
- metricRelabelConfigs:
- action: drop
regex: /var/lib/kubelet/pods.+
source_labels:
- mountpoint
port: metrics
jobLabel: jobLabel
selector:
matchLabels:
app.kubernetes.io/name: '{{ include "prometheus-node-exporter.name" (index .Subcharts "prometheus-node-exporter") }}'
|
node exporter VM scrape config |
prometheus-node-exporter.vmScrape.spec | object | endpoints:
- metricRelabelConfigs:
- action: drop
regex: /var/lib/kubelet/pods.+
source_labels:
- mountpoint
port: metrics
jobLabel: jobLabel
selector:
matchLabels:
app.kubernetes.io/name: '{{ include "prometheus-node-exporter.name" (index .Subcharts "prometheus-node-exporter") }}'
|
Scrape configuration for Node Exporter |
prometheus-operator-crds.enabled | bool | false |
|
serviceAccount.annotations | object | {}
|
Annotations to add to the service account |
serviceAccount.create | bool | true |
Specifies whether a service account should be created |
serviceAccount.name | string | "" |
If not set and create is true, a name is generated using the fullname template |
tenant | string | "0" |
|
victoria-metrics-operator | object | crd:
cleanup:
enabled: true
image:
pullPolicy: IfNotPresent
repository: bitnami/kubectl
create: false
enabled: true
operator:
disable_prometheus_converter: false
serviceMonitor:
enabled: true
|
also checkout here possible ENV variables to configure operator behaviour https://docs.victoriametrics.com/operator/vars |
victoria-metrics-operator.crd.cleanup | object | enabled: true
image:
pullPolicy: IfNotPresent
repository: bitnami/kubectl
|
tells helm to clean up vm cr resources when uninstalling |
victoria-metrics-operator.crd.create | bool | false |
we disable crd creation by operator chart as we create them in this chart |
victoria-metrics-operator.operator.disable_prometheus_converter | bool | false |
By default, operator converts prometheus-operator objects. |
vmagent.additionalRemoteWrites | list | []
|
remoteWrite configuration of VMAgent, allowed parameters defined in a spec |
vmagent.annotations | object | {}
|
|
vmagent.enabled | bool | true |
|
vmagent.ingress | object | annotations: {}
enabled: false
extraPaths: []
hosts:
- vmagent.domain.com
labels: {}
path: ""
pathType: Prefix
tls: []
|
vmagent ingress configuration |
vmagent.ingress.extraPaths | list | []
|
Extra paths to prepend to every host configuration. This is useful when working with annotation based services. |
vmagent.spec | object | externalLabels: {}
extraArgs:
promscrape.dropOriginalLabels: "true"
promscrape.streamParse: "true"
image:
tag: v1.103.0
port: "8429"
scrapeInterval: 20s
selectAllByDefault: true
|
full spec for VMAgent CRD. Allowed values described here |
vmalert.additionalNotifierConfigs | object | {}
|
|
vmalert.annotations | object | {}
|
|
vmalert.enabled | bool | true |
|
vmalert.ingress | object | annotations: {}
enabled: false
extraPaths: []
hosts:
- vmalert.domain.com
labels: {}
path: ""
pathType: Prefix
tls: []
|
vmalert ingress config |
vmalert.remoteWriteVMAgent | bool | false |
|
vmalert.spec | object | evaluationInterval: 15s
externalLabels: {}
extraArgs:
http.pathPrefix: /
image:
tag: v1.103.0
port: "8080"
selectAllByDefault: true
|
full spec for VMAlert CRD. Allowed values described here |
vmalert.templateFiles | object | {}
|
extra vmalert annotation templates |
vmauth.annotations | object | {}
|
|
vmauth.enabled | bool | false |
|
vmauth.spec | object | discover_backend_ips: true
port: "8427"
|
full spec for VMAuth CRD. Allowed values described here |
vmcluster.annotations | object | {}
|
|
vmcluster.enabled | bool | false |
|
vmcluster.ingress.insert.annotations | object | {}
|
|
vmcluster.ingress.insert.enabled | bool | false |
|
vmcluster.ingress.insert.extraPaths | list | []
|
|
vmcluster.ingress.insert.hosts[0] | string | vminsert.domain.com |
|
vmcluster.ingress.insert.labels | object | {}
|
|
vmcluster.ingress.insert.path | string | '{{ dig "extraArgs" "http.pathPrefix" "/" .Values.vmcluster.spec.vminsert }}' |
|
vmcluster.ingress.insert.pathType | string | Prefix |
|
vmcluster.ingress.insert.tls | list | []
|
|
vmcluster.ingress.select.annotations | object | {}
|
|
vmcluster.ingress.select.enabled | bool | false |
|
vmcluster.ingress.select.extraPaths | list | []
|
|
vmcluster.ingress.select.hosts[0] | string | vmselect.domain.com |
|
vmcluster.ingress.select.labels | object | {}
|
|
vmcluster.ingress.select.path | string | '{{ dig "extraArgs" "http.pathPrefix" "/" .Values.vmcluster.spec.vmselect }}' |
|
vmcluster.ingress.select.pathType | string | Prefix |
|
vmcluster.ingress.select.tls | list | []
|
|
vmcluster.ingress.storage.annotations | object | {}
|
|
vmcluster.ingress.storage.enabled | bool | false |
|
vmcluster.ingress.storage.extraPaths | list | []
|
|
vmcluster.ingress.storage.hosts[0] | string | vmstorage.domain.com |
|
vmcluster.ingress.storage.labels | object | {}
|
|
vmcluster.ingress.storage.path | string | "" |
|
vmcluster.ingress.storage.pathType | string | Prefix |
|
vmcluster.ingress.storage.tls | list | []
|
|
vmcluster.spec | object | replicationFactor: 2
retentionPeriod: "1"
vminsert:
extraArgs: {}
image:
tag: v1.103.0-cluster
port: "8480"
replicaCount: 2
resources: {}
vmselect:
cacheMountPath: /select-cache
extraArgs: {}
image:
tag: v1.103.0-cluster
port: "8481"
replicaCount: 2
resources: {}
storage:
volumeClaimTemplate:
spec:
resources:
requests:
storage: 2Gi
vmstorage:
image:
tag: v1.103.0-cluster
replicaCount: 2
resources: {}
storage:
volumeClaimTemplate:
spec:
resources:
requests:
storage: 10Gi
storageDataPath: /vm-data
|
full spec for VMCluster CRD. Allowed values described here |
vmcluster.spec.retentionPeriod | string | "1" |
Data retention period. Possible units character: h(ours), d(ays), w(eeks), y(ears), if no unit character specified - month. The minimum retention period is 24h. See these docs |
vmsingle.annotations | object | {}
|
|
vmsingle.enabled | bool | true |
|
vmsingle.ingress.annotations | object | {}
|
|
vmsingle.ingress.enabled | bool | false |
|
vmsingle.ingress.extraPaths | list | []
|
|
vmsingle.ingress.hosts[0] | string | vmsingle.domain.com |
|
vmsingle.ingress.labels | object | {}
|
|
vmsingle.ingress.path | string | "" |
|
vmsingle.ingress.pathType | string | Prefix |
|
vmsingle.ingress.tls | list | []
|
|
vmsingle.spec | object | extraArgs: {}
image:
tag: v1.103.0
port: "8429"
replicaCount: 1
retentionPeriod: "1"
storage:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
|
full spec for VMSingle CRD. Allowed values described here |
vmsingle.spec.retentionPeriod | string | "1" |
Data retention period. Possible units character: h(ours), d(ays), w(eeks), y(ears), if no unit character specified - month. The minimum retention period is 24h. See these docs |