Version: 0.8.1

Victoria Logs single-node version - high-performance, cost-effective and scalable logs storage

Prerequisites

  • Install the following packages: git, kubectl, helm, helm-docs. See this tutorial.

  • PV support on underlying infrastructure.

Chart Details

This chart will do the following:

  • Roll out Victoria Logs Single.
  • (optional) Roll out vector to collect logs from pods.

The chart allows configuring collection of logs from Kubernetes pods into VictoriaLogs. To do that, enable vector:

vector:
  enabled: true

By default, vector will forward logs to the VictoriaLogs installation deployed by this chart.
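If logs should go to another VictoriaLogs instance instead, the endpoint of the vlogs sink can be overridden. A minimal sketch, assuming standard Helm value merging against the vector.customConfig defaults shown in the Parameters section below; the hostname and port are placeholders, so verify the rendered config with helm template:

vector:
  enabled: true
  customConfig:
    sinks:
      vlogs:
        # placeholder URL of an external VictoriaLogs instance
        # (Elasticsearch-compatible insert endpoint)
        endpoints:
          - http://my-victorialogs:9428/insert/elasticsearch/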

How to install

Access a Kubernetes cluster.

Set up the chart repository (can be omitted for OCI repositories)

Add the chart helm repository with the following commands:

helm repo add vm https://victoriametrics.github.io/helm-charts/

helm repo update

List versions of the vm/victoria-logs-single chart available for installation:

helm search repo vm/victoria-logs-single -l
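To pin a particular chart version from that list, the standard helm --version flag can be added to the show/install commands below, for example:

helm show values vm/victoria-logs-single --version 0.8.1 > values.yaml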

Install victoria-logs-single chart

Export the default values of the victoria-logs-single chart to a values.yaml file:

  • For HTTPS repository

    helm show values vm/victoria-logs-single > values.yaml
    
  • For OCI repository

    helm show values oci://ghcr.io/victoriametrics/helm-charts/victoria-logs-single > values.yaml
    

Change the values in the values.yaml file according to the needs of your environment.
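As an illustration, a minimal values.yaml override might enable persistence, extend retention and set resource requests. The keys come from the Parameters table below, while the sizes and limits are placeholder numbers to adjust for your workload:

server:
  retentionPeriod: 3              # keep logs for 3 months instead of the default 1
  persistentVolume:
    enabled: true                 # store data on a PVC instead of an emptyDir
    size: 10Gi
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: "1"
      memory: 1Gi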

Test the installation with the command:

  • For HTTPS repository

    helm install vls vm/victoria-logs-single -f values.yaml -n NAMESPACE --debug --dry-run
    
  • For OCI repository

    helm install vls oci://ghcr.io/victoriametrics/helm-charts/victoria-logs-single -f values.yaml -n NAMESPACE --debug --dry-run
    

Install the chart with the command:

  • For HTTPS repository

    helm install vls vm/victoria-logs-single -f values.yaml -n NAMESPACE
    
  • For OCI repository

    helm install vls oci://ghcr.io/victoriametrics/helm-charts/victoria-logs-single -f values.yaml -n NAMESPACE
    

Get the list of pods by running this command:

kubectl get pods -A | grep 'vls'

Get the application release by running this command:

helm list -f vls -n NAMESPACE

See the version history of the vls release with the command:

helm history vls -n NAMESPACE

How to uninstall

Remove the application with the command:

helm uninstall vls -n NAMESPACE
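Note that if server.persistentVolume.enabled was set, the PersistentVolumeClaims created for the server StatefulSet are usually not removed by helm uninstall. Check for leftover claims and delete them manually if the data is no longer needed (PVC_NAME below is a placeholder):

kubectl get pvc -n NAMESPACE

kubectl delete pvc PVC_NAME -n NAMESPACE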

Documentation of Helm Chart

Install helm-docs following the instructions in this tutorial.

Generate the docs with the helm-docs command:

cd charts/victoria-logs-single

helm-docs

The markdown generation is entirely go template driven. The tool parses metadata from charts and generates a number of sub-templates that can be referenced in a template file (by default README.md.gotmpl). If no template file is provided, the tool has a default internal template that will generate a reasonably formatted README.
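For reference, a minimal README.md.gotmpl that stitches a few of those sub-templates together might look like the sketch below; chart.header, chart.description and chart.valuesSection are helm-docs built-in template names, see the helm-docs documentation for the complete list:

{{ template "chart.header" . }}
{{ template "chart.description" . }}

{{ template "chart.valuesSection" . }}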

Parameters

The following table lists the configurable parameters of the chart and their default values.

Change the values in the victoria-logs-single/values.yaml file according to the needs of your environment.

Key Type Default Description
dashboards.annotations object
{}

Dashboard annotations

dashboards.enabled bool
false

Create VictoriaLogs dashboards

dashboards.grafanaOperator.enabled bool
false

dashboards.grafanaOperator.spec.allowCrossNamespaceImport bool
false

dashboards.grafanaOperator.spec.instanceSelector.matchLabels.dashboards string
grafana

dashboards.labels object
{}

Dashboard labels

extraObjects list
[]

Add extra specs dynamically to this chart

global.cluster.dnsDomain string
cluster.local.

K8s cluster domain suffix, used for building the storage pods' FQDN. Details are here

global.compatibility object
openshift:
    adaptSecurityContext: auto

Openshift security context compatibility configuration

global.image.registry string
""

Image registry, that can be shared across multiple helm charts

global.imagePullSecrets list
[]

Image pull secrets, that can be shared across multiple helm charts

nameOverride string
""

Override chart name

podDisruptionBudget object
enabled: false
extraLabels: {}

See kubectl explain poddisruptionbudget.spec for more. Details are here

podDisruptionBudget.extraLabels object
{}

PodDisruptionBudget extra labels

printNotes bool
true

Print chart notes

server.affinity object
{}

Pod affinity

server.containerWorkingDir string
""

Container workdir

server.emptyDir object
{}

Use an alternate scheduler, e.g. “stork”. Check details here schedulerName:

server.enabled bool
true

Enable deployment of server component. Deployed as StatefulSet

server.env list
[]

Additional environment variables (e.g. secret tokens, flags). Details are here

server.envFrom list
[]

Specify alternative source for env variables

server.extraArgs object
envflag.enable: "true"
envflag.prefix: VM_
loggerFormat: json

Extra command line arguments for container of component

server.extraContainers list
[]

Extra containers to run in a pod with Victoria Logs container

server.extraHostPathMounts list
[]

Additional hostPath mounts

server.extraLabels object
{}

StatefulSet/Deployment additional labels

server.extraVolumeMounts list
[]

Extra Volume Mounts for the container

server.extraVolumes list
[]

Extra Volumes for the pod

server.image.pullPolicy string
IfNotPresent

Image pull policy

server.image.registry string
""

Image registry

server.image.repository string
victoriametrics/victoria-logs

Image repository

server.image.tag string
""

Image tag

server.image.variant string
victorialogs

Image tag suffix, which is appended to Chart.AppVersion if no server.image.tag is defined

server.imagePullSecrets list
[]

Image pull secrets

server.ingress.annotations string
null

Ingress annotations

server.ingress.enabled bool
false

Enable deployment of ingress for server component

server.ingress.extraLabels object
{}

Ingress extra labels

server.ingress.hosts list
[]

Array of host objects

server.ingress.ingressClassName string
""

Ingress controller class name

server.ingress.pathType string
Prefix

Ingress path type

server.ingress.tls list
[]

Array of TLS objects

server.initContainers list
[]

Init containers for Victoria Logs Pod

server.nodeSelector object
{}

Pods node selector. Details are here

server.persistentVolume.accessModes list
- ReadWriteOnce

Array of access modes. Must match those of existing PV or dynamic provisioner. Details are here

server.persistentVolume.annotations object
{}

Persistent volume annotations

server.persistentVolume.enabled bool
false

Create/use Persistent Volume Claim for server component. Empty dir if false

server.persistentVolume.existingClaim string
""

Existing Claim name. If defined, the PVC must be created manually before the volume will be bound

server.persistentVolume.matchLabels object
{}

Bind Persistent Volume by labels. Must match all labels of targeted PV.

server.persistentVolume.mountPath string
/storage

Mount path. Server data Persistent Volume mount root path.

server.persistentVolume.name string
""

Override Persistent Volume Claim name

server.persistentVolume.size string
3Gi

Size of the volume. Should be calculated based on the logs you send and retention policy you set.

server.persistentVolume.storageClassName string
""

StorageClass to use for persistent volume. Requires server.persistentVolume.enabled: true. If defined, the PVC is created automatically

server.persistentVolume.subPath string
""

Mount subpath

server.podAnnotations object
{}

Pods annotations

server.podLabels object
{}

Pods additional labels

server.podManagementPolicy string
OrderedReady

Pods management policy

server.podSecurityContext object
enabled: true
fsGroup: 2000
runAsNonRoot: true
runAsUser: 1000

Pods security context. Details are here

server.priorityClassName string
""

Name of Priority Class

server.probe.liveness object
failureThreshold: 10
initialDelaySeconds: 30
periodSeconds: 30
tcpSocket: {}
timeoutSeconds: 5

Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.

server.probe.readiness object
failureThreshold: 3
httpGet: {}
initialDelaySeconds: 5
periodSeconds: 15
timeoutSeconds: 5

Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.

server.probe.startup object
{}

Indicates whether the Container is done with potentially costly initialization. If set, it is executed first. If it fails, the Container is restarted. If it succeeds, the liveness and readiness probes take over.

server.replicaCount int
1

Replica count

server.resources object
{}

Resource object. Details are here

server.retentionPeriod int
1

Data retention period in months

server.securityContext object
allowPrivilegeEscalation: false
capabilities:
    drop:
        - ALL
enabled: true
readOnlyRootFilesystem: true

Security context to be added to server pods

server.service.annotations object
{}

Service annotations

server.service.clusterIP string
""

Service ClusterIP

server.service.externalIPs list
[]

Service external IPs. Details are here

server.service.externalTrafficPolicy string
""

Service external traffic policy. Check here for details

server.service.healthCheckNodePort string
""

Health check node port for a service. Check here for details

server.service.ipFamilies list
[]

List of service IP families. Check here for details.

server.service.ipFamilyPolicy string
""

Service IP family policy. Check here for details.

server.service.labels object
{}

Service labels

server.service.loadBalancerIP string
""

Service load balancer IP

server.service.loadBalancerSourceRanges list
[]

Load balancer source range

server.service.servicePort int
9428

Service port

server.service.type string
ClusterIP

Service type

server.serviceMonitor.annotations object
{}

Service Monitor annotations

server.serviceMonitor.basicAuth object
{}

Basic auth params for Service Monitor

server.serviceMonitor.enabled bool
false

Enable deployment of Service Monitor for server component. This is a Prometheus operator object

server.serviceMonitor.extraLabels object
{}

Service Monitor labels

server.serviceMonitor.metricRelabelings list
[]

Service Monitor metricRelabelings

server.serviceMonitor.relabelings list
[]

Service Monitor relabelings

server.statefulSet.enabled bool
true

Creates a StatefulSet instead of a Deployment; useful when you want to keep the cache

server.statefulSet.podManagementPolicy string
OrderedReady

Deploy order policy for StatefulSet pods

server.terminationGracePeriodSeconds int
60

Pods termination grace period in seconds

server.tolerations list
[]

Node tolerations for server scheduling to nodes with taints. Details are here

server.topologySpreadConstraints list
[]

Pod topologySpreadConstraints

vector object
containerPorts:
    - containerPort: 9090
      name: prom-exporter
      protocol: TCP
customConfig:
    api:
        address: 127.0.0.1:8686
        enabled: false
        playground: true
    data_dir: /vector-data-dir
    sinks:
        exporter:
            address: 0.0.0.0:9090
            inputs:
                - internal_metrics
            type: prometheus_exporter
        vlogs:
            api_version: v8
            compression: gzip
            endpoints: << include "vlogs.es.urls" . >>
            healthcheck:
                enabled: false
            inputs:
                - parser
            mode: bulk
            request:
                headers:
                    AccountID: "0"
                    ProjectID: "0"
                    VL-Msg-Field: message,msg,_msg,log.msg,log.message,log
                    VL-Stream-Fields: stream,kubernetes.pod_name,kubernetes.container_name,kubernetes.pod_namespace
                    VL-Time-Field: timestamp
            type: elasticsearch
    sources:
        internal_metrics:
            type: internal_metrics
        k8s:
            type: kubernetes_logs
    transforms:
        parser:
            inputs:
                - k8s
            source: |
                .log = parse_json(.message) ?? .message
                del(.message)
            type: remap
dataDir: /vector-data-dir
enabled: false
existingConfigMaps:
    - vl-config
podMonitor:
    enabled: false
resources: {}
role: Agent
service:
    enabled: false

Values for vector helm chart

vector.enabled bool
false

Enable deployment of vector
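
For example, a values.yaml fragment that enables the bundled Grafana dashboards, the Prometheus operator ServiceMonitor and log collection via vector could look like the sketch below. Only keys from the table above are used; the dashboards and ServiceMonitor options assume Grafana and the Prometheus operator are already present in the cluster:

dashboards:
  enabled: true

server:
  serviceMonitor:
    enabled: true

vector:
  enabled: true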