victorialogs: marked fluentd support in roadmap, added syslog example (#7098)

### Describe Your Changes

Marked Fluentd as supported in the VictoriaLogs roadmap.
Added a Fluentd syslog example setup.

### Checklist

The following checks are **mandatory**:

- [ ] My change adheres to [VictoriaMetrics contributing guidelines](https://docs.victoriametrics.com/contributing/).
Andrii Chubatiuk 2024-09-27 15:38:39 +03:00 committed by GitHub
parent 86c0eb816c
commit 05a64a8c14
26 changed files with 298 additions and 55 deletions


@@ -1,10 +1,10 @@
# Docker compose Filebeat integration with VictoriaLogs

The folder contains examples of [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html) integration with VictoriaLogs using protocols:

* [syslog](./syslog)
* [elasticsearch](./elasticsearch)

To spin up the environment, `cd` to any of the directories listed above and run the following command:
```
docker compose up -d
```
@@ -18,9 +18,9 @@ docker compose rm -f
The docker compose file contains the following components:

* filebeat - logs collection agent configured to collect and write data to `victorialogs`
* victorialogs - logs database, receives data from the `filebeat` agent
* victoriametrics - metrics database, which collects metrics from `victorialogs` and `filebeat` for observability purposes

Querying the data


@@ -1,8 +1,8 @@
include:
  - ../compose.yml
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.15.1
    restart: on-failure
    volumes:
      - type: bind


@@ -1,11 +1,11 @@
# Docker compose FluentBit integration with VictoriaLogs

The folder contains examples of [FluentBit](https://docs.fluentbit.io/manual) integration with VictoriaLogs using protocols:

* [loki](./loki)
* [jsonline single node](./jsonline)
* [jsonline HA setup](./jsonline-ha)

To spin up the environment, `cd` to any of the directories listed above and run the following command:
```
docker compose up -d
```
@@ -19,8 +19,9 @@ docker compose rm -f
The docker compose file contains the following components:

* fluentbit - logs collection agent configured to collect and write data to `victorialogs`
* victorialogs - logs database, receives data from the `fluentbit` agent
* victoriametrics - metrics database, which collects metrics from `victorialogs` and `fluentbit` for observability purposes

Querying the data


@@ -13,11 +13,23 @@
    Parser syslog-rfc3164
    Mode tcp

[INPUT]
    name fluentbit_metrics
    tag internal_metrics
    scrape_interval 2

[SERVICE]
    Flush 1
    Parsers_File parsers.conf

[OUTPUT]
    Name prometheus_remote_write
    Match internal_metrics
    Host victoriametrics
    Port 8428
    Uri /api/v1/write

[OUTPUT]
    Name http
    Match *
    host victorialogs
@@ -29,7 +41,7 @@
    header AccountID 0
    header ProjectID 0

[OUTPUT]
    Name http
    Match *
    host victorialogs-2


@@ -13,11 +13,23 @@
    Parser syslog-rfc3164
    Mode tcp

[INPUT]
    name fluentbit_metrics
    tag internal_metrics
    scrape_interval 2

[SERVICE]
    Flush 1
    Parsers_File parsers.conf

[OUTPUT]
    Name prometheus_remote_write
    Match internal_metrics
    Host victoriametrics
    Port 8428
    Uri /api/v1/write

[OUTPUT]
    Name http
    Match *
    host victorialogs


@@ -13,10 +13,22 @@
    Parser syslog-rfc3164
    Mode tcp

[INPUT]
    name fluentbit_metrics
    tag internal_metrics
    scrape_interval 2

[SERVICE]
    Flush 1
    Parsers_File parsers.conf

[OUTPUT]
    Name prometheus_remote_write
    Match internal_metrics
    Host victoriametrics
    Port 8428
    Uri /api/v1/write

[OUTPUT]
    name loki
    match *


@@ -4,5 +4,6 @@ RUN \
    gem install \
        fluent-plugin-datadog \
        fluent-plugin-grafana-loki \
        fluent-plugin-elasticsearch \
        fluent-plugin-remote_syslog
USER fluent


@@ -1,10 +1,12 @@
# Docker compose Fluentd integration with VictoriaLogs

The folder contains examples of [Fluentd](https://www.fluentd.org/) integration with VictoriaLogs using protocols:

* [loki](./loki)
* [jsonline](./jsonline)
* [elasticsearch](./elasticsearch)

All plugins required to support the protocols listed above can be found in the [Dockerfile](./Dockerfile).

To spin up the environment, `cd` to any of the directories listed above and run the following command:
```
docker compose up -d
```
@@ -19,8 +21,9 @@ docker compose rm -f
The docker compose file contains the following components:

* fluentd - logs collection agent configured to collect and write data to `victorialogs`
* victorialogs - logs database, receives data from the `fluentd` agent
* victoriametrics - metrics database, which collects metrics from `victorialogs` and `fluentd` for observability purposes

Querying the data


@@ -0,0 +1,3 @@
include:
  - ../compose.yml
name: fluentd-syslog


@@ -0,0 +1,19 @@
<source>
  @type tail
  format none
  tag docker.testlog
  path /var/lib/docker/containers/**/*.log
</source>

<match **>
  @type remote_syslog
  host victorialogs
  port 8094
  severity debug
  program fluentd
  protocol tcp
  <format>
    @type single_value
    message_key message
  </format>
</match>
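For a quick smoke test of this path, one can hand-craft an RFC3164 line like the ones the `remote_syslog` output above emits (a sketch; the hostname and message text are made up for illustration, and sending via `nc` assumes the compose network exposes the syslog listener on port 8094 as configured above):

```shell
# Build a sample RFC3164 syslog line similar to what the remote_syslog
# output sends (priority 14 = facility "user", severity "info";
# "myhost" and the message text are hypothetical).
msg="<14>$(date '+%b %d %H:%M:%S') myhost fluentd: test message"
printf '%s\n' "$msg"
# To push it into the example's syslog listener over TCP, pipe it through nc:
#   printf '%s\n' "$msg" | nc victorialogs 8094
```

This bypasses Fluentd entirely, which helps separate collector misconfiguration from listener problems.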


@@ -1,16 +1,13 @@
# Docker compose Logstash integration with VictoriaLogs

The folder contains examples of [Logstash](https://www.elastic.co/logstash) integration with VictoriaLogs using protocols:

* [loki](./loki)
* [jsonline single node](./jsonline)
* [jsonline HA setup](./jsonline-ha)
* [elasticsearch](./elasticsearch)

All plugins required to support the protocols listed above can be found in the [Dockerfile](./Dockerfile).

To spin up the environment, `cd` to any of the directories listed above and run the following command:
```
docker compose up -d
```
@@ -25,8 +22,9 @@ docker compose rm -f
The docker compose file contains the following components:

* logstash - logs collection agent configured to collect and write data to `victorialogs`
* victorialogs - logs database, receives data from the `logstash` agent
* victoriametrics - metrics database, which collects metrics from `victorialogs` and `logstash` for observability purposes

Querying the data


@@ -1,4 +1,6 @@
# Docker compose OpenTelemetry collector integration with VictoriaLogs

The folder contains examples of [OpenTelemetry collector](https://opentelemetry.io/docs/collector/) integration with VictoriaLogs using protocols:

* [loki](./loki)
* [otlp](./otlp)
@@ -6,8 +8,6 @@
* [elasticsearch single node](./elasticsearch)
* [elasticsearch HA mode](./elasticsearch-ha/)

To spin up the environment, `cd` to any of the directories listed above and run the following command:
```
docker compose up -d
```
@@ -21,9 +21,9 @@ docker compose rm -f
The docker compose file contains the following components:

* collector - logs collection agent configured to collect and write data to `victorialogs`
* victorialogs - logs database, receives data from the `collector` agent
* victoriametrics - metrics database, which collects metrics from `victorialogs` and `collector` for observability purposes

Querying the data


@@ -9,6 +9,15 @@ receivers:
  resource:
    region: us-east-1
service:
  telemetry:
    metrics:
      readers:
        - periodic:
            interval: 5000
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://victoriametrics:8428/opentelemetry/api/v1/push
  pipelines:
    logs:
      receivers: [filelog]


@@ -8,6 +8,15 @@ receivers:
  resource:
    region: us-east-1
service:
  telemetry:
    metrics:
      readers:
        - periodic:
            interval: 5000
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://victoriametrics:8428/opentelemetry/api/v1/push
  pipelines:
    logs:
      receivers: [filelog]


@@ -7,6 +7,15 @@ receivers:
  resource:
    region: us-east-1
service:
  telemetry:
    metrics:
      readers:
        - periodic:
            interval: 5000
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://victoriametrics:8428/opentelemetry/api/v1/push
  pipelines:
    logs:
      receivers: [filelog]


@@ -9,6 +9,15 @@ receivers:
  resource:
    region: us-east-1
service:
  telemetry:
    metrics:
      readers:
        - periodic:
            interval: 5000
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://victoriametrics:8428/opentelemetry/api/v1/push
  pipelines:
    logs:
      receivers: [filelog]


@@ -17,6 +17,15 @@ receivers:
  filelog:
    include: [/tmp/logs/*.log]
service:
  telemetry:
    metrics:
      readers:
        - periodic:
            interval: 5000
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://victoriametrics:8428/opentelemetry/api/v1/push
  pipelines:
    logs:
      receivers: [filelog]


@@ -0,0 +1,32 @@
# Docker compose Promtail integration with VictoriaLogs

The folder contains an example of [Promtail agent](https://grafana.com/docs/loki/latest/send-data/promtail/) integration with VictoriaLogs using protocols:

* [loki](./loki)

To spin up the environment, `cd` to any of the directories listed above and run the following command:
```
docker compose up -d
```
To shut down the docker-compose environment run the following command:
```
docker compose down
docker compose rm -f
```
The docker compose file contains the following components:

* promtail - logs collection agent configured to collect and write data to `victorialogs`
* victorialogs - logs database, receives data from the `promtail` agent
* victoriametrics - metrics database, which collects metrics from `victorialogs` and `promtail` for observability purposes

Querying the data

* [vmui](https://docs.victoriametrics.com/victorialogs/querying/#vmui) - a web UI accessible at `http://localhost:9428/select/vmui`
* for querying the data via command-line please check [these docs](https://docs.victoriametrics.com/victorialogs/querying/#command-line)

Promtail agent configuration example can be found below:

* [loki](./loki/config.yml)

Please note that the `_stream_fields` parameter must follow the recommended [best practices](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields) to achieve better performance.
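For orientation, the referenced `loki` config boils down to a Promtail `clients` entry pointing at VictoriaLogs' Loki-compatible push endpoint. The fragment below is a minimal sketch under assumptions (it omits the `server`/`positions` sections a full config needs, and the `job` label and log path are illustrative, not taken from the example):

```yaml
# Partial Promtail config sketch: ship logs to VictoriaLogs via the Loki API.
clients:
  - url: http://victorialogs:9428/insert/loki/api/v1/push?_stream_fields=instance,job

scrape_configs:
  - job_name: docker                # hypothetical job name
    static_configs:
      - targets: [localhost]
        labels:
          job: docker               # assumed label, becomes a stream field
          __path__: /var/lib/docker/containers/**/*.log  # assumed path
```

The `_stream_fields` query arg tells VictoriaLogs which labels to treat as stream fields, per the best practices linked above.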


@@ -1,6 +1,6 @@
# Docker compose Telegraf integration with VictoriaLogs

The folder contains examples of [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) integration with VictoriaLogs using protocols:

* [elasticsearch](./elasticsearch)
* [loki](./loki)
@@ -20,9 +20,9 @@ docker compose rm -f
The docker compose file contains the following components:

* telegraf - logs collection agent configured to collect and write data to `victorialogs`
* victorialogs - logs database, receives data from the `telegraf` agent
* victoriametrics - metrics database, which collects metrics from `victorialogs` and `telegraf` for observability purposes

Querying the data


@@ -1,12 +1,12 @@
# Docker compose Vector integration with VictoriaLogs

The folder contains examples of [Vector](https://vector.dev/docs/) integration with VictoriaLogs using protocols:

* [elasticsearch](./elasticsearch)
* [loki](./loki)
* [jsonline single node](./jsonline)
* [jsonline HA setup](./jsonline-ha)

To spin up the environment, `cd` to any of the directories listed above and run the following command:
```
docker compose up -d
```
@@ -20,9 +20,9 @@ docker compose rm -f
The docker compose file contains the following components:

* vector - logs collection agent configured to collect and write data to `victorialogs`
* victorialogs - logs database, receives data from the `vector` agent
* victoriametrics - metrics database, which collects metrics from `victorialogs` and `vector` for observability purposes

Querying the data


@@ -22,7 +22,7 @@ The following functionality is planned in the future versions of VictoriaLogs:
- Support for [data ingestion](https://docs.victoriametrics.com/victorialogs/data-ingestion/) from popular log collectors and formats:
  - [x] [OpenTelemetry for logs](https://docs.victoriametrics.com/victorialogs/data-ingestion/opentelemetry/)
  - [x] [Fluentd](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentd/)
  - [ ] [Journald](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/4618) (systemd)
  - [ ] [Datadog protocol for logs](https://github.com/VictoriaMetrics/VictoriaMetrics/issues/6632)
  - [x] [Telegraf](https://docs.victoriametrics.com/victorialogs/data-ingestion/telegraf/)


@@ -11,9 +11,6 @@ aliases:
- /victorialogs/data-ingestion/fluentbit.html
- /victorialogs/data-ingestion/Fluentbit.html
---
VictoriaLogs supports the following Fluentbit outputs:
- [Loki](#loki)
- [HTTP JSON](#http)


@@ -0,0 +1,109 @@
---
weight: 2
title: Fluentd setup
disableToc: true
menu:
docs:
parent: "victorialogs-data-ingestion"
weight: 2
aliases:
- /VictoriaLogs/data-ingestion/Fluentd.html
- /victorialogs/data-ingestion/fluentd.html
- /victorialogs/data-ingestion/Fluentd.html
---
VictoriaLogs supports the following Fluentd outputs:
- [Loki](#loki)
- [HTTP JSON](#http)
## Loki
Specify the [loki output](https://grafana.com/docs/loki/latest/send-data/fluentd/) section in the `fluentd.conf`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
```fluentd
<match **>
  @type loki
  url "http://localhost:9428/insert"
  <buffer>
    flush_interval 10s
    flush_at_shutdown true
  </buffer>
  custom_headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
  buffer_chunk_limit 1m
</match>
```
## HTTP
Specify the [http output](https://docs.fluentd.org/output/http) section in the `fluentd.conf`
for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/victorialogs/):
```fluentd
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
</match>
```
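Before wiring Fluentd, the same jsonline endpoint can be exercised directly with `curl` to confirm that VictoriaLogs accepts the payload (a sketch; it assumes VictoriaLogs listens on `localhost:9428`, and the sample field values are hypothetical):

```shell
# A single JSON log line using the same field names the headers above map:
# "log" as the message, "time" as the timestamp, "path" as a stream field.
line='{"log":"hello from curl","time":"2024-09-27T12:00:00Z","path":"/var/log/test.log"}'
printf '%s\n' "$line"
# Post it straight to the jsonline endpoint (run when VictoriaLogs is up);
# the query args mirror the VL-* headers used in the Fluentd config:
#   curl -X POST 'http://localhost:9428/insert/jsonline?_msg_field=log&_time_field=time&_stream_fields=path' \
#     --data-binary "$line"
```

If the `curl` path works but the Fluentd path does not, the problem is in the collector config rather than in VictoriaLogs.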
Substitute the host (`localhost`) and port (`9428`) with the real TCP address of VictoriaLogs.
See [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) for details on the query args specified in the `endpoint`.
It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model)
and uses the correct [stream fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#stream-fields).
This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters) in the `endpoint`
and then inspecting the VictoriaLogs logs:
```fluentd
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline?debug=1"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
</match>
```
If some [log fields](https://docs.victoriametrics.com/victorialogs/keyconcepts/#data-model) must be skipped
during data ingestion, then they can be put into the `ignore_fields` [parameter](https://docs.victoriametrics.com/victorialogs/data-ingestion/#http-parameters).
For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
```fluentd
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline?ignore_fields=log.offset,event.original"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
</match>
```
If Fluentd sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compress gzip` option.
This usually allows saving network bandwidth and costs by up to 5 times:
```fluentd
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline?ignore_fields=log.offset,event.original"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
  compress gzip
</match>
```
By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/victorialogs/keyconcepts/#multitenancy).
If you need to store logs in another tenant, then specify the needed tenant via `header` options.
For example, the following `fluentd.conf` config instructs Fluentd to store the data in the `(AccountID=12, ProjectID=34)` tenant:
```fluentd
<match **>
  @type http
  endpoint "http://localhost:9428/insert/jsonline"
  headers {"VL-Msg-Field": "log", "VL-Time-Field": "time", "VL-Stream-Fields": "path"}
  header AccountID 12
  header ProjectID 34
</match>
```
See also:
- [Data ingestion troubleshooting](https://docs.victoriametrics.com/victorialogs/data-ingestion/#troubleshooting).
- [How to query VictoriaLogs](https://docs.victoriametrics.com/victorialogs/querying/).
- [Fluentd HTTP output config docs](https://docs.fluentd.org/output/http).
- [Docker-compose demo for Fluentd integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentd).


@@ -3,6 +3,7 @@
- Syslog, Rsyslog and Syslog-ng - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/syslog/).
- Filebeat - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/filebeat/).
- Fluentbit - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentbit/).
- Fluentd - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentd/).
- Logstash - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/logstash/).
- Vector - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/vector/).
- Promtail (aka Grafana Loki) - see [these docs](https://docs.victoriametrics.com/victorialogs/data-ingestion/promtail/).
@@ -286,3 +287,5 @@ Here is the list of log collectors and their ingestion formats supported by VictoriaLogs:
| [Promtail](https://docs.victoriametrics.com/victorialogs/data-ingestion/promtail/) | No | No | [Yes](https://grafana.com/docs/loki/latest/clients/promtail/configuration/#clients) | No | No |
| [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) | [Yes](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/elasticsearchexporter) | No | [Yes](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/lokiexporter) | [Yes](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/syslogexporter) | [Yes](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter) |
| [Telegraf](https://docs.victoriametrics.com/victorialogs/data-ingestion/telegraf/) | [Yes](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/elasticsearch) | [Yes](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/http) | [Yes](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/loki) | [Yes](https://github.com/influxdata/telegraf/blob/master/plugins/outputs/syslog) | Yes |
| [Fluentd](https://docs.victoriametrics.com/victorialogs/data-ingestion/fluentd/) | [Yes](https://github.com/uken/fluent-plugin-elasticsearch) | [Yes](https://docs.fluentd.org/output/http) | [Yes](https://grafana.com/docs/loki/latest/send-data/fluentd/) | [Yes](https://github.com/fluent-plugins-nursery/fluent-plugin-remote_syslog) | No |


@@ -9,8 +9,6 @@ menu:
aliases:
- /VictoriaLogs/data-ingestion/Telegraf.html
---
VictoriaLogs supports the following Telegraf outputs:
- [Elasticsearch](#elasticsearch)
- [Loki](#loki)


@@ -9,8 +9,6 @@ menu:
aliases:
- /VictoriaLogs/data-ingestion/OpenTelemetry.html
---
VictoriaLogs supports both the client OpenTelemetry [SDK](https://opentelemetry.io/docs/languages/) and the [collector](https://opentelemetry.io/docs/collector/).
## Client SDK