# Filebeat setup

[Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html) log collector supports [Elasticsearch output](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) compatible with the VictoriaLogs [ingestion format](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#elasticsearch-bulk-api).

Specify the [`output.elasticsearch`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) section in `filebeat.yml` for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):

```yml
output.elasticsearch:
  hosts: ["http://localhost:9428/insert/elasticsearch/"]
  parameters:
    _msg_field: "message"
    _time_field: "@timestamp"
    _stream_fields: "host.hostname,log.file.path"
```

Substitute the `localhost:9428` address inside the `hosts` section with the real TCP address of VictoriaLogs.

See [these docs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) for details on the `parameters` section.

It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields). This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) and then inspecting the logs generated by VictoriaLogs:

```yml
output.elasticsearch:
  hosts: ["http://localhost:9428/insert/elasticsearch/"]
  parameters:
    _msg_field: "message"
    _time_field: "@timestamp"
    _stream_fields: "host.hostname,log.file.path"
    debug: "1"
```

If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) must be skipped during data ingestion, then they can be put into the `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
For example, the following config instructs VictoriaLogs to ignore the `log.offset` and `event.original` fields in the ingested logs:

```yml
output.elasticsearch:
  hosts: ["http://localhost:9428/insert/elasticsearch/"]
  parameters:
    _msg_field: "message"
    _time_field: "@timestamp"
    _stream_fields: "host.name,log.file.path"
    ignore_fields: "log.offset,event.original"
```

When Filebeat ingests logs into VictoriaLogs at a high rate, it may be needed to tune the `worker` and `bulk_max_size` options. For example, the following config is optimized for higher than usual ingestion rate:

```yml
output.elasticsearch:
  hosts: ["http://localhost:9428/insert/elasticsearch/"]
  parameters:
    _msg_field: "message"
    _time_field: "@timestamp"
    _stream_fields: "host.name,log.file.path"
  worker: 8
  bulk_max_size: 1000
```

If Filebeat sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compression_level` option. This usually allows saving network bandwidth and costs by up to 5 times:

```yml
output.elasticsearch:
  hosts: ["http://localhost:9428/insert/elasticsearch/"]
  parameters:
    _msg_field: "message"
    _time_field: "@timestamp"
    _stream_fields: "host.name,log.file.path"
  compression_level: 1
```

By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/#multitenancy). If you need to store logs in another tenant, then specify the needed tenant via `headers` in the `output.elasticsearch` section. For example, the following `filebeat.yml` config instructs Filebeat to store the data to the `(AccountID=12, ProjectID=34)` tenant:

```yml
output.elasticsearch:
  hosts: ["http://localhost:9428/insert/elasticsearch/"]
  headers:
    AccountID: 12
    ProjectID: 34
  parameters:
    _msg_field: "message"
    _time_field: "@timestamp"
    _stream_fields: "host.name,log.file.path"
```

More info about the output parameters can be found in [these docs](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html).
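The `compression_level` option mentioned above applies gzip compression to the request bodies Filebeat sends. As a rough illustration of why even level 1 saves substantial bandwidth on repetitive JSON logs, here is a minimal Python sketch (the log lines are hypothetical and only approximate the shape of Filebeat-shipped events):

```python
import gzip
import json

# Hypothetical repetitive log lines, similar in shape to what Filebeat ships.
logs = "\n".join(
    json.dumps({"@timestamp": "2023-06-21T04:24:24Z",
                "message": f"request handled in {i} ms",
                "host": {"name": "host1"}})
    for i in range(1000)
).encode()

# compresslevel=1 matches compression_level: 1 - the fastest gzip setting.
compressed = gzip.compress(logs, compresslevel=1)
print(f"raw: {len(logs)} bytes, gzipped: {len(compressed)} bytes, "
      f"ratio: {len(logs) / len(compressed):.1f}x")
```

Higher `compression_level` values trade extra CPU on the Filebeat side for somewhat smaller request bodies; level 1 is usually a good starting point.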
See [this demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat-docker) for running Filebeat with VictoriaLogs via docker-compose and collecting logs into VictoriaLogs.

The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).

See also [data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting) docs.
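As a quick illustration of the querying step, the following Python sketch builds a request to the `/select/logsql/query` endpoint of VictoriaLogs; the `localhost:9428` address and the query text are assumptions for this example, so substitute your real host and query:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical address - substitute the real VictoriaLogs TCP address.
VLOGS = "http://localhost:9428"

def query_request(query: str) -> Request:
    # LogsQL queries are sent as a POST form to /select/logsql/query.
    return Request(f"{VLOGS}/select/logsql/query",
                   data=urlencode({"query": query}).encode())

# Example: select recent "error" entries from a specific log stream.
req = query_request('_stream:{host.name="host1"} error')
print(req.full_url)
# Sending it with urllib.request.urlopen(req) returns matching log
# entries from VictoriaLogs.
```

This mirrors the `curl`-based examples in the querying docs linked above.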