diff --git a/docs/VictoriaLogs/data-ingestion/Filebeat.md b/docs/VictoriaLogs/data-ingestion/Filebeat.md
index 73c762ab8f..4bb1b5891b 100644
--- a/docs/VictoriaLogs/data-ingestion/Filebeat.md
+++ b/docs/VictoriaLogs/data-ingestion/Filebeat.md
@@ -1,9 +1,5 @@
 # Filebeat setup
 
-[Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html) log collector supports
-[Elasticsearch output](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) compatible with
-VictoriaMetrics [ingestion format](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#elasticsearch-bulk-api).
-
 Specify [`output.elasticsearch`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) section in the `filebeat.yml`
 for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
 
@@ -92,12 +88,9 @@ output.elasticsearch:
     _stream_fields: "host.name,log.file.path"
 ```
 
-More info about output parameters you can find in [these docs](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html).
-
-[Here is a demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat-docker) for
-running Filebeat with VictoriaLogs with docker-compose and collecting logs to VictoriaLogs.
-
-The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
-
-See also [data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting) docs.
+See also:
 
+- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
+- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+- [Filebeat `output.elasticsearch` docs](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html).
+- [Docker-compose demo for Filebeat integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/filebeat-docker).
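+Before wiring up Filebeat, the VictoriaLogs endpoint can be smoke-tested by sending a single log entry manually to the
+[Elasticsearch bulk API](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#elasticsearch-bulk-api).
+This is an illustrative sketch: the `localhost:9428` address and the field values are assumptions to adjust for your setup.
+
+```sh
+# Send one test log entry; `_msg` and `_time` are the canonical message and time fields.
+curl -X POST -H 'Content-Type: application/json' --data-binary '
+{"create":{}}
+{"_msg":"test message","_time":"2023-06-20T15:31:23Z","host.name":"host1"}
+' 'http://localhost:9428/insert/elasticsearch/_bulk'
+```
+
+If the entry then shows up via [querying](https://docs.victoriametrics.com/VictoriaLogs/querying/), the endpoint is reachable and the Filebeat output can be pointed at it.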
diff --git a/docs/VictoriaLogs/data-ingestion/Fluentbit.md b/docs/VictoriaLogs/data-ingestion/Fluentbit.md
index 48d5030eea..e9894df8c8 100644
--- a/docs/VictoriaLogs/data-ingestion/Fluentbit.md
+++ b/docs/VictoriaLogs/data-ingestion/Fluentbit.md
@@ -1,10 +1,7 @@
 ## Fluentbit setup
 
-[Fluentbit](https://docs.fluentbit.io/manual) log collector supports [HTTP output](https://docs.fluentbit.io/manual/pipeline/outputs/http) compatible with
-VictoriaMetrics [JSON stream API](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#json-stream-api).
-
-Specify [`output`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html) section with `Name http` in the `fluentbit.conf`
-for sending the collected logs to VictoriaLogs:
+Specify the [HTTP output](https://docs.fluentbit.io/manual/pipeline/outputs/http) section in `fluentbit.conf`
+for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
 
 ```conf
 [Output]
@@ -17,18 +14,42 @@ for sending the collected logs to VictoriaLogs:
      json_date_format iso8601
 ```
 
-Substitute the address (`localhost`) and port (`9428`) inside `Output` section with the real TCP address of VictoriaLogs.
+Substitute the host (`localhost`) and port (`9428`) with the real TCP address of VictoriaLogs.
 
-The `_msg_field` parameter must contain the field name with the log message generated by Fluentbit. This is usually `message` field.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#message-field) for details.
+See [these docs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) for details on the query args specified in the `uri`.
 
-The `_time_field` parameter must contain the field name with the log timestamp generated by Fluentbit. This is usually `@timestamp` field.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field) for details.
+It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model)
+and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields).
+This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) in the `uri`
+and then inspecting VictoriaLogs' own logs:
 
-It is recommended specifying comma-separated list of field names, which uniquely identify every log stream collected by Fluentbit, in the `_stream_fields` parameter.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields) for details.
+```conf
+[Output]
+     Name http
+     Match *
+     host localhost
+     port 9428
+     uri /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date&debug=1
+     format json_lines
+     json_date_format iso8601
+```
 
-If the Fluentbit sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via `compress` option.
+If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) must be skipped
+during data ingestion, then they can be listed in the `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
+For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
+
+```conf
+[Output]
+     Name http
+     Match *
+     host localhost
+     port 9428
+     uri /insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date&ignore_fields=log.offset,event.original
+     format json_lines
+     json_date_format iso8601
+```
+
+If Fluentbit sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compress gzip` option.
 This usually allows saving network bandwidth and costs by up to 5 times:
 
 ```conf
@@ -44,8 +65,8 @@ This usually allows saving network bandwidth and costs by up to 5 times:
 ```
 
 By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#multitenancy).
-If you need storing logs in other tenant, then specify the needed tenant via `headers` at `output.elasticsearch` section.
-For example, the following `fluentbit.conf` config instructs Filebeat to store the data to `(AccountID=12, ProjectID=34)` tenant:
+If you need to store logs in another tenant, then specify the needed tenant via `header` options.
+For example, the following `fluentbit.conf` config instructs Fluentbit to store the data in the `(AccountID=12, ProjectID=34)` tenant:
 
 ```conf
 [Output]
@@ -60,11 +81,9 @@ For example, the following `fluentbit.conf` config instructs Filebeat to store t
      header ProjectID 34
 ```
 
-More info about output tuning you can find in [these docs](https://docs.fluentbit.io/manual/pipeline/outputs/http).
+See also:
 
-[Here is a demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit-docker)
-for running Fluentbit with VictoriaLogs with docker-compose and collecting logs from docker-containers to VictoriaLogs.
-
-The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
-
-See also [data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting) docs.
+- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
+- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+- [Fluentbit HTTP output config docs](https://docs.fluentbit.io/manual/pipeline/outputs/http).
+- [Docker-compose demo for Fluentbit integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/fluentbit-docker).
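+The `uri` used above can also be exercised without Fluentbit, by posting a single line to the
+[JSON stream API](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#json-stream-api) endpoint.
+This is a quick check, not part of the Fluentbit setup itself; the address and field values are assumptions to adjust for your setup:
+
+```sh
+# Ingest one entry using the same query args as in the Fluentbit config above.
+curl -X POST 'http://localhost:9428/insert/jsonline/?_stream_fields=stream&_msg_field=log&_time_field=date' \
+  --data '{"log":"test message","date":"2023-06-20T15:31:23Z","stream":"stream1"}'
+```
+
+A successful request confirms that VictoriaLogs is reachable and that the `_msg_field`, `_time_field` and `_stream_fields` args are accepted.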
diff --git a/docs/VictoriaLogs/data-ingestion/Logstash.md b/docs/VictoriaLogs/data-ingestion/Logstash.md
index 825647a613..9e4fc4525c 100644
--- a/docs/VictoriaLogs/data-ingestion/Logstash.md
+++ b/docs/VictoriaLogs/data-ingestion/Logstash.md
@@ -1,10 +1,5 @@
 # Logstash setup
 
-[Logstash](https://www.elastic.co/guide/en/logstash/8.8/introduction.html) log collector supports
-[Opensearch output plugin](https://github.com/opensearch-project/logstash-output-opensearch) compatible with
-[Elasticsearch bulk API](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#elasticsearch-bulk-api)
-in VictoriaMetrics.
-
 Specify [`output.elasticsearch`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) section in the `logstash.conf` file
 for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
 
@@ -100,12 +95,9 @@ output {
 }
 ```
 
-More info about output tuning you can find in [these docs](https://github.com/opensearch-project/logstash-output-opensearch/blob/main/README.md).
+See also:
 
-[Here is a demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash)
-for running Logstash with VictoriaLogs with docker-compose and collecting logs to VictoriaLogs
-(via [Elasticsearch bulk API](https://docs.victoriametrics.com/VictoriaLogs/daat-ingestion/#elasticsearch-bulk-api)).
-
-The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
-
-See also [data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting) docs.
+- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
+- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+- [Logstash `output.elasticsearch` docs](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html).
+- [Docker-compose demo for Logstash integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/logstash).
diff --git a/docs/VictoriaLogs/data-ingestion/README.md b/docs/VictoriaLogs/data-ingestion/README.md
index 24d28c354d..36e5b378bb 100644
--- a/docs/VictoriaLogs/data-ingestion/README.md
+++ b/docs/VictoriaLogs/data-ingestion/README.md
@@ -7,11 +7,13 @@
 - Logstash. See [how to setup Logstash for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html).
 - Vector. See [how to setup Vector for sending logs to VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html).
 
-See also [Log collectors and data ingestion formats](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#log-collectors-and-data-ingestion-formats) in VictoriaMetrics.
-
 The ingested logs can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
 
-See also [data ingestion troubleshooting](#troubleshooting) docs.
+See also:
+
+- [Log collectors and data ingestion formats](#log-collectors-and-data-ingestion-formats).
+- [Data ingestion troubleshooting](#troubleshooting).
 
 ## HTTP APIs
 
@@ -122,11 +124,11 @@ VictoriaLogs exposes various [metrics](https://docs.victoriametrics.com/Victoria
 
 ## Log collectors and data ingestion formats
 
-Here is the list of supported collectors and their ingestion formats supported by VictoriaLogs:
+Here is the list of log collectors and their ingestion formats supported by VictoriaLogs:
 
-| Collector                                                                                | Elasticsearch                                                                              | JSON Stream                                                   |
+| How to setup the collector                                                               | Format: Elasticsearch                                                                     | Format: JSON Stream                                            |
 |------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|---------------------------------------------------------------|
-| [filebeat](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Filebeat.html)   | [Yes](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html)    | No                                                            |
-| [fluentbit](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Fluentbit.html) | No                                                                                         | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/http) |
-| [logstash](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html)   | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) | No                                                            |
-| [vector](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html)       | [Yes](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/)                | No                                                            |
+| [Filebeat](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Filebeat.html)   | [Yes](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html)    | No                                                            |
+| [Fluentbit](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Fluentbit.html) | No                                                                                         | [Yes](https://docs.fluentbit.io/manual/pipeline/outputs/http) |
+| [Logstash](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Logstash.html)   | [Yes](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) | No                                                            |
+| [Vector](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/Vector.html)       | [Yes](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/)                | No                                                            |
diff --git a/docs/VictoriaLogs/data-ingestion/Vector.md b/docs/VictoriaLogs/data-ingestion/Vector.md
index 3d07b0d9c8..18fde4a31f 100644
--- a/docs/VictoriaLogs/data-ingestion/Vector.md
+++ b/docs/VictoriaLogs/data-ingestion/Vector.md
@@ -1,11 +1,7 @@
 # Vector setup
 
-[Vector](http://vector.dev) log collector supports
-[Elasticsearch sink](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) compatible with
-[VictoriaMetrics Elasticsearch bulk API](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#elasticsearch-bulk-api).
-
-Specify [`sinks.vlogs`](https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html)  with `type=elasticsearch` section in the `vector.toml`
-for sending the collected logs to VictoriaLogs:
+Specify [Elasticsearch sink type](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/) in the `vector.toml`
+for sending the collected logs to [VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/):
 
 ```toml
 [sinks.vlogs]
@@ -26,17 +22,32 @@ Substitute the `localhost:9428` address inside `endpoints` section with the real
 
 Replace `your_input` with the name of the `inputs` section, which collects logs. See [these docs](https://vector.dev/docs/reference/configuration/sources/) for details.
 
-The `_msg_field` parameter must contain the field name with the log message generated by Vector. This is usually `message` field.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#message-field) for details.
+See [these docs](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters) for details on parameters specified
+in the `[sinks.vlogs.query]` section.
 
-The `_time_field` parameter must contain the field name with the log timestamp generated by Vector. This is usually `@timestamp` field.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#time-field) for details.
+It is recommended to verify whether the initial setup generates the needed [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model)
+and uses the correct [stream fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields).
+This can be done by specifying the `debug` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters)
+in the `[sinks.vlogs.query]` section and then inspecting VictoriaLogs' own logs:
 
-It is recommended specifying comma-separated list of field names, which uniquely identify every log stream collected by Vector, in the `_stream_fields` parameter.
-See [these docs](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#stream-fields) for details.
+```toml
+[sinks.vlogs]
+  inputs = [ "your_input" ]
+  type = "elasticsearch"
+  endpoints = [ "http://localhost:9428/insert/elasticsearch/" ]
+  mode = "bulk"
+  api_version = "v8"
+  healthcheck.enabled = false
 
-If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) aren't needed,
-then VictoriaLogs can be instructed to ignore them during data ingestion - just pass `ignore_fields` parameter with comma-separated list of fields to ignore.
+  [sinks.vlogs.query]
+    _msg_field = "message"
+    _time_field = "timestamp"
+    _stream_fields = "host,container_name"
+    debug = "1"
+```
+
+If some [log fields](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#data-model) must be skipped
+during data ingestion, then they can be listed in the `ignore_fields` [parameter](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
 For example, the following config instructs VictoriaLogs to ignore `log.offset` and `event.original` fields in the ingested logs:
 
 ```toml
@@ -55,9 +66,6 @@ For example, the following config instructs VictoriaLogs to ignore `log.offset`
     ignore_fields = "log.offset,event.original"
 ```
 
-More details about `_msg_field`, `_time_field`, `_stream_fields` and `ignore_fields` are
-available [here](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#http-parameters).
-
 When Vector ingests logs into VictoriaLogs at a high rate, then it may be needed to tune `batch.max_events` option.
 For example, the following config is optimized for higher than usual ingestion rate:
 
@@ -79,7 +87,7 @@ For example, the following config is optimized for higher than usual ingestion r
     max_events = 1000
 ```
 
-If the Vector sends logs to VictoriaLogs in another datacenter, then it may be useful enabling data compression via `compression` option.
+If Vector sends logs to VictoriaLogs in another datacenter, then it may be useful to enable data compression via the `compression = "gzip"` option.
 This usually allows saving network bandwidth and costs by up to 5 times:
 
 ```toml
@@ -99,8 +107,8 @@ This usually allows saving network bandwidth and costs by up to 5 times:
 ```
 
 By default, the ingested logs are stored in the `(AccountID=0, ProjectID=0)` [tenant](https://docs.victoriametrics.com/VictoriaLogs/keyConcepts.html#multitenancy).
-If you need storing logs in other tenant, then specify the needed tenant via `custom_headers` at `output.elasticsearch` section.
-For example, the following `vector.toml` config instructs Logstash to store the data to `(AccountID=12, ProjectID=34)` tenant:
+If you need to store logs in another tenant, then specify the needed tenant via the `[sinks.vlogs.request.headers]` section.
+For example, the following `vector.toml` config instructs Vector to store the data in the `(AccountID=12, ProjectID=34)` tenant:
 
 ```toml
 [sinks.vlogs]
@@ -121,12 +129,9 @@ For example, the following `vector.toml` config instructs Logstash to store the
     ProjectID = "34"
 ```
 
-More info about output tuning you can find in [these docs](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/).
+See also:
 
-[Here is a demo](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector-docker)
-for running Vector with VictoriaLogs with docker-compose and collecting logs from docker-containers
-to VictoriaLogs (via [Elasticsearch API](https://docs.victoriametrics.com/VictoriaLogs/ingestion/#elasticsearch-bulk-api)).
-
-The ingested log entries can be queried according to [these docs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
-
-See also [data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting) docs.
+- [Data ingestion troubleshooting](https://docs.victoriametrics.com/VictoriaLogs/data-ingestion/#troubleshooting).
+- [How to query VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/querying/).
+- [Elasticsearch output docs for Vector.dev](https://vector.dev/docs/reference/configuration/sinks/elasticsearch/).
+- [Docker-compose demo for Vector integration with VictoriaLogs](https://github.com/VictoriaMetrics/VictoriaMetrics/tree/master/deployment/docker/victorialogs/vector-docker).
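+Once Vector is shipping data, the ingested entries can be spot-checked with a [LogsQL](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html)
+query over the HTTP API (assuming VictoriaLogs listens on `localhost:9428`):
+
+```sh
+# Return log entries ingested during the last 5 minutes.
+curl 'http://localhost:9428/select/logsql/query' -d 'query=_time:5m'
+```
+
+An empty response usually means the logs are not reaching VictoriaLogs yet; see the troubleshooting docs linked above.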
diff --git a/docs/VictoriaLogs/querying/README.md b/docs/VictoriaLogs/querying/README.md
index a42a592f8b..75c77ab6dc 100644
--- a/docs/VictoriaLogs/querying/README.md
+++ b/docs/VictoriaLogs/querying/README.md
@@ -9,7 +9,7 @@ via the following ways:
 
 ## HTTP API
 
-[VictoriaLogs](https://docs.victoriametrics.com/VictoriaLogs/) can be queried at the `/select/logsql/query` HTTP endpoint.
+VictoriaLogs can be queried at the `/select/logsql/query` HTTP endpoint.
 The [LogsQL](https://docs.victoriametrics.com/VictoriaLogs/LogsQL.html) query must be passed via `query` argument.
 For example, the following query returns all the log entries with the `error` word: