
out_es: sds: Implement es upstream with sds view#11647

Open
cosmo0920 wants to merge 11 commits into master from cosmo0920-implement-es-upstream-with-sds-view

Conversation


@cosmo0920 cosmo0920 commented Apr 1, 2026

To implement the Elasticsearch upstream connection feature, we first need sds_view, which provides borrowed (non-owning) sds operations. With that in place, we can implement the upstream HA feature in the out_es plugin.

This should be a simpler approach than #7608.


Enter [N/A] in the box if an item is not applicable to your change.

Testing
Before we can approve your change, please submit the following in a comment:

  • Example configuration file for the change

Example configs are here:
Normal ES ingestion: fluent-bit-es/fluent-bit.conf and (the same in YAML format) fluent-bit-es/fluent-bit.yaml
Upstream ingestion: fluent-bit-es-cluster/fluent-bit.yaml in the mabrarov/elastic-stack repository.
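The referenced configs live in external repositories. As a rough illustration only (the node names, hosts, and ports below are made up, and which per-node keys out_es honors is defined by this PR, not confirmed here), an upstream definition file in Fluent Bit's classic format looks like:

```
[UPSTREAM]
    name es-balancing

[NODE]
    name node-1
    host 127.0.0.1
    port 9200

[NODE]
    name node-2
    host 127.0.0.1
    port 9201
```

The out_es `upstream` option would then point at this file's path.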

  • Debug log output from testing the change

Normal:

Fluent Bit v5.0.2
* Copyright (C) 2015-2026 The Fluent Bit Authors
* Fluent Bit is a CNCF graduated project under the Fluent organization
* https://fluentbit.io

______ _                  _    ______ _ _           _____  _____ 
|  ___| |                | |   | ___ (_) |         |  ___||  _  |
| |_  | |_   _  ___ _ __ | |_  | |_/ /_| |_  __   _|___ \ | |/' |
|  _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / /   \ \|  /| |
| |   | | |_| |  __/ | | | |_  | |_/ / | |_   \ V //\__/ /\ |_/ /
\_|   |_|\__,_|\___|_| |_|\__| \____/|_|\__|   \_/ \____(_)\___/


[2026/04/02 17:04:42.401] [ info] Configuration:
[2026/04/02 17:04:42.423] [ info]  flush time     | 10.000000 seconds
[2026/04/02 17:04:42.431] [ info]  grace          | 5 seconds
[2026/04/02 17:04:42.431] [ info]  daemon         | 0
[2026/04/02 17:04:42.431] [ info] ___________
[2026/04/02 17:04:42.431] [ info]  inputs:
[2026/04/02 17:04:42.432] [ info]      dummy
[2026/04/02 17:04:42.432] [ info] ___________
[2026/04/02 17:04:42.432] [ info]  filters:
[2026/04/02 17:04:42.432] [ info] ___________
[2026/04/02 17:04:42.433] [ info]  outputs:
[2026/04/02 17:04:42.433] [ info]      es.0
[2026/04/02 17:04:42.433] [ info] ___________
[2026/04/02 17:04:42.434] [ info]  collectors:
[2026/04/02 17:04:42.497] [ info] [fluent bit] version=5.0.2, commit=fb8617662e, pid=2737294
[2026/04/02 17:04:42.502] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2026/04/02 17:04:42.508] [ info] [storage] ver=1.5.4, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2026/04/02 17:04:42.508] [ info] [simd    ] SSE2
[2026/04/02 17:04:42.508] [ info] [cmetrics] version=2.1.1
[2026/04/02 17:04:42.509] [ info] [ctraces ] version=0.7.1
[2026/04/02 17:04:42.523] [ info] [input:dummy:dummy.0] initializing
[2026/04/02 17:04:42.523] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2026/04/02 17:04:42.525] [debug] [dummy:dummy.0] created event channels: read=21 write=22
[2026/04/02 17:04:42.542] [debug] [es:es.0] created event channels: read=23 write=24
[2026/04/02 17:04:42.907] [debug] [output:es:es.0] host=localhost port=9201 uri=/_bulk index=fluent-bit type=_doc
[2026/04/02 17:04:42.982] [ info] [sp] stream processor started
[2026/04/02 17:04:42.984] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
[2026/04/02 17:04:43.028] [ info] [output:es:es.0] worker #0 started
[2026/04/02 17:04:43.031] [ info] [output:es:es.0] worker #3 started
[2026/04/02 17:04:43.031] [ info] [output:es:es.0] worker #1 started
[2026/04/02 17:04:43.034] [ info] [output:es:es.0] worker #2 started
[2026/04/02 17:04:52.019] [debug] [task] created task=0x9863950 id=0 OK
[2026/04/02 17:04:52.021] [debug] [output:es:es.0] task_id=0 assigned to thread #0
[2026/04/02 17:04:52.591] [debug] [upstream] KA connection #69 to localhost:9201 is connected
[2026/04/02 17:04:52.607] [debug] [out_es] converted_size is 0
[2026/04/02 17:04:52.610] [debug] [out_es] converted_size is 0
[2026/04/02 17:04:52.614] [debug] [http_client] not using http_proxy for header
[2026/04/02 17:04:52.687] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2026/04/02 17:04:52.698] [debug] [output:es:es.0] Elasticsearch response
{"errors":false,"took":0,"items":[{"create":{"_index":"fluent-bit","_id":"2u45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":539,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"2-45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":540,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"3O45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":541,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"3e45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":542,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"3u45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":543,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"3-45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":544,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"4O45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":545,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"4e45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":546,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"4u45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":547,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"4-45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":548,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"5O45TZ0BckMZbRRnR-wd","_version":1,"
result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":549,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"5e45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":550,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"5u45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":551,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"5-45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":552,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"6O45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":553,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"6e45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":554,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"6u45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":555,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"6-45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":556,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"7O45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":557,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"7e45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":558,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"7u45TZ0BckMZbRRnR-wd","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":559,"_primary_term":
[2026/04/02 17:04:52.701] [debug] [upstream] KA connection #69 to localhost:9201 is now available
[2026/04/02 17:04:52.704] [debug] [out flush] cb_destroy coro_id=0
[2026/04/02 17:04:52.711] [debug] [task] destroy task=0x9863950 (task_id=0)
^C[2026/04/02 17:04:53] [engine] caught signal (SIGINT)
[2026/04/02 17:04:53.201] [debug] [task] created task=0x9be4f70 id=0 OK
[2026/04/02 17:04:53.202] [debug] [output:es:es.0] task_id=0 assigned to thread #1
[2026/04/02 17:04:53.203] [ warn] [engine] service will shutdown in max 5 seconds
[2026/04/02 17:04:53.204] [debug] [engine] task 0 already scheduled to run, not re-scheduling it.
[2026/04/02 17:04:53.204] [ info] [engine] pausing all inputs..
[2026/04/02 17:04:53.206] [ info] [input] pausing dummy.0
[2026/04/02 17:04:53.235] [debug] [upstream] KA connection #70 to localhost:9201 is connected
[2026/04/02 17:04:53.236] [debug] [http_client] not using http_proxy for header
[2026/04/02 17:04:53.267] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2026/04/02 17:04:53.268] [debug] [output:es:es.0] Elasticsearch response
{"errors":false,"took":0,"items":[{"create":{"_index":"fluent-bit","_id":"Ne45TZ0BckMZbRRnSe14","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":630,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"Nu45TZ0BckMZbRRnSe14","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":631,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"N-45TZ0BckMZbRRnSe14","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":632,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"OO45TZ0BckMZbRRnSe14","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":633,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"Oe45TZ0BckMZbRRnSe14","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":634,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"Ou45TZ0BckMZbRRnSe14","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":635,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"O-45TZ0BckMZbRRnSe14","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":636,"_primary_term":1,"status":201}}]}
[2026/04/02 17:04:53.268] [debug] [upstream] KA connection #70 to localhost:9201 is now available
[2026/04/02 17:04:53.268] [debug] [out flush] cb_destroy coro_id=0
[2026/04/02 17:04:53.269] [debug] [task] destroy task=0x9be4f70 (task_id=0)
[2026/04/02 17:04:54.014] [ info] [engine] service has stopped (0 pending tasks)
[2026/04/02 17:04:54.014] [ info] [input] pausing dummy.0
[2026/04/02 17:04:54.016] [ info] [output:es:es.0] thread worker #0 stopping...
[2026/04/02 17:04:54.035] [ info] [output:es:es.0] thread worker #0 stopped
[2026/04/02 17:04:54.056] [ info] [output:es:es.0] thread worker #1 stopping...
[2026/04/02 17:04:54.057] [ info] [output:es:es.0] thread worker #1 stopped
[2026/04/02 17:04:54.058] [ info] [output:es:es.0] thread worker #2 stopping...
[2026/04/02 17:04:54.058] [ info] [output:es:es.0] thread worker #2 stopped
[2026/04/02 17:04:54.058] [ info] [output:es:es.0] thread worker #3 stopping...
[2026/04/02 17:04:54.058] [ info] [output:es:es.0] thread worker #3 stopped

Upstream:

Fluent Bit v5.0.2
* Copyright (C) 2015-2026 The Fluent Bit Authors
* Fluent Bit is a CNCF graduated project under the Fluent organization
* https://fluentbit.io

______ _                  _    ______ _ _           _____  _____ 
|  ___| |                | |   | ___ (_) |         |  ___||  _  |
| |_  | |_   _  ___ _ __ | |_  | |_/ /_| |_  __   _|___ \ | |/' |
|  _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / /   \ \|  /| |
| |   | | |_| |  __/ | | | |_  | |_/ / | |_   \ V //\__/ /\ |_/ /
\_|   |_|\__,_|\___|_| |_|\__| \____/|_|\__|   \_/ \____(_)\___/


[2026/04/02 17:02:43.323] [ info] Configuration:
[2026/04/02 17:02:43.345] [ info]  flush time     | 10.000000 seconds
[2026/04/02 17:02:43.352] [ info]  grace          | 5 seconds
[2026/04/02 17:02:43.352] [ info]  daemon         | 0
[2026/04/02 17:02:43.353] [ info] ___________
[2026/04/02 17:02:43.353] [ info]  inputs:
[2026/04/02 17:02:43.353] [ info]      dummy
[2026/04/02 17:02:43.353] [ info] ___________
[2026/04/02 17:02:43.354] [ info]  filters:
[2026/04/02 17:02:43.354] [ info] ___________
[2026/04/02 17:02:43.354] [ info]  outputs:
[2026/04/02 17:02:43.354] [ info]      es.0
[2026/04/02 17:02:43.355] [ info] ___________
[2026/04/02 17:02:43.355] [ info]  collectors:
[2026/04/02 17:02:43.422] [ info] [fluent bit] version=5.0.2, commit=fb8617662e, pid=2736553
[2026/04/02 17:02:43.429] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2026/04/02 17:02:43.435] [ info] [storage] ver=1.5.4, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2026/04/02 17:02:43.435] [ info] [simd    ] SSE2
[2026/04/02 17:02:43.435] [ info] [cmetrics] version=2.1.1
[2026/04/02 17:02:43.436] [ info] [ctraces ] version=0.7.1
[2026/04/02 17:02:43.450] [ info] [input:dummy:dummy.0] initializing
[2026/04/02 17:02:43.450] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2026/04/02 17:02:43.452] [debug] [dummy:dummy.0] created event channels: read=21 write=22
[2026/04/02 17:02:43.469] [debug] [es:es.0] created event channels: read=23 write=24
[2026/04/02 17:02:43.480] [debug] [upstream_ha] opening file ./upstream.conf
[2026/04/02 17:02:43.863] [debug] [output:es:es.0] host=127.0.0.1 port=9200 uri=/_bulk index=fluent-bit type=_doc
[2026/04/02 17:02:43.938] [ info] [sp] stream processor started
[2026/04/02 17:02:43.941] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
[2026/04/02 17:02:43.982] [ info] [output:es:es.0] worker #3 started
[2026/04/02 17:02:43.985] [ info] [output:es:es.0] worker #0 started
[2026/04/02 17:02:43.986] [ info] [output:es:es.0] worker #2 started
[2026/04/02 17:02:43.985] [ info] [output:es:es.0] worker #1 started
[2026/04/02 17:02:53.019] [debug] [task] created task=0x98edc90 id=0 OK
[2026/04/02 17:02:53.021] [debug] [output:es:es.0] task_id=0 assigned to thread #0
[2026/04/02 17:02:53.163] [debug] [tls] connection #69 SSL_connect: before SSL initialization
[2026/04/02 17:02:53.253] [debug] [tls] connection #69 SSL_connect: SSLv3/TLS write client hello
[2026/04/02 17:02:53.257] [debug] [tls] connection #69 WANT_READ
[2026/04/02 17:02:53.262] [debug] [tls] connection #69 SSL_connect: SSLv3/TLS write client hello
[2026/04/02 17:02:53.393] [debug] [tls] connection #69 SSL_connect: SSLv3/TLS read server hello
[2026/04/02 17:02:53.395] [debug] [tls] connection #69 SSL_connect: TLSv1.3 read encrypted extensions
[2026/04/02 17:02:53.564] [debug] [tls] connection #69 SSL_connect: SSLv3/TLS read server certificate
[2026/04/02 17:02:53.581] [debug] [tls] connection #69 SSL_connect: TLSv1.3 read server certificate verify
[2026/04/02 17:02:53.586] [debug] [tls] connection #69 SSL_connect: SSLv3/TLS read finished
[2026/04/02 17:02:53.587] [debug] [tls] connection #69 SSL_connect: SSLv3/TLS write change cipher spec
[2026/04/02 17:02:53.596] [debug] [tls] connection #69 SSL_connect: SSLv3/TLS write finished
[2026/04/02 17:02:53.598] [debug] [upstream] KA connection #69 to localhost:9201 is connected
[2026/04/02 17:02:53.615] [debug] [out_es] converted_size is 0
[2026/04/02 17:02:53.618] [debug] [out_es] converted_size is 0
[2026/04/02 17:02:53.623] [debug] [http_client] not using http_proxy for header
[2026/04/02 17:02:53.646] [debug] [tls] connection #69 SSL_connect: SSL negotiation finished successfully
[2026/04/02 17:02:53.646] [debug] [tls] connection #69 SSL_connect: SSL negotiation finished successfully
[2026/04/02 17:02:53.652] [debug] [tls] connection #69 SSL_connect: SSLv3/TLS read server session ticket
[2026/04/02 17:02:53.683] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2026/04/02 17:02:53.699] [debug] [output:es:es.0] Elasticsearch response
{"errors":false,"took":0,"items":[{"create":{"_index":"fluent-bit","_id":"f-43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":432,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"gO43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":433,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"ge43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":434,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"gu43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":435,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"g-43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":436,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"hO43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":437,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"he43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":438,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"hu43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":439,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"h-43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":440,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"iO43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":441,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"ie43TZ0BckMZbRRnduxO","_version":1,"
result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":442,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"iu43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":443,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"i-43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":444,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"jO43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":445,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"je43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":446,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"ju43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":447,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"j-43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":448,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"kO43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":449,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"ke43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":450,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"ku43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":451,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"k-43TZ0BckMZbRRnduxO","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":452,"_primary_term":
[2026/04/02 17:02:53.704] [debug] [upstream] KA connection #69 to localhost:9201 is now available
[2026/04/02 17:02:53.708] [debug] [out flush] cb_destroy coro_id=0
[2026/04/02 17:02:53.720] [debug] [task] destroy task=0x98edc90 (task_id=0)
^C[2026/04/02 17:02:55] [engine] caught signal (SIGINT)
[2026/04/02 17:02:55.054] [debug] [task] created task=0x9cc00c0 id=0 OK
[2026/04/02 17:02:55.055] [debug] [output:es:es.0] task_id=0 assigned to thread #1
[2026/04/02 17:02:55.056] [ warn] [engine] service will shutdown in max 5 seconds
[2026/04/02 17:02:55.056] [debug] [engine] task 0 already scheduled to run, not re-scheduling it.
[2026/04/02 17:02:55.057] [ info] [engine] pausing all inputs..
[2026/04/02 17:02:55.058] [ info] [input] pausing dummy.0
[2026/04/02 17:02:55.066] [debug] [tls] connection #70 SSL_connect: before SSL initialization
[2026/04/02 17:02:55.069] [debug] [tls] connection #70 SSL_connect: SSLv3/TLS write client hello
[2026/04/02 17:02:55.069] [debug] [tls] connection #70 WANT_READ
[2026/04/02 17:02:55.073] [debug] [tls] connection #70 SSL_connect: SSLv3/TLS write client hello
[2026/04/02 17:02:55.077] [debug] [tls] connection #70 SSL_connect: SSLv3/TLS read server hello
[2026/04/02 17:02:55.077] [debug] [tls] connection #70 SSL_connect: TLSv1.3 read encrypted extensions
[2026/04/02 17:02:55.085] [debug] [tls] connection #70 SSL_connect: SSLv3/TLS read server certificate
[2026/04/02 17:02:55.087] [debug] [tls] connection #70 SSL_connect: TLSv1.3 read server certificate verify
[2026/04/02 17:02:55.088] [debug] [tls] connection #70 SSL_connect: SSLv3/TLS read finished
[2026/04/02 17:02:55.088] [debug] [tls] connection #70 SSL_connect: SSLv3/TLS write change cipher spec
[2026/04/02 17:02:55.090] [debug] [tls] connection #70 SSL_connect: SSLv3/TLS write finished
[2026/04/02 17:02:55.090] [debug] [upstream] KA connection #70 to localhost:9202 is connected
[2026/04/02 17:02:55.092] [debug] [http_client] not using http_proxy for header
[2026/04/02 17:02:55.093] [debug] [tls] connection #70 SSL_connect: SSL negotiation finished successfully
[2026/04/02 17:02:55.093] [debug] [tls] connection #70 SSL_connect: SSL negotiation finished successfully
[2026/04/02 17:02:55.094] [debug] [tls] connection #70 SSL_connect: SSLv3/TLS read server session ticket
[2026/04/02 17:02:55.125] [debug] [output:es:es.0] HTTP Status=200 URI=/_bulk
[2026/04/02 17:02:55.126] [debug] [output:es:es.0] Elasticsearch response
{"errors":false,"took":0,"items":[{"create":{"_index":"fluent-bit","_id":"Kls3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":523,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"K1s3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":524,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"LFs3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":525,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"LVs3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":526,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"Lls3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":527,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"L1s3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":528,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"MFs3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":529,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"MVs3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":530,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"Mls3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":531,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"M1s3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":532,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"NFs3TZ0BfsFs5OVhfD8C","_version":1,"
result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":533,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"NVs3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":534,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"Nls3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":535,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"N1s3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":536,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"OFs3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":537,"_primary_term":1,"status":201}},{"create":{"_index":"fluent-bit","_id":"OVs3TZ0BfsFs5OVhfD8C","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":538,"_primary_term":1,"status":201}}]}
[2026/04/02 17:02:55.127] [debug] [upstream] KA connection #70 to localhost:9202 is now available
[2026/04/02 17:02:55.127] [debug] [out flush] cb_destroy coro_id=0
[2026/04/02 17:02:55.127] [debug] [task] destroy task=0x9cc00c0 (task_id=0)
[2026/04/02 17:02:56.013] [ info] [engine] service has stopped (0 pending tasks)
[2026/04/02 17:02:56.014] [ info] [input] pausing dummy.0
[2026/04/02 17:02:56.018] [ info] [output:es:es.0] thread worker #0 stopping...
[2026/04/02 17:02:56.029] [debug] [tls] connection #69 SSL3 alert write:warning:close notify
[2026/04/02 17:02:56.043] [ info] [output:es:es.0] thread worker #0 stopped
[2026/04/02 17:02:56.064] [ info] [output:es:es.0] thread worker #1 stopping...
[2026/04/02 17:02:56.065] [debug] [tls] connection #70 SSL3 alert write:warning:close notify
[2026/04/02 17:02:56.065] [ info] [output:es:es.0] thread worker #1 stopped
[2026/04/02 17:02:56.066] [ info] [output:es:es.0] thread worker #2 stopping...
[2026/04/02 17:02:56.066] [ info] [output:es:es.0] thread worker #2 stopped
[2026/04/02 17:02:56.066] [ info] [output:es:es.0] thread worker #3 stopping...
[2026/04/02 17:02:56.066] [ info] [output:es:es.0] thread worker #3 stopped
  • Attached Valgrind output that shows no leaks or memory corruption was found

Normal:

==2737294== 
==2737294== HEAP SUMMARY:
==2737294==     in use at exit: 0 bytes in 0 blocks
==2737294==   total heap usage: 20,676 allocs, 20,676 frees, 9,024,073 bytes allocated
==2737294== 
==2737294== All heap blocks were freed -- no leaks are possible
==2737294== 
==2737294== For lists of detected and suppressed errors, rerun with: -s
==2737294== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Upstream:

==2736553== 
==2736553== HEAP SUMMARY:
==2736553==     in use at exit: 0 bytes in 0 blocks
==2736553==   total heap usage: 24,036 allocs, 24,036 frees, 9,949,052 bytes allocated
==2736553== 
==2736553== All heap blocks were freed -- no leaks are possible
==2736553== 
==2736553== For lists of detected and suppressed errors, rerun with: -s
==2736553== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

If this is a change to packaging of containers or native binaries then please confirm it works for all targets.

  • Run local packaging test showing all targets (including any new ones) build.
  • Set ok-package-test label to test for all targets (requires maintainer to do).

Documentation

  • Documentation required for this feature

Backporting

  • Backport to latest stable release.

Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.

Summary by CodeRabbit

  • New Features

    • Elasticsearch output: added upstream (HA) support with per-node routing and per-request bulk URI composition.
    • Public SDS API: introduced a lightweight string-view type and a constructor to create SDS from a view.
  • Behavior Changes

    • Node-scoped authentication now takes precedence over instance-level credentials; logs and retries report composed per-request URIs.
  • Tests

    • Added unit and runtime tests for upstream/HA and SDS view behaviors.

Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>

coderabbitai bot commented Apr 1, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough


A non-owning SDS string view type (flb_sds_view_t) and constructor (flb_sds_create_from_view) were added. The Elasticsearch output gained upstream HA support: per-node property resolution, per-request bulk-URI composition, node-scoped credentials, an upstream config option, and new unit/integration tests.

Changes

  • SDS View / API (include/fluent-bit/flb_sds.h, src/flb_sds.c): Introduce flb_sds_view_t and inline helpers; add flb_sds_create_from_view() to construct an owning SDS from a view.
  • Elasticsearch Plugin, core (plugins/out_es/es.c): Add an HA-aware flush: select the HA node, resolve node-scoped properties (path/pipeline/credentials) with fallback, compose a per-request _bulk URI, use the composed URI in the HTTP client, adjust logging and cleanup, guard client/signature/URI destruction, and extend the config_map with upstream.
  • Elasticsearch Plugin, header (plugins/out_es/es.h): Add int ha_mode and struct flb_upstream_ha *ha to struct flb_elasticsearch; include flb_upstream_ha.h.
  • Elasticsearch Plugin, config (plugins/out_es/es_conf.c): Support the upstream option: load the HA upstream from a file, enable HA mode on success, associate the output with the HA upstream, and conditionally destroy the HA vs. non-HA upstream on teardown.
  • SDS Unit Tests (tests/internal/sds.c): Add test_sds_view() to validate view creation from a raw buffer and from an owned SDS, and creation of an owning SDS from a view; register the test.
  • Elasticsearch Runtime Tests (tests/runtime/out_elasticsearch.c): Add temp upstream file helpers and six integration tests exercising upstream-driven behaviors (write_operation, null_index, index_type, logstash_format, replace_dots, id_key); register the tests and clean up the temp file.

Sequence Diagram

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant ES as ES Output
    participant HA as Upstream HA
    participant Conn as Upstream Conn
    participant Node as Elasticsearch Node

    Client->>ES: flush trigger
    ES->>HA: select active node (flb_upstream_ha_node_get)
    HA-->>ES: node info (properties)
    ES->>ES: resolve node props (path, pipeline, creds)
    ES->>ES: es_compose_bulk_uri()
    ES->>HA: get upstream connection for node
    HA-->>Conn: connection
    ES->>Conn: HTTP POST composed /_bulk[?pipeline] with headers
    Conn->>Node: send request
    Node-->>Conn: response
    Conn-->>ES: response
    ES->>ES: destroy composed uri/signature as needed
    ES-->>Client: flush result
```

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰
I nibble bytes and stitch a view so slim,
Nodes line up for hops on high-availability whim.
Per-node paths and keys I neatly bind,
Tests hop through fields to check what they find.
A tiny rabbit cheers — code neat and prim!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required 80.00% threshold. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title directly summarizes the main change: implementing Elasticsearch upstream with SDS view abstraction across multiple components. |


@cosmo0920
Contributor Author

Hi @mabrarov,
I implemented a borrowed SDS type, `flb_sds_view`, to simplify your patch.
With your testing repository, valgrind reports no errors.
Does this patch work in your environment and behave as you intended?


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7408ae174b

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
tests/runtime/out_elasticsearch.c (1)

1092-1118: Add one upstream test that goes through cb_es_flush().

All of the new cases use the formatter hook, so they stop at out_es_plugin.test_formatter.callback and never exercise the new HA branch in cb_es_flush() (flb_upstream_ha_node_get(), per-node URI composition, or node-scoped auth). A single response/integration case with two upstream nodes would cover the behavior this PR adds.

Based on learnings: Add or update tests for behavior changes, especially protocol parsing and encoder/decoder paths.

Also applies to: 1121-1326

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/runtime/out_elasticsearch.c` around lines 1092 - 1118, Add an
integration test that exercises cb_es_flush() end-to-end (including
flb_upstream_ha_node_get, per-node URI composition and node-scoped auth) by
creating a context with create_upstream_test_ctx but without installing the
formatter hook (avoid out_es_plugin.test_formatter.callback) so the output
plugin performs a real flush; specifically, create an upstream file with two
nodes, configure the "es" output to use that upstream, send records and assert
the request goes through the HA branch (e.g., by mocking responses for both
nodes or asserting composed URIs and auth for the selected node), ensuring the
new HA behavior in cb_es_flush() is exercised.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@plugins/out_es/es_conf.c`:
- Around line 249-260: flb_upstream_ha_from_file() may return an HA object with
zero nodes which later causes cb_es_flush() to loop on FLB_RETRY when
flb_upstream_ha_node_get() returns NULL; after creating ctx->ha in the upstream
branch check that the HA contains at least one node (e.g., via the HA API or
node-count accessor) and if empty log an error (use flb_plg_error with context
like "upstream file contains no [NODE] entries"), call flb_es_conf_destroy(ctx)
and return NULL; ensure this validation occurs immediately after
flb_upstream_ha_from_file() and before flb_output_upstream_ha_set(ctx->ha, ins)
so empty upstream files are rejected during init.
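
For reference, the upstream file loaded by `flb_upstream_ha_from_file()` follows Fluent Bit's standard upstream format; a minimal two-node example (names and hostnames hypothetical):

```text
[UPSTREAM]
    name es-balancing

[NODE]
    name node-1
    host es-node-1
    port 9200

[NODE]
    name node-2
    host es-node-2
    port 9200
```

A file with an `[UPSTREAM]` section but no `[NODE]` entries is exactly the empty case this comment asks `es_conf.c` to reject at init.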

In `@plugins/out_es/es.c`:
- Around line 885-895: The code currently assigns the result of
flb_sds_printf(...) directly back into header_line and then calls
flb_sds_len(header_line), which dereferences header_line if flb_sds_printf
returns NULL and the original SDS is lost; change this by printing into a
temporary flb_sds_t (e.g., tmp_header) and only assign it to header_line after
verifying tmp_header != NULL, and always check tmp_header for NULL before
calling flb_sds_len or using header_line (update call sites around
flb_sds_printf and flb_sds_len that reference header_line accordingly).
- Around line 969-975: The current check uses
flb_sds_view_is_empty(http_user/http_passwd) which treats "" the same as a
missing property and drops Basic Auth when the password is intentionally empty;
change the condition to check for presence by testing the .buf pointer returned
by es_get_property_view instead (e.g. if (http_user.buf != NULL &&
http_passwd.buf != NULL) call flb_http_basic_auth), so an empty-string password
still results in an Authorization header; update the code around
es_get_property_view/http_user/http_passwd and the flb_http_basic_auth call
accordingly.

In `@src/flb_sds.c`:
- Around line 92-95: flb_sds_create_from_view currently passes view.len (size_t)
directly into flb_sds_create_len(int) causing truncation for lengths > INT_MAX;
update flb_sds_create_from_view to check that view.len <= INT_MAX before calling
flb_sds_create_len and handle the overflow case (e.g., return NULL or propagate
an error) instead of narrowing silently, ensuring any error path is
documented/consistent with other SDS constructors.

---

Nitpick comments:
In `@tests/runtime/out_elasticsearch.c`:
- Around line 1092-1118: Add an integration test that exercises cb_es_flush()
end-to-end (including flb_upstream_ha_node_get, per-node URI composition and
node-scoped auth) by creating a context with create_upstream_test_ctx but
without installing the formatter hook (avoid
out_es_plugin.test_formatter.callback) so the output plugin performs a real
flush; specifically, create an upstream file with two nodes, configure the "es"
output to use that upstream, send records and assert the request goes through
the HA branch (e.g., by mocking responses for both nodes or asserting composed
URIs and auth for the selected node), ensuring the new HA behavior in
cb_es_flush() is exercised.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 619235e1-d33e-4dba-b62c-1648782b2631

📥 Commits

Reviewing files that changed from the base of the PR and between 66ffbe4 and ce5b402.

📒 Files selected for processing (7)
  • include/fluent-bit/flb_sds.h
  • plugins/out_es/es.c
  • plugins/out_es/es.h
  • plugins/out_es/es_conf.c
  • src/flb_sds.c
  • tests/internal/sds.c
  • tests/runtime/out_elasticsearch.c


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
plugins/out_es/es.c (1)

91-104: Apply the same flb_sds_printf null-check pattern used for header_line.

If flb_sds_printf fails and returns NULL, the buffer allocated by flb_sds_create_size may be leaked (depending on internal SDS behavior), and uri is left in an inconsistent state. The same defensive pattern applied at lines 987-995 should be used here.

♻️ Suggested fix
     if (!flb_sds_view_is_empty(pipeline)) {
         uri = flb_sds_create_size(path.len + pipeline.len + 19);
         if (uri == NULL) {
             flb_errno();
             return NULL;
         }
 
-        uri = flb_sds_printf(&uri, "%.*s/_bulk/?pipeline=%.*s",
-                             (int) path.len, path.buf,
-                             (int) pipeline.len, pipeline.buf);
+        if (flb_sds_printf(&uri, "%.*s/_bulk/?pipeline=%.*s",
+                           (int) path.len, path.buf,
+                           (int) pipeline.len, pipeline.buf) == NULL) {
+            flb_sds_destroy(uri);
+            return NULL;
+        }
     }
     else {
         uri = flb_sds_create_size(path.len + 8);
         if (uri == NULL) {
             flb_errno();
             return NULL;
         }
 
-        uri = flb_sds_printf(&uri, "%.*s/_bulk",
-                             (int) path.len, path.buf);
+        if (flb_sds_printf(&uri, "%.*s/_bulk",
+                           (int) path.len, path.buf) == NULL) {
+            flb_sds_destroy(uri);
+            return NULL;
+        }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugins/out_es/es.c` around lines 91 - 104, The flb_sds_printf call that
populates uri can return NULL and leave the previously allocated SDS buffer
leaked or uri in an inconsistent state; replicate the defensive pattern used for
header_line: after each flb_sds_printf(...) assignment to uri check if uri ==
NULL, call flb_errno(), free/destroy the allocated uri SDS (using the same
cleanup used in the header_line fix), and return NULL so no leak or invalid
state remains; apply this to both branches (the pipeline and non-pipeline uri
constructions).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@plugins/out_es/es.c`:
- Around line 954-960: es_compose_bulk_uri() can return NULL and code jumps to
the retry label while the local http client pointer c may still be NULL; update
the retry handling to guard the flb_http_client_destroy(c) call by checking c !=
NULL before calling flb_http_client_destroy to avoid a NULL dereference (look
for the retry: label, the variable c, the es_compose_bulk_uri() call and the
flb_http_client_destroy() invocation).

---

Nitpick comments:
In `@plugins/out_es/es.c`:
- Around line 91-104: The flb_sds_printf call that populates uri can return NULL
and leave the previously allocated SDS buffer leaked or uri in an inconsistent
state; replicate the defensive pattern used for header_line: after each
flb_sds_printf(...) assignment to uri check if uri == NULL, call flb_errno(),
free/destroy the allocated uri SDS (using the same cleanup used in the
header_line fix), and return NULL so no leak or invalid state remains; apply
this to both branches (the pipeline and non-pipeline uri constructions).
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0c169969-2f0b-43c5-9b8a-6e9d6c48eb0e

📥 Commits

Reviewing files that changed from the base of the PR and between ce5b402 and 8d0d51d.

📒 Files selected for processing (2)
  • plugins/out_es/es.c
  • plugins/out_es/es_conf.c
🚧 Files skipped from review as they are similar to previous changes (1)
  • plugins/out_es/es_conf.c


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@plugins/out_es/es.c`:
- Around line 970-980: The current auth selection checks instance/cloud basic
auth before node-local API key, causing nodes that only set http_api_key to
incorrectly inherit http_user/http_passwd; change the order in the auth decision
(the block using es_get_property_view for http_user, http_passwd, http_api_key)
to prefer node-local credentials first: if http_user and http_passwd (from
es_get_property_view) then call flb_http_basic_auth with those; else if
http_api_key is non-empty (flb_sds_view_is_empty) then use the node API key
handling; only if neither node-local basic nor node API key exist fall back to
ctx->cloud_user/ctx->cloud_passwd and call flb_http_basic_auth with cloud
credentials.
- Around line 91-93: The formatted URI currently inserts an extra '/' before the
query string when a pipeline is present; change the flb_sds_printf call that
builds uri (the line using "%.*s/_bulk/?pipeline=%.*s") to remove that slash so
it uses "%.*s/_bulk?pipeline=%.*s" and thus preserves the intended _bulk path
regardless of pipeline value (refer to variables uri, path, pipeline and the
flb_sds_printf invocation to locate the change).
- Around line 1103-1114: The retry path currently leaks the SigV4 `signature`
allocated by add_aws_auth(); ensure you free it before jumping to `retry` and on
the error/cleanup path: check `signature` for non-NULL and call the appropriate
destroy function (e.g. flb_sds_destroy(signature)) and then set `signature =
NULL`; update the cleanup block that runs on retries (the code around the
`retry:` label and where add_aws_auth() is called) to always destroy `signature`
to prevent unbounded leaks.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b92e4300-340b-4b2f-8774-f3eaed0d9e42

📥 Commits

Reviewing files that changed from the base of the PR and between 8d0d51d and 4da9ce6.

📒 Files selected for processing (1)
  • plugins/out_es/es.c

cosmo0920 and others added 7 commits April 3, 2026 14:52
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Co-authored-by: Marat Abrarov <abrarov@gmail.com>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Co-authored-by: Marat Abrarov <abrarov@gmail.com>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
```c
value = flb_upstream_node_get_property(property, node);
if (value != NULL) {
    ret = flb_utils_bool(value);
    if (ret != -1) {
```
Contributor

@mabrarov mabrarov Apr 4, 2026


When a boolean property is set in the plugin configuration rather than in the upstream node configuration, it is handled differently; refer to

`ret = flb_utils_bool(kv->val);`

which is used in `flb_output_config_map_set`, which in turn is used in `flb_es_conf_create`. There, an unsupported value for a boolean property fails the plugin configuration and, if I understand Fluent Bit correctly, prevents Fluent Bit from starting. This question (about configuration validation) is also described in #11647 (comment).
