aws_msk_iam: add AWS MSK IAM authentication support #11270
kalavt wants to merge 10 commits into fluent:master
Conversation
📝 Walkthrough
Replaces the cluster-ARN based MSK IAM configuration with brokers + region, adds a TLS-backed persistent AWS credential provider with mutex-protected credential refresh, and drives MSK IAM via the `rdkafka.sasl.mechanism` setting.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant FB as Fluent Bit
    participant LR as librdkafka
    participant MSK as flb_aws_msk_iam
    participant AWS as AWS Provider
    Note over FB: Init reads `rdkafka.sasl.mechanism`
    FB->>LR: create rd_kafka_conf()
    FB->>FB: detect sasl.mechanism == "aws_msk_iam"
    FB->>MSK: flb_aws_msk_iam_register_oauth_cb(config, kconf, opaque, brokers, region)
    MSK->>MSK: extract_region_from_broker / init TLS / create provider / init mutex
    MSK-->>FB: register callback handle
    Note over LR,MSK: Background token refresh
    LR->>MSK: oauthbearer_token_refresh_cb(request)
    MSK->>MSK: lock mutex
    MSK->>AWS: provider->refresh_credentials()
    AWS-->>MSK: credentials
    MSK->>MSK: build_msk_iam_payload(host, credentials)
    MSK-->>LR: rd_kafka_oauthbearer_set_token(token, lifetime=300s)
    MSK->>MSK: unlock mutex
```
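For orientation, here is a minimal sketch of the refresh leg of this diagram using librdkafka's public OAUTHBEARER token API. It is not the PR's implementation: `struct msk_iam_ctx` and `build_msk_iam_token()` are hypothetical placeholders for the provider call and the SigV4 presigned-URL/base64url work done by `flb_aws_msk_iam`.

```c
#include <librdkafka/rdkafka.h>
#include <pthread.h>
#include <stdlib.h>
#include <time.h>

struct msk_iam_ctx {
    pthread_mutex_t lock;                  /* serializes provider access */
};

extern char *build_msk_iam_token(struct msk_iam_ctx *ctx); /* hypothetical */

/* Callback registered via rd_kafka_conf_set_oauthbearer_token_refresh_cb() */
static void oauthbearer_refresh_cb(rd_kafka_t *rk,
                                   const char *oauthbearer_config,
                                   void *opaque)
{
    struct msk_iam_ctx *ctx = opaque;
    char errstr[512];
    char *token;
    int64_t lifetime_ms;

    (void) oauthbearer_config;

    pthread_mutex_lock(&ctx->lock);
    token = build_msk_iam_token(ctx);      /* presigned URL -> base64url */
    pthread_mutex_unlock(&ctx->lock);

    if (!token) {
        rd_kafka_oauthbearer_set_token_failure(rk, "credential refresh failed");
        return;
    }

    /* librdkafka expects an absolute expiry time in ms since the epoch;
     * 300 s matches the lifetime shown in the diagram */
    lifetime_ms = ((int64_t) time(NULL) + 300) * 1000;

    if (rd_kafka_oauthbearer_set_token(rk, token, lifetime_ms, "admin",
                                       NULL, 0, errstr, sizeof(errstr))
        != RD_KAFKA_RESP_ERR_NO_ERROR) {
        rd_kafka_oauthbearer_set_token_failure(rk, errstr);
    }
    free(token);
}
```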
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Actionable comments posted: 2
🧹 Nitpick comments (1)
plugins/out_kafka/kafka_config.c (1)
74-82: Consider checking return values from flb_output_set_property.

The calls to flb_output_set_property() at lines 74 and 81 don't check return values. While unlikely to fail in practice, property setting can fail on allocation errors. For robustness:

```diff
- flb_output_set_property(ins, "rdkafka.sasl.mechanism", "OAUTHBEARER");
+ if (flb_output_set_property(ins, "rdkafka.sasl.mechanism", "OAUTHBEARER") < 0) {
+     flb_plg_error(ins, "failed to set OAUTHBEARER SASL mechanism");
+     flb_sds_destroy(ctx->sasl_mechanism);
+     flb_free(ctx);
+     return NULL;
+ }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- include/fluent-bit/aws/flb_aws_msk_iam.h (1 hunks)
- plugins/in_kafka/in_kafka.c (4 hunks)
- plugins/in_kafka/in_kafka.h (1 hunks)
- plugins/out_kafka/kafka.c (0 hunks)
- plugins/out_kafka/kafka_config.c (4 hunks)
- plugins/out_kafka/kafka_config.h (1 hunks)
- src/aws/flb_aws_credentials_ec2.c (1 hunks)
- src/aws/flb_aws_credentials_profile.c (1 hunks)
- src/aws/flb_aws_credentials_sts.c (2 hunks)
- src/aws/flb_aws_msk_iam.c (12 hunks)
- src/flb_kafka.c (1 hunks)
💤 Files with no reviewable changes (1)
- plugins/out_kafka/kafka.c
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit, the correct CMake flag for using system librdkafka is `FLB_PREFER_SYSTEM_LIB_KAFKA=ON`.
Applied to files:
src/flb_kafka.c, src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit CMakeLists.txt, the system library preference flags are defined as FLB_PREFER_SYSTEM_LIB_ZSTD and FLB_PREFER_SYSTEM_LIB_KAFKA with the FLB_ prefix.
Applied to files:
src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-29T06:25:27.250Z
Learnt from: shadowshot-x
Repo: fluent/fluent-bit PR: 10794
File: tests/internal/aws_compress.c:93-107
Timestamp: 2025-08-29T06:25:27.250Z
Learning: In Fluent Bit, ZSTD compression is enabled by default and is treated as a core dependency, not requiring conditional compilation guards like `#ifdef FLB_HAVE_ZSTD`. Unlike some other optional components such as ARROW/PARQUET (which use `#ifdef FLB_HAVE_ARROW` guards), ZSTD support is always available and doesn't need build-time conditionals. ZSTD headers are included directly without guards across multiple plugins and core components.
Applied to files:
src/aws/flb_aws_msk_iam.c
🧬 Code graph analysis (3)
plugins/in_kafka/in_kafka.c (4)
- src/flb_input.c (2): flb_input_get_property (776-780), flb_input_set_property (557-774)
- src/flb_sds.c (2): flb_sds_create (78-90), flb_sds_destroy (389-399)
- src/aws/flb_aws_msk_iam.c (2): flb_aws_msk_iam_register_oauth_cb (628-761), flb_aws_msk_iam_destroy (764-786)
- src/flb_kafka.c (1): flb_kafka_opaque_destroy (233-240)
plugins/out_kafka/kafka_config.c (2)
- src/flb_output.c (2): flb_output_get_property (1108-1111), flb_output_set_property (843-1068)
- src/aws/flb_aws_msk_iam.c (1): flb_aws_msk_iam_register_oauth_cb (628-761)
include/fluent-bit/aws/flb_aws_msk_iam.h (1)
- src/aws/flb_aws_msk_iam.c (1): flb_aws_msk_iam_register_oauth_cb (628-761)
🪛 Cppcheck (2.18.0)
Informational findings only, repeated for src/aws/flb_aws_credentials_sts.c, src/flb_kafka.c, src/aws/flb_aws_credentials_ec2.c, plugins/in_kafka/in_kafka.c, src/aws/flb_aws_credentials_profile.c, plugins/out_kafka/kafka_config.c, and src/aws/flb_aws_msk_iam.c:
- missingIncludeSystem: include files not found by the analyzer
- normalCheckLevelMaxBranches: branch analysis limited; use --check-level=exhaustive to analyze all branches
- toomanyconfigs: only 12 #ifdef configurations checked; use --force to check all configurations
🔇 Additional comments (24)
src/aws/flb_aws_credentials_ec2.c (1)
128-139: Whitespace-only change acknowledged. This is a minor formatting change adding a blank line after the debug log. No functional impact.
src/aws/flb_aws_credentials_sts.c (1)
173-186: Formatting improvements enhance code consistency. The added blank lines after debug statements in refresh_fn_sts and refresh_fn_eks align these functions with the existing style in init_fn_sts, init_fn_eks, get_credentials_fn_sts, and get_credentials_fn_eks. This improves readability by separating logging statements from control flow logic.
Also applies to: 478-489
src/aws/flb_aws_credentials_profile.c (1)
664-674: LGTM! Log level adjustment is appropriate. Downgrading the log level for a missing credentials file (ENOENT) to debug is correct. A missing ~/.aws/credentials file is not an error when other credential sources (EC2/ECS metadata, config file with credential_process, environment variables) may be available. This aligns with the similar handling for the config file at line 623.
src/flb_kafka.c (1)
96-100: LGTM! Critical bug fix for proper resource cleanup. Using rd_kafka_conf_destroy() is the correct way to clean up a rd_kafka_conf_t * object allocated by rd_kafka_conf_new(). The previous use of flb_free() would have caused memory corruption or leaks since librdkafka's configuration objects have internal structures that require proper destruction.
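For reference, a minimal sketch of the librdkafka ownership rule this fix relies on: a conf created with rd_kafka_conf_new() must be released with rd_kafka_conf_destroy() unless rd_kafka_new() succeeds, in which case the client handle takes ownership. Broker configuration is omitted for brevity; this is illustrative, not Fluent Bit's code.

```c
#include <librdkafka/rdkafka.h>
#include <stddef.h>

static rd_kafka_t *create_producer(void)
{
    char errstr[512];
    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    rd_kafka_t *rk;

    rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
    if (!rk) {
        /* creation failed: conf is still ours, release it properly */
        rd_kafka_conf_destroy(conf);
        return NULL;
    }
    /* success: librdkafka owns conf now; do not destroy or reuse it */
    return rk;
}
```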
plugins/in_kafka/in_kafka.h (1)
57-60: LGTM! Clean API simplification. The explicit aws_msk_iam flag replaces the removed aws_msk_iam_cluster_arn field, aligning with the PR's shift to broker-based region detection. The flag clearly indicates user intent via rdkafka.sasl.mechanism=aws_msk_iam, making the opt-in explicit rather than implicit.
plugins/out_kafka/kafka_config.h (1)
128-137: LGTM! Consistent with in_kafka changes. The aws_msk_iam flag and sasl_mechanism field additions mirror the in_kafka.h structure, maintaining consistency across Kafka input and output plugins. The explicit flag design is cleaner than the previous ARN-based approach.
include/fluent-bit/aws/flb_aws_msk_iam.h (1)
37-49: LGTM! API simplification with clear documentation. The updated signature removes cluster_arn in favor of brokers, enabling automatic region extraction from broker addresses. This simplifies user configuration—users no longer need to provide the cluster ARN. The documentation clearly explains each parameter's purpose.
plugins/out_kafka/kafka_config.c (4)
209-218: LGTM! Essential for OAuth token refresh on idle connections. Enabling the SASL queue before producer creation is correct. This allows librdkafka's background thread to handle OAuth token refresh even when rd_kafka_poll() isn't called frequently, preventing authentication failures on idle connections.
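A minimal sketch of this setup pattern with librdkafka's public API (not the plugin's code): enable the SASL queue on the conf before creating the handle, then route SASL events to librdkafka's background thread so OAUTHBEARER tokens refresh without frequent rd_kafka_poll() calls.

```c
#include <librdkafka/rdkafka.h>
#include <stdio.h>

static rd_kafka_t *create_with_background_sasl(rd_kafka_conf_t *conf)
{
    char errstr[512];
    rd_kafka_t *rk;
    rd_kafka_error_t *error;

    rd_kafka_conf_enable_sasl_queue(conf, 1);   /* before rd_kafka_new() */

    rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
    if (!rk) {
        rd_kafka_conf_destroy(conf);            /* still owned by us */
        return NULL;
    }

    error = rd_kafka_sasl_background_callbacks_enable(rk);
    if (error) {
        /* non-fatal: token refresh may stall on idle connections */
        fprintf(stderr, "SASL background callbacks: %s\n",
                rd_kafka_error_string(error));
        rd_kafka_error_destroy(error);
    }
    return rk;
}
```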
220-250: LGTM! Well-structured MSK IAM registration with proper validation. Good defensive checks: explicit flag, OAUTHBEARER mechanism, and MSK broker patterns must all be present. The callback registration failure correctly triggers cleanup and return.
One observation: the sasl.oauthbearer.config error at lines 243-247 only logs but doesn't fail. This is likely fine since it's a secondary configuration, but verify this doesn't cause issues with librdkafka's OAUTHBEARER validation.
253-285: LGTM! Correct ownership semantics and background callback handling. Setting ctx->conf = NULL after successful rd_kafka_new() correctly reflects ownership transfer—librdkafka now owns the configuration. The SASL background callback enabling is done post-creation as required, and the non-fatal warning on failure is appropriate (graceful degradation).
344-351: LGTM! Proper two-path cleanup for configuration ownership. The conditional cleanup correctly handles both scenarios:
- Producer created: rd_kafka_destroy() handles both producer and configuration
- Producer creation failed: ctx->conf is still valid and needs explicit rd_kafka_conf_destroy()
This pairs correctly with the ctx->conf = NULL assignment after successful rd_kafka_new().
plugins/in_kafka/in_kafka.c (6)
339-345: LGTM! Opaque context creation and configuration follows proper error handling patterns.
347-356: LGTM! Enabling SASL queue for OAUTHBEARER is the correct approach for background token refresh. The comment clearly explains the benefit for all OAUTHBEARER methods.
358-389: LGTM with minor observation. The MSK IAM OAuth callback registration logic is well-structured. The broker pattern validation (".kafka." or ".kafka-serverless." with ".amazonaws.com") appropriately identifies MSK endpoints.
Note: The sasl.oauthbearer.config setting failure at lines 379-385 only logs an error but doesn't fail initialization. This appears intentional since the principal=admin value is primarily for librdkafka's internal validation and the actual authentication uses the generated token.
kafka_conf = NULLafter successfulrd_kafka_new()correctly prevents double-free- Clear comments document the ownership transfer
- Graceful degradation when SASL background callbacks fail (warn but continue)
The SASL background callback enablement ensures tokens refresh even during idle periods.
486-512: LGTM!Comprehensive error cleanup path:
- Correctly handles the mutual exclusivity between
kafka.rk(owns conf) and standalonekafka_conf- MSK IAM resources cleaned up under proper compile guard
- Proper cleanup order prevents use-after-free
549-559: LGTM!Exit cleanup mirrors the error path cleanup correctly. The cleanup order (MSK IAM → opaque → sasl_mechanism) is appropriate.
src/aws/flb_aws_msk_iam.c (7)
42-55: LGTM! Well-designed structure with proper thread safety considerations. The 5-minute token lifetime aligns with AWS MSK IAM standards, and the mutex protects credential provider access during concurrent refresh callbacks.
214-268: LGTM! Clean function signature with explicit credential passing (rather than fetching internally). Input validation is thorough with informative error messages.
432-478: LGTM! The presigned URL construction and Base64 URL encoding are implemented correctly:
- User-Agent parameter added for identification
- Base64 URL encoding properly converts + → -, / → _, and strips padding
- Memory cleanup is thorough in both success and error paths
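For illustration, a minimal standalone sketch of that base64 → base64url conversion (translate '+' to '-', '/' to '_', drop '=' padding); this is not the reviewed implementation.

```c
#include <stdio.h>
#include <string.h>

/* Convert a standard-base64 string to base64url in place */
static void base64_to_base64url(char *s)
{
    size_t i, len = strlen(s);

    for (i = 0; i < len; i++) {
        if (s[i] == '+') {
            s[i] = '-';
        }
        else if (s[i] == '/') {
            s[i] = '_';
        }
    }
    /* strip trailing '=' padding */
    while (len > 0 && s[len - 1] == '=') {
        s[--len] = '\0';
    }
}

int main(void)
{
    char buf[] = "ab+/cd==";
    base64_to_base64url(buf);
    printf("%s\n", buf);   /* prints: ab-_cd */
    return 0;
}
```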
555-612: LGTM! Excellent concurrency handling:
- Mutex protects credential provider access during refresh/get_credentials
- Lock held for minimal duration (released immediately after getting credentials)
- Credentials destroyed after use (security best practice)
The explicit vtable calls (provider_vtable->refresh, provider_vtable->get_credentials) assume the provider is valid, which is guaranteed by the context lifecycle.
763-785: LGTM! Proper cleanup order respects dependencies:
- Provider (uses TLS)
- TLS
- Region string
- Mutex
- Context struct
The unconditional pthread_mutex_destroy is safe since a context only reaches this function if initialization succeeded (including mutex init).
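A short sketch of that reverse-dependency teardown order; the struct and field names are illustrative placeholders, not Fluent Bit's actual types.

```c
#include <pthread.h>
#include <stdlib.h>

struct iam_ctx {
    void *provider;                 /* depends on tls */
    void *tls;
    char *region;
    pthread_mutex_t lock;
};

static void iam_ctx_destroy(struct iam_ctx *ctx)
{
    if (!ctx) {
        return;
    }
    free(ctx->provider);                 /* 1. provider (uses TLS) */
    free(ctx->tls);                      /* 2. TLS */
    free(ctx->region);                   /* 3. region string */
    pthread_mutex_destroy(&ctx->lock);   /* 4. mutex */
    free(ctx);                           /* 5. context struct itself */
}
```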
519-551: LGTM! Host buffer sizing is safe - the maximum formatted string length (~63 characters) is well within the 256-byte buffer, especially given the 32-character region length limit enforced in extract_region_from_broker.
728-738: Verify provider initialization sequence order. The code calls sync() before init(), but Fluent Bit's documented AWS credential provider lifecycle calls init() at startup before periodic sync() calls. Confirm whether this reversed sequence is intentional for MSK IAM OAuth callback initialization or if init() should be called first.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Still failing our linter: |
Force-pushed from b486cb7 to 2264d52 (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (3)
src/aws/flb_aws_msk_iam.c (2)
137-212: Verify bounds check for VPC endpoint detection. Line 180: The condition if (p >= broker + 5 && ...) uses pointer comparison. While likely correct, using offset comparison if (p - broker >= 5 && ...) would be clearer and more portable, explicitly checking there are at least 5 bytes before p before accessing p - 5. Apply this diff for clarity:
```diff
  /* Check for VPC endpoint format: .vpce.amazonaws.com */
- if (p >= broker + 5 && strncmp(p - 5, ".vpce", 5) == 0) {
+ if (p - broker >= 5 && strncmp(p - 5, ".vpce", 5) == 0) {
      /* For VPC endpoints, region ends at .vpce */
      end = p - 5;
  }
```
Consider adding unit tests for:
- VPC endpoint format (vpce-xxx.kafka.region.vpce.amazonaws.com)
- Brokers with/without ports
- Edge cases near 32-character region limit
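As a starting point for such tests, here is a self-contained sketch of a region-extraction helper over the broker formats mentioned above. It is illustrative only and not the function under review: it strips the port, locates ".amazonaws.com", handles the ".vpce" variant with an offset-based bounds check, and caps the region at 32 characters.

```c
#include <stdio.h>
#include <string.h>

static int extract_region(const char *broker, char *out, size_t out_len)
{
    char host[256];
    const char *colon, *aws, *end, *start;
    size_t host_len, region_len;

    /* copy the host part without the port */
    colon = strchr(broker, ':');
    host_len = colon ? (size_t) (colon - broker) : strlen(broker);
    if (host_len >= sizeof(host)) {
        return -1;
    }
    memcpy(host, broker, host_len);
    host[host_len] = '\0';

    aws = strstr(host, ".amazonaws.com");
    if (!aws) {
        return -1;
    }
    end = aws;
    /* VPC endpoint form: <...>.<region>.vpce.amazonaws.com */
    if (end - host >= 5 && strncmp(end - 5, ".vpce", 5) == 0) {
        end -= 5;
    }
    /* region is the dot-delimited label ending at 'end' */
    start = end;
    while (start > host && start[-1] != '.') {
        start--;
    }
    region_len = (size_t) (end - start);
    if (region_len == 0 || region_len > 32 || region_len >= out_len) {
        return -1;
    }
    memcpy(out, start, region_len);
    out[region_len] = '\0';
    return 0;
}

int main(void)
{
    char region[33];
    const char *cases[] = {
        "b-1.example.c1.kafka.us-east-1.amazonaws.com:9098",
        "boot-xxx.c1.kafka-serverless.eu-west-1.amazonaws.com:9098",
        "vpce-xxx.kafka.ap-southeast-1.vpce.amazonaws.com:9098",
        "localhost:9092",
    };
    for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
        if (extract_region(cases[i], region, sizeof(region)) == 0) {
            printf("%s -> %s\n", cases[i], region);
        }
        else {
            printf("%s -> (no region)\n", cases[i]);
        }
    }
    return 0;
}
```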
701-738: Verify TLS ownership to prevent potential double-free. Lines 702-711 create ctx->cred_tls, which is then passed to flb_standard_chain_provider_create at line 714. If the provider stores this pointer internally (e.g., in provider->cred_tls), then the destroy path at lines 771-776 may cause a double-free: flb_aws_provider_destroy(ctx->provider) would free the TLS handle, and then flb_tls_destroy(ctx->cred_tls) would attempt to free it again. Run the following script to check if the provider stores the TLS pointer:
```bash
#!/bin/bash
# Check if flb_standard_chain_provider_create stores the cred_tls pointer
ast-grep --pattern $'flb_standard_chain_provider_create($$$) { $$$ $PROVIDER->cred_tls = $TLS; $$$ }'
# Also check the provider destroy function
rg -A 10 "flb_aws_provider_destroy|flb_standard_chain_provider_destroy" --type c
```
plugins/in_kafka/in_kafka.c (1)
271-297: Add NULL checks after flb_sds_create calls. Both flb_sds_create calls (lines 274 and 286) can fail and return NULL, but there are no checks. If allocation fails, ctx->sasl_mechanism will be NULL, causing crashes in subsequent strcasecmp calls at lines 279, 353, 361, and 412. Apply this diff to add proper NULL checks:
```diff
     conf = flb_input_get_property("rdkafka.sasl.mechanism", ins);
     if (conf) {
         ctx->sasl_mechanism = flb_sds_create(conf);
+        if (!ctx->sasl_mechanism) {
+            flb_plg_error(ins, "failed to allocate SASL mechanism string");
+            flb_free(ctx);
+            return -1;
+        }
         flb_plg_info(ins, "SASL mechanism configured: %s", ctx->sasl_mechanism);
 #ifdef FLB_HAVE_AWS_MSK_IAM
         /* Check if using aws_msk_iam as SASL mechanism */
         if (strcasecmp(conf, "aws_msk_iam") == 0) {
             /* Mark that user explicitly requested AWS MSK IAM */
             ctx->aws_msk_iam = FLB_TRUE;

             /* Set SASL mechanism to OAUTHBEARER for librdkafka */
             flb_input_set_property(ins, "rdkafka.sasl.mechanism", "OAUTHBEARER");
             flb_sds_destroy(ctx->sasl_mechanism);
             ctx->sasl_mechanism = flb_sds_create("OAUTHBEARER");
+            if (!ctx->sasl_mechanism) {
+                flb_plg_error(ins, "failed to allocate SASL mechanism string");
+                flb_free(ctx);
+                return -1;
+            }
```
🧹 Nitpick comments (4)
src/aws/flb_aws_credentials_sts.c (1)
178-178: Trailing whitespace detected. Lines 178 and 483 contain trailing whitespace/spaces on otherwise blank lines. While not a functional issue, this may cause linter warnings or be flagged in CI.
Also applies to: 483-483
src/aws/flb_aws_credentials_ec2.c (1)
133-133: Trailing whitespace on blank line. Same pattern as in other credential provider files—line 133 has trailing whitespace. Consider removing for consistency with project style.
plugins/out_kafka/kafka_config.c (1)
220-250: Consider making sasl.oauthbearer.config failure fatal. The OAuth callback registration correctly validates broker patterns for MSK. However, if rd_kafka_conf_set fails for sasl.oauthbearer.config (lines 241-247), execution continues. Since this configuration is part of the OAUTHBEARER setup, a failure here may cause authentication issues at runtime.
```diff
     res = rd_kafka_conf_set(ctx->conf, "sasl.oauthbearer.config",
                             "principal=admin", errstr, sizeof(errstr));
     if (res != RD_KAFKA_CONF_OK) {
         flb_plg_error(ctx->ins, "failed to set sasl.oauthbearer.config: %s", errstr);
+        flb_out_kafka_destroy(ctx);
+        return NULL;
     }
```
plugins/in_kafka/in_kafka.c (1)
347-356: Consider checking rd_kafka_conf_enable_sasl_queue return value. While unlikely to fail, rd_kafka_conf_enable_sasl_queue returns an error code. Adding a check would make error handling more complete.
```diff
     if (ctx->sasl_mechanism &&
         strcasecmp(ctx->sasl_mechanism, "OAUTHBEARER") == 0) {
-        rd_kafka_conf_enable_sasl_queue(kafka_conf, 1);
+        res = rd_kafka_conf_enable_sasl_queue(kafka_conf, 1);
+        if (res != RD_KAFKA_CONF_OK) {
+            flb_plg_warn(ins, "Failed to enable SASL queue (non-critical)");
+        }
         flb_plg_debug(ins, "SASL queue enabled for OAUTHBEARER mechanism");
     }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- include/fluent-bit/aws/flb_aws_msk_iam.h (1 hunks)
- plugins/in_kafka/in_kafka.c (4 hunks)
- plugins/in_kafka/in_kafka.h (1 hunks)
- plugins/out_kafka/kafka.c (0 hunks)
- plugins/out_kafka/kafka_config.c (4 hunks)
- plugins/out_kafka/kafka_config.h (1 hunks)
- src/aws/flb_aws_credentials_ec2.c (1 hunks)
- src/aws/flb_aws_credentials_profile.c (1 hunks)
- src/aws/flb_aws_credentials_sts.c (2 hunks)
- src/aws/flb_aws_msk_iam.c (12 hunks)
- src/flb_kafka.c (1 hunks)
💤 Files with no reviewable changes (1)
- plugins/out_kafka/kafka.c
🚧 Files skipped from review as they are similar to previous changes (1)
- include/fluent-bit/aws/flb_aws_msk_iam.h
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit, the correct CMake flag for using system librdkafka is `FLB_PREFER_SYSTEM_LIB_KAFKA=ON`.
Applied to files:
src/aws/flb_aws_msk_iam.c, src/flb_kafka.c
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit CMakeLists.txt, the system library preference flags are defined as FLB_PREFER_SYSTEM_LIB_ZSTD and FLB_PREFER_SYSTEM_LIB_KAFKA with the FLB_ prefix.
Applied to files:
src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-29T06:25:27.250Z
Learnt from: shadowshot-x
Repo: fluent/fluent-bit PR: 10794
File: tests/internal/aws_compress.c:93-107
Timestamp: 2025-08-29T06:25:27.250Z
Learning: In Fluent Bit, ZSTD compression is enabled by default and is treated as a core dependency, not requiring conditional compilation guards like `#ifdef FLB_HAVE_ZSTD`. Unlike some other optional components such as ARROW/PARQUET (which use `#ifdef FLB_HAVE_ARROW` guards), ZSTD support is always available and doesn't need build-time conditionals. ZSTD headers are included directly without guards across multiple plugins and core components.
Applied to files:
src/aws/flb_aws_msk_iam.c
🧬 Code graph analysis (3)
plugins/out_kafka/kafka_config.c (3)
- src/flb_output.c (2): flb_output_get_property (1108-1111), flb_output_set_property (843-1068)
- src/flb_sds.c (2): flb_sds_create (78-90), flb_sds_destroy (389-399)
- src/aws/flb_aws_msk_iam.c (1): flb_aws_msk_iam_register_oauth_cb (628-761)
src/aws/flb_aws_msk_iam.c (1)
- src/tls/flb_tls.c (1): flb_tls_create (183-232)
plugins/in_kafka/in_kafka.c (4)
- src/flb_input.c (2): flb_input_get_property (776-780), flb_input_set_property (557-774)
- src/flb_sds.c (2): flb_sds_create (78-90), flb_sds_destroy (389-399)
- src/aws/flb_aws_msk_iam.c (2): flb_aws_msk_iam_register_oauth_cb (628-761), flb_aws_msk_iam_destroy (764-786)
- src/flb_kafka.c (1): flb_kafka_opaque_destroy (233-240)
🪛 Cppcheck (2.18.0)
Informational findings only, repeated for plugins/out_kafka/kafka_config.c, src/aws/flb_aws_msk_iam.c, plugins/in_kafka/in_kafka.c, src/flb_kafka.c, src/aws/flb_aws_credentials_ec2.c, src/aws/flb_aws_credentials_profile.c, and src/aws/flb_aws_credentials_sts.c:
- missingIncludeSystem: include files not found by the analyzer
- normalCheckLevelMaxBranches: branch analysis limited; use --check-level=exhaustive to analyze all branches
- toomanyconfigs: only 12 #ifdef configurations checked; use --force to check all configurations
🔇 Additional comments (16)
src/aws/flb_aws_credentials_profile.c (1)
664-674: LGTM! Appropriate log level adjustment for missing credentials file. Downgrading the log level to AWS_CREDS_DEBUG for ENOENT is correct—a missing credentials file is an expected scenario when using alternative credential sources (environment variables, EC2 IMDS, EKS, etc.). This aligns with the handling in get_shared_config_credentials at line 623.
plugins/out_kafka/kafka_config.h (1)
128-131: LGTM! Clean struct refactoring for MSK IAM configuration. Replacing the aws_msk_iam_cluster_arn field with a simple aws_msk_iam flag is a good simplification. Per the PR objectives, region and cluster type are now auto-extracted from broker hostnames, making the explicit cluster ARN unnecessary.
src/flb_kafka.c (1)
96-100: LGTM! Critical fix for proper librdkafka configuration cleanup. Using rd_kafka_conf_destroy() instead of flb_free() is the correct approach. The rd_kafka_conf_t object allocated by rd_kafka_conf_new() has internal structures that must be properly released by librdkafka's destructor, not by a simple memory free.
plugins/in_kafka/in_kafka.h (1)
57-60: LGTM! Consistent with out_kafka configuration structure. The aws_msk_iam flag addition mirrors the changes in plugins/out_kafka/kafka_config.h, ensuring both input and output Kafka plugins have aligned MSK IAM configuration approaches.
plugins/out_kafka/kafka_config.c (4)
61-87: LGTM! Well-structured SASL mechanism handling for MSK IAM. The logic correctly:
- Captures the user-configured SASL mechanism
- Detects the aws_msk_iam alias and converts it to OAUTHBEARER
- Sets appropriate defaults for security.protocol
- Properly manages the sasl_mechanism SDS string lifecycle
209-218: LGTM! Universal OAUTHBEARER enhancement. Enabling the SASL queue for all OAUTHBEARER configurations is a good design choice. This ensures token refresh works correctly on idle connections regardless of the OAuth provider (AWS IAM, OIDC, custom, etc.).
252-286: LGTM! Correct ownership semantics for librdkafka configuration. The ownership handling is well-documented and correct:
- On rd_kafka_new() success: ctx->conf = NULL prevents double-free since librdkafka now owns it
- On failure: ctx->conf remains valid for cleanup in flb_out_kafka_destroy()
The degraded handling for SASL background callback failures (warning instead of fatal) is reasonable—the connection may still function, though with potential token refresh issues on idle connections.
344-351: LGTM! Proper cleanup handling for all failure scenarios. The destroy logic correctly handles both cases:
- When rd_kafka_new() succeeded: rd_kafka_destroy() handles the configuration
- When rd_kafka_new() failed: manual rd_kafka_conf_destroy() is needed
Since ctx->conf is set to NULL after successful producer creation (line 266), there's no risk of double-free.
plugins/in_kafka/in_kafka.c (4)
358-389: LGTM: MSK IAM registration logic is sound. The conditional registration based on explicit user request, SASL mechanism, and broker patterns is appropriate. Error handling properly cleans up on failure.
391-424: LGTM: Ownership semantics and SASL background callbacks. The ownership transfer of kafka_conf is properly handled: preserved on failure, nulled on success. SASL background callback enablement is correctly implemented with appropriate error handling.
486-507: LGTM: Error cleanup path is comprehensive. The cleanup logic properly handles all resources: kafka_topics, kafka.rk, kafka_conf (conditional), opaque, and msk_iam. The conditional destruction of kafka_conf based on whether rd_kafka_new succeeded is correct.
549-553: LGTM: Exit cleanup is correct. MSK IAM cleanup in the exit path is properly implemented and consistent with the init_error cleanup.
src/aws/flb_aws_msk_iam.c (4)
31-55: LGTM: Struct and include updates support new lifecycle management. The added includes (TLS, pthread) and struct fields (is_serverless, cred_tls, provider, lock) properly support the enhanced credential provider lifecycle and concurrency control.
214-512: LGTM: Payload generation refactored with proper credential handling. The updated signature accepts credentials directly, separating concerns. Validation, error handling, and cleanup are comprehensive. The User-Agent addition is a useful enhancement.
514-625: LGTM: OAuth callback with proper concurrency control. The mutex protection for credential provider access is critical and correctly implemented. The lock is held only during credential operations, not payload generation. Error paths properly unlock the mutex, and credentials are destroyed after use.
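The locking pattern described here can be summarized with a small sketch: hold the mutex only while talking to the shared credential provider, then release it before the (potentially slow) signing work. get_credentials(), build_payload(), and creds_destroy() are hypothetical placeholders, not Fluent Bit APIs.

```c
#include <pthread.h>
#include <stddef.h>

struct creds;                               /* opaque credentials */
extern struct creds *get_credentials(void); /* hypothetical provider call */
extern char *build_payload(struct creds *); /* hypothetical signing step */
extern void creds_destroy(struct creds *);

static pthread_mutex_t provider_lock = PTHREAD_MUTEX_INITIALIZER;

char *refresh_token_payload(void)
{
    struct creds *c;
    char *payload;

    pthread_mutex_lock(&provider_lock);
    c = get_credentials();                  /* shared provider: serialized */
    pthread_mutex_unlock(&provider_lock);

    if (!c) {
        return NULL;
    }
    payload = build_payload(c);             /* no lock held while signing */
    creds_destroy(c);                       /* don't keep secrets around */
    return payload;
}
```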
740-761: LGTM: Mutex initialization and OAuth callback registration. Mutex initialization includes proper error handling and cleanup. The opaque context setup and OAuth callback registration are correctly implemented.
Actionable comments posted: 0
♻️ Duplicate comments (2)
src/aws/flb_aws_msk_iam.c (1)
627-760: Verify TLS ownership to avoid potential double-free in MSK IAM destroy.
flb_aws_msk_iam_register_oauth_cb() creates ctx->cred_tls and passes it into flb_standard_chain_provider_create(), then flb_aws_msk_iam_destroy() calls:
```c
if (ctx->provider) {
    flb_aws_provider_destroy(ctx->provider);
}
if (ctx->cred_tls) {
    flb_tls_destroy(ctx->cred_tls);
}
```
Depending on how flb_standard_chain_provider_create() and flb_aws_provider_destroy() are implemented, the provider may already own and destroy cred_tls. If so, calling flb_tls_destroy(ctx->cred_tls) would double-free the same TLS handle.
This concern mirrors an earlier review comment on the same area; please re-confirm the ownership contract and adjust either the provider or the MSK IAM destroy path so TLS is released exactly once (e.g., by having the provider own TLS and dropping the explicit flb_tls_destroy, or vice versa).
```bash
#!/bin/bash
# Inspect AWS provider/TLS ownership to confirm whether cred_tls is freed by the provider.
rg -n "struct flb_aws_provider" src/aws include -n -C3 || true
rg -n "flb_standard_chain_provider_create" src/aws include -n -C5 || true
rg -n "cred_tls" src/aws include -n -C5 || true
```
Also applies to: 763-785
plugins/in_kafka/in_kafka.c (1)
271-276: Handle flb_sds_create(conf) OOM before logging/using ctx->sasl_mechanism.
flb_sds_create(conf) can return NULL, but ctx->sasl_mechanism is immediately formatted with %s (Line 275) and later used in comparisons. On OOM this is undefined behavior and can crash the process. Consider failing init (or at least skipping SASL-specific logic) on allocation failure:
```diff
     conf = flb_input_get_property("rdkafka.sasl.mechanism", ins);
     if (conf) {
-        ctx->sasl_mechanism = flb_sds_create(conf);
-        flb_plg_info(ins, "SASL mechanism configured: %s", ctx->sasl_mechanism);
+        ctx->sasl_mechanism = flb_sds_create(conf);
+        if (!ctx->sasl_mechanism) {
+            flb_plg_error(ins, "failed to allocate SASL mechanism string");
+            goto init_error;
+        }
+        flb_plg_info(ins, "SASL mechanism configured: %s", ctx->sasl_mechanism);
```
🧹 Nitpick comments (3)
plugins/in_kafka/in_kafka.c (1)
365-393: Log when aws_msk_iam was requested but brokers don't look like MSK.
If ctx->aws_msk_iam is true but ctx->kafka.brokers is unset or doesn't contain the expected .kafka./.kafka-serverless. and .amazonaws.com substrings, MSK IAM is silently skipped. That can be confusing for users who set rdkafka.sasl.mechanism=aws_msk_iam but get no IAM callback. Consider adding an explicit warning in the "else" case to make this visible:
```diff
 #ifdef FLB_HAVE_AWS_MSK_IAM
-    if (ctx->aws_msk_iam && ctx->sasl_mechanism &&
-        strcasecmp(ctx->sasl_mechanism, "OAUTHBEARER") == 0) {
-        /* Check if brokers are configured for MSK IAM */
-        if (ctx->kafka.brokers &&
-            (strstr(ctx->kafka.brokers, ".kafka.") || strstr(ctx->kafka.brokers, ".kafka-serverless.")) &&
-            strstr(ctx->kafka.brokers, ".amazonaws.com")) {
+    if (ctx->aws_msk_iam && ctx->sasl_mechanism &&
+        strcasecmp(ctx->sasl_mechanism, "OAUTHBEARER") == 0) {
+        /* Check if brokers are configured for MSK IAM */
+        if (ctx->kafka.brokers &&
+            (strstr(ctx->kafka.brokers, ".kafka.") ||
+             strstr(ctx->kafka.brokers, ".kafka-serverless.")) &&
+            strstr(ctx->kafka.brokers, ".amazonaws.com")) {
             ...
-        }
+        }
+        else {
+            flb_plg_warn(ins,
+                         "aws_msk_iam requested but brokers do not look like MSK; "
+                         "skipping MSK IAM OAuth callback registration");
+        }
     }
 #endif
```
src/aws/flb_aws_msk_iam.c (2)
42-55: Region extraction helper looks safe; add focused tests for broker variants.
extract_region_from_broker() now:
- Strips the port,
- Ensures .amazonaws.com lies within the host portion,
.vpce.amazonaws.comwith thep - broker >= 5guard, and- Bounds region length to
(0, 32].The pointer arithmetic and length checks look correct, but this logic is subtle and critical. It would benefit from unit tests covering at least:
- Standard MSK:
b-1.example.c1.kafka.us-east-1.amazonaws.com[:9098]- Serverless:
boot-xxx.c1.kafka-serverless.eu-west-1.amazonaws.com[:9098]- VPC endpoint:
vpce-xxx.kafka.ap-southeast-1.vpce.amazonaws.com[:9098]- Malformed inputs: missing region, missing
.amazonaws.com, or trailing junk after.amazonaws.com.Also applies to: 137-212
514-625: OAuth refresh callback concurrency fix is sound; consider minor defensive checks.The callback now:
- Validates
opaqueandmsk_iam_ctx,- Builds the MSK host based on
config->is_serverless,- Locks
config->lockaroundprovider->refresh()andprovider->get_credentials(),- Unlocks before building the payload and interacting with librdkafka, and
- Always destroys
credson all paths.This is a good fix for concurrent access to the shared provider. As a minor hardening step, you could also guard against a null
config->provider(e.g., if future refactors allow partial teardown) before dereferencing it inside the locked section, returning token failure early if it’s missing.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- plugins/in_kafka/in_kafka.c (4 hunks)
- src/aws/flb_aws_msk_iam.c (12 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit, the correct CMake flag for using system librdkafka is `FLB_PREFER_SYSTEM_LIB_KAFKA=ON`.
Applied to files:
src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit CMakeLists.txt, the system library preference flags are defined as FLB_PREFER_SYSTEM_LIB_ZSTD and FLB_PREFER_SYSTEM_LIB_KAFKA with the FLB_ prefix.
Applied to files:
src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-29T06:25:27.250Z
Learnt from: shadowshot-x
Repo: fluent/fluent-bit PR: 10794
File: tests/internal/aws_compress.c:93-107
Timestamp: 2025-08-29T06:25:27.250Z
Learning: In Fluent Bit, ZSTD compression is enabled by default and is treated as a core dependency, not requiring conditional compilation guards like `#ifdef FLB_HAVE_ZSTD`. Unlike some other optional components such as ARROW/PARQUET (which use `#ifdef FLB_HAVE_ARROW` guards), ZSTD support is always available and doesn't need build-time conditionals. ZSTD headers are included directly without guards across multiple plugins and core components.
Applied to files:
src/aws/flb_aws_msk_iam.c
🧬 Code graph analysis (1)
plugins/in_kafka/in_kafka.c (4)
- src/flb_input.c (1): flb_input_get_property (776-780)
- src/flb_sds.c (2): flb_sds_create (78-90), flb_sds_destroy (389-399)
- src/aws/flb_aws_msk_iam.c (1): flb_aws_msk_iam_register_oauth_cb (628-761)
- src/flb_kafka.c (1): flb_kafka_opaque_destroy (233-240)
🪛 Cppcheck (2.18.0)
src/aws/flb_aws_msk_iam.c
[information] 31-31: Include file
(missingIncludeSystem)
[information] 31-31: Include file
(missingIncludeSystem)
[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.
(normalCheckLevelMaxBranches)
[information] 31-31: Include file
(missingIncludeSystem)
[information] 31-31: Include file
(missingIncludeSystem)
[information] 31-31: Include file
(missingIncludeSystem)
[information] 31-31: Include file
(missingIncludeSystem)
[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.
(toomanyconfigs)
[information] 49-49: Include file
(missingIncludeSystem)
[information] 50-50: Include file
(missingIncludeSystem)
[information] 51-51: Include file
(missingIncludeSystem)
[information] 52-52: Include file
(missingIncludeSystem)
[information] 53-53: Include file
(missingIncludeSystem)
[information] 54-54: Include file
(missingIncludeSystem)
[information] 40-40: Include file
(missingIncludeSystem)
[information] 41-41: Include file
(missingIncludeSystem)
[information] 40-40: Include file
(missingIncludeSystem)
[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.
(normalCheckLevelMaxBranches)
[information] 52-54: Include file (missingIncludeSystem)
[information] 137-140: Include file (missingIncludeSystem)
[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information. (toomanyconfigs)
[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches. (normalCheckLevelMaxBranches)
plugins/in_kafka/in_kafka.c
[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches. (normalCheckLevelMaxBranches)
[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information. (toomanyconfigs)
🔇 Additional comments (5)
plugins/in_kafka/in_kafka.c (4)
279-297: aws_msk_iam branch looks correct; minor allocation path nuance only.

The aws_msk_iam handling (flagging explicit use, flipping the mechanism to OAUTHBEARER, defaulting security.protocol to SASL_SSL) is consistent, and the second flb_sds_create("OAUTHBEARER") is now correctly NULL-checked. The early return -1 on OAUTHBEARER SDS OOM is safe since no additional resources have been created yet beyond ctx.

No further changes needed here once the initial ctx->sasl_mechanism OOM is guarded.
352-362: SASL queue enablement for OAUTHBEARER is well-scoped.

Enabling the SASL queue only when ctx->sasl_mechanism == "OAUTHBEARER" confines the behavior to OAuth flows and avoids surprising other mechanisms. The debug log is also useful for troubleshooting.
417-429: SASL background callbacks enablement is robustly guarded.

The background callbacks are only enabled for the OAUTHBEARER mechanism, the error path logs a clear warning and destroys the returned rd_kafka_error_t *, and the success path logs at info. This is a solid integration of librdkafka's background refresh with good observability.
491-503: Kafka conf / MSK IAM cleanup paths are consistent with ownership semantics.

The init error path correctly distinguishes:
- rd_kafka_destroy(ctx->kafka.rk) when rk exists (conf owned by rk),
- rd_kafka_conf_destroy(kafka_conf) when rk was never created, and
- flb_aws_msk_iam_destroy(ctx->msk_iam) when MSK IAM was partially initialized.

This avoids leaking kafka_conf and MSK IAM resources without risking double-destroy.

Also applies to: 508-512
src/aws/flb_aws_msk_iam.c (1)
214-512: MSK IAM payload builder refactor is thorough and cleanup-safe.

The refactored build_msk_iam_payload():

- Validates config->region, host, and the key fields on creds early with clear error logs.
- Builds the SigV4 credential, query, canonical request, string-to-sign, and signature with goto error on any failure.
- Correctly frees all intermediate SDS values and the presigned URL in the error path (including empty_payload_hex and key), and nulls empty_payload_hex on the success path to avoid double destroy.
- Generates a base64 payload, then converts to URL-safe base64 and strips padding in place (a small sketch of this conversion follows below).

This looks correct and memory-safe with no obvious leaks or double-frees.
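To make the last bullet concrete, here is a minimal sketch of the in-place URL-safe conversion (illustrative only; the actual helper in src/aws/flb_aws_msk_iam.c may differ):

```c
#include <stddef.h>

/* Convert standard base64 to URL-safe base64 in place ('+' -> '-', '/' -> '_')
 * and strip '=' padding; returns the new length. */
static size_t base64_to_urlsafe(char *buf, size_t len)
{
    size_t i;

    for (i = 0; i < len; i++) {
        if (buf[i] == '+') {
            buf[i] = '-';
        }
        else if (buf[i] == '/') {
            buf[i] = '_';
        }
    }

    while (len > 0 && buf[len - 1] == '=') {
        buf[--len] = '\0';
    }

    return len;
}
```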
a94d17e to
095b814
Compare
@cosmo0920 should be clean now?
Yes, it's clean now:
380724f to
ac685a1
Compare
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@examples/kafka_filter/kafka_msk_iam.conf`:
- Around line 14-16: The example comment shows mixed regions between the listed
hosts (one line mentions us-west-2 while the example broker hostnames use
us-east-1); update the comment so the region in the explanatory text matches the
broker hostnames (e.g., change "us-west-2" to "us-east-1" or make the broker
hostnames use us-west-2) to avoid implying a cross-region MSK cluster; ensure
the descriptive lines that reference example endpoints and the example broker
hostnames are consistent.
ac685a1 to
af20ec9
Compare
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cmake/kafka.cmake`:
- Around line 9-34: When pkg_check_modules(SASL libsasl2) runs but SASL_FOUND is
false, still perform the fallback detection using find_library(SASL2_LIB NAMES
sasl2) and find_path(SASL2_INCLUDE NAMES sasl/sasl.h) and, if found, set
FLB_SASL_CYRUS_ENABLED ON and populate SASL_LIBRARIES and SASL_INCLUDE_DIRS
(same as the existing fallback branch); otherwise emit the existing "cyrus-sasl
not found" message. In short, move/duplicate the find_library/find_path logic
into the inner else branch after pkg_check_modules failure and ensure variables
FLB_SASL_CYRUS_ENABLED, SASL_LIBRARIES and SASL_INCLUDE_DIRS are set
consistently when the fallback succeeds.
- Around line 50-55: The CMake snippet enables FLB_KAFKA_MSK_IAM_ENABLED
regardless of whether FLB_SASL_OAUTHBEARER_ENABLED is available, causing MSK IAM
to be reported enabled on platforms (e.g. Windows without FLB_TLS) that lack
OAUTHBEARER support; change the logic so FLB_KAFKA_MSK_IAM_ENABLED is set based
on FLB_SASL_OAUTHBEARER_ENABLED (enable MSK IAM only when
FLB_SASL_OAUTHBEARER_ENABLED is ON, otherwise set it OFF) and update the
adjacent comment to reflect that MSK IAM requires OAUTHBEARER availability
rather than being "always available now".
🧹 Nitpick comments (1)
cmake/kafka.cmake (1)
59-64: Simplify the redundant WITH_SASL platform branch.

Lines 60-64 set WITH_SASL to ON in both branches. You can collapse this to a single option assignment to reduce noise.

♻️ Suggested simplification
-# On Windows, enable WITH_SASL for SSPI support (built-in, no Cyrus needed)
-# On other platforms, WITH_SASL is always ON (built-in PLAIN/SCRAM)
-if(FLB_SYSTEM_WINDOWS)
-  FLB_OPTION(WITH_SASL ON)
-else()
-  FLB_OPTION(WITH_SASL ON)
-endif()
+# On all platforms, WITH_SASL is ON (built-in PLAIN/SCRAM)
+FLB_OPTION(WITH_SASL ON)
af20ec9 to
85d6506
Compare
@cosmo0920 & @edsiper, requesting your review: could this MR be merged now?

We need to follow the linter rules: your commits are still not following the linter rules.

Plus, a compilation error is occurring: we need to fix it before it can enter the review queue.

Yeah, a red CI is an immediate skip for review.
85d6506 to
96aa98b
Compare
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@plugins/out_kafka/kafka_config.c`:
- Around line 61-86: The code replaces ctx->sasl_mechanism with
flb_sds_create("OAUTHBEARER") without checking for allocation failure; if that
returns NULL and you already destroyed the old SDS, later uses (e.g.,
strcasecmp(tmp, ...), or any access to ctx->sasl_mechanism) can crash. Fix by
creating the new SDS into a temporary variable (e.g., new_sasl =
flb_sds_create("OAUTHBEARER")), check new_sasl != NULL before calling
flb_sds_destroy(ctx->sasl_mechanism) and assigning ctx->sasl_mechanism =
new_sasl; if allocation fails, log an error via flb_plg_error and avoid
destroying the existing ctx->sasl_mechanism or falling back to a safe value (do
not set property or change ctx on failure).
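A short sketch of the allocate-then-swap pattern described above, assuming the surrounding plugin init code where ctx->sasl_mechanism already holds the user-provided value (illustrative, not the PR's exact code):

```c
/* Sketch: allocate the replacement first, then swap; never destroy the old
 * SDS until the new one exists. */
flb_sds_t new_sasl;

new_sasl = flb_sds_create("OAUTHBEARER");
if (!new_sasl) {
    flb_plg_error(ctx->ins, "failed to allocate OAUTHBEARER mechanism string");
    return NULL;                          /* ctx->sasl_mechanism stays valid */
}

flb_sds_destroy(ctx->sasl_mechanism);     /* safe: replacement already exists */
ctx->sasl_mechanism = new_sasl;
```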
🧹 Nitpick comments (1)
cmake/kafka.cmake (1)
72-76: Simplify redundant conditional.

Both branches of this if/else set WITH_SASL ON. The conditional can be removed.

♻️ Suggested simplification
 # Configure librdkafka options
 # On Windows, enable WITH_SASL for SSPI support (built-in, no Cyrus needed)
-# On other platforms, WITH_SASL is always ON (built-in PLAIN/SCRAM)
-if(FLB_SYSTEM_WINDOWS)
-  FLB_OPTION(WITH_SASL ON)
-else()
-  FLB_OPTION(WITH_SASL ON)
-endif()
+# On all platforms, WITH_SASL is always ON (built-in PLAIN/SCRAM, SSPI on Windows)
+FLB_OPTION(WITH_SASL ON)
96aa98b to
ee0bf94
Compare
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cmake/kafka.cmake (1)
52-64: Align platform comment with actual gating logic.

Line 54 says Linux/macOS need SSL and Cyrus SASL, but Lines 62-63 enable OAuth Bearer unconditionally on non-Windows. Update the comment (or gating) so they match.
✏️ Suggested comment fix
-# - Linux/macOS: Needs both SSL and Cyrus SASL
+# - Linux/macOS: Built-in SASL/OAUTHBEARER (no Cyrus SASL required)
🤖 Fix all issues with AI agents
In `@plugins/in_kafka/in_kafka.c`:
- Around line 619-626: Update the help text for the FLB_CONFIG_MAP_STR entry for
"aws_region" in struct flb_in_kafka_config to reflect that region auto-detection
covers both standard MSK endpoints and the serverless/VPC endpoint formats
parsed by the broker parsing code and examples; change the sentence that
currently limits auto-detection to "standard MSK endpoints" to instead state it
also supports serverless/VPC endpoint formats but still requires explicitly
setting aws_region for custom DNS names (e.g., PrivateLink).
In `@plugins/out_kafka/kafka_config.c`:
- Around line 61-95: The code currently logs and swaps ctx->sasl_mechanism
without checking flb_sds_create() return and can leak the old SDS if new_sasl
allocation fails; update the block around
flb_output_get_property("rdkafka.sasl.mechanism", ins) to check the result of
flb_sds_create(tmp) before calling flb_plg_info() or assigning
ctx->sasl_mechanism, return NULL with an error log if allocation fails, and when
creating new_sasl for the aws_msk_iam branch, check new_sasl for NULL and on
failure free the previously allocated ctx->sasl_mechanism (if non-NULL) and ctx,
log the error via flb_plg_error, and return NULL (avoid calling flb_sds_destroy
on a NULL pointer); also ensure any flb_output_set_property or flb_sds_destroy
swaps happen only after successful allocation.
In `@src/aws/flb_aws_msk_iam.c`:
- Around line 720-754: The error paths and flb_aws_msk_iam_destroy currently
call flb_tls_destroy(ctx->cred_tls) even when ctx->provider exists, which leads
to double-free because flb_aws_provider_destroy() already destroys
provider->cred_tls; update the cleanup logic to treat the provider as the owner:
when calling flb_aws_provider_destroy(ctx->provider) do not call
flb_tls_destroy(ctx->cred_tls) afterwards (or set ctx->cred_tls = NULL
immediately after destroying the provider), and in all other error/teardown
paths only call flb_tls_destroy(ctx->cred_tls) if ctx->provider is NULL (or
ctx->cred_tls != NULL). Ensure this change is applied to the error branches
around pthread_mutex_init, the provider init block, and in
flb_aws_msk_iam_destroy.
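A sketch of the ownership rule spelled out above; the function name is illustrative and the struct fields are taken from this comment, so the real code in src/aws/flb_aws_msk_iam.c may differ:

```c
/* Assumes the Fluent Bit AWS provider and TLS headers are included. Once the
 * provider exists it owns cred_tls, so destroying the provider also destroys
 * the TLS context; only destroy cred_tls directly when no provider was made. */
static void msk_iam_ctx_cleanup(struct flb_aws_msk_iam *ctx)
{
    if (!ctx) {
        return;
    }

    if (ctx->provider) {
        flb_aws_provider_destroy(ctx->provider);   /* also frees cred_tls */
        ctx->cred_tls = NULL;
    }
    else if (ctx->cred_tls) {
        flb_tls_destroy(ctx->cred_tls);
        ctx->cred_tls = NULL;
    }

    flb_free(ctx);
}
```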
Implements the core AWS MSK IAM authentication mechanism including:

- OAuth callback mechanism for token generation and refresh
- Token lifecycle management and expiration handling
- Integration with AWS credential providers
- SASL/OAUTHBEARER protocol support for librdkafka

This provides the foundation for AWS MSK IAM authentication support in Fluent Bit's Kafka plugins.

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Enhance EC2 credential provider to better support MSK IAM authentication by improving credential refresh behavior and lifecycle management.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: improve credential refresh for MSK IAM

Enhance profile credential provider to better support MSK IAM authentication by improving credential refresh behavior and lifecycle management.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: improve credential refresh for MSK IAM

Enhance STS credential provider to better support MSK IAM authentication by improving credential refresh behavior and lifecycle management.

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Update Kafka core functionality to support AWS MSK IAM authentication, including necessary configuration and lifecycle improvements. Signed-off-by: Arbin <arbin.cheng@coins.ph>
Enable AWS MSK IAM authentication in the Kafka input plugin:

- Add AWS MSK IAM configuration options
- Integrate with OAuth callback mechanism
- Support automatic credential refresh
- Add TLS configuration for secure connections

Signed-off-by: Arbin <arbin.cheng@coins.ph>

in_kafka: add NULL checks for SASL mechanism allocation

Add NULL checks after flb_sds_create() when allocating SASL mechanism strings to prevent crashes on allocation failure. This covers both the initial SASL mechanism configuration and the AWS MSK IAM OAUTHBEARER conversion.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

in_kafka: add aws_region parameter for MSK IAM auth

- Add aws_region configuration field
- Remove hostname pattern check for MSK IAM registration
- Pass aws_region to MSK IAM registration function
- Support PrivateLink and custom DNS scenarios

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Enable AWS MSK IAM authentication in the Kafka output plugin:

- Add AWS MSK IAM configuration options
- Integrate with OAuth callback mechanism
- Support automatic credential refresh
- Add TLS configuration for secure connections

Signed-off-by: Arbin <arbin.cheng@coins.ph>

out_kafka: add aws_region parameter for MSK IAM auth

- Add aws_region configuration field
- Remove hostname pattern check for MSK IAM registration
- Pass aws_region to MSK IAM registration function
- Support PrivateLink and custom DNS scenarios

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Replace pointer comparison with offset comparison in VPC endpoint detection to improve safety and clarity. Changes 'p >= broker + 5' to 'p - broker >= 5' to properly check offset within string bounds before accessing p - 5.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: use actual broker hostname for signing

- Remove is_serverless detection logic
- Use actual broker hostname instead of constructed host
- Fix memory leak in error cleanup path
- Add broker_host field to store actual hostname
- Update function signature to accept optional region parameter

This aligns with official AWS MSK IAM signers behavior where the signature Host must match the TLS SNI/actual connection host.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: optimize by removing redundant service_host member

Remove service_host from struct flb_aws_msk_iam and construct it dynamically in OAuth callback. This eliminates data redundancy since service_host can be derived from region. Also clean up unused struct flb_msk_iam_cb definition and use flb_sds_len() instead of strlen() for consistency.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: fix buffer overread and improve code robustness

Fix critical security issue and improve code quality:

1. Fix potential buffer overread in extract_region_from_broker():
   - Changed iteration from 'start = end - 1' to 'start = end'
   - Check boundary before reading: while (start > broker && *(start - 1) != '.')
   - Eliminates undefined behavior when broker string is malformed

2. Avoid implicit NUL-termination in base64 URL encoding:
   - Replace pointer-based iteration with length-based for loop
   - Remove dependency on flb_base64_encode() NUL-termination behavior
   - Remove unused variable declaration

These changes address security concerns and improve code maintainability.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: Improve AWS MSK IAM authentication error logging

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: Add error checking for pthread mutex operations in MSK IAM

Signed-off-by: Arbin <arbin.cheng@coins.ph>
- Add comprehensive MSK IAM configuration examples
- Cover Standard MSK, Serverless, PrivateLink scenarios
- Document aws_region parameter usage
- Add troubleshooting guide and IAM permissions
- Update README with detailed usage instructions

Signed-off-by: Arbin <arbin.cheng@coins.ph>

examples: improve README formatting and IAM permissions

- Add blank lines for better readability
- Fix markdown table formatting
- Correct IAM permission ARNs to include cluster UUID
- Add note about finding cluster UUID via AWS Console/CLI
- Improve section spacing and code block formatting

Signed-off-by: Arbin <arbin.cheng@coins.ph>
- Recognize librdkafka built-in SASL support (PLAIN/SCRAM/OAUTHBEARER)
- Mark cyrus-sasl as optional (only for Kerberos/GSSAPI)
- Enable AWS MSK IAM on all platforms with SSL support
- Simplify dependency logic and improve configurability

Signed-off-by: Arbin <arbin.cheng@coins.ph>
ee0bf94 to
ceb77a0
Compare
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cmake/kafka.cmake`:
- Around line 70-75: The CMake option FORCES SSL unconditionally via
FLB_OPTION(WITH_SSL On); change it to honor Fluent Bit TLS by wiring WITH_SSL to
the FLB_TLS variable: replace the hardcoded "On" with the FLB_TLS variable so
WITH_SSL is enabled only when FLB_TLS is true (but still keep
WITH_SASL_OAUTHBEARER handling since SASL SCRAM/OAUTHBEARER require OpenSSL).
Update the FLB_OPTION call that sets WITH_SSL and ensure any logic referencing
WITH_SASL_OAUTHBEARER remains consistent with the new WITH_SSL behavior.
In `@plugins/out_kafka/kafka_config.c`:
- Around line 315-316: Initialize the mk_list ctx->topics immediately after
allocating/setting up the kafka context so it's always valid even if
flb_out_kafka_destroy() runs early (e.g., during opaque/MSK IAM failures);
locate the allocation of the context structure (the variable referenced as ctx)
and call mk_list_init(&ctx->topics) there (before any code paths that can call
flb_out_kafka_destroy), ensuring every code path that may free/cleanup the
context sees an initialized list.
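A minimal sketch of the suggested ordering (context allocation simplified; the point is only that mk_list_init runs before any path that can reach flb_out_kafka_destroy):

```c
struct flb_out_kafka *ctx;

ctx = flb_calloc(1, sizeof(struct flb_out_kafka));
if (!ctx) {
    flb_errno();
    return NULL;
}
ctx->ins = ins;

/* Initialize the topics list immediately so that any early failure path
 * (e.g. opaque/MSK IAM setup) can safely call flb_out_kafka_destroy(). */
mk_list_init(&ctx->topics);
```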
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
plugins/out_kafka/kafka_config.c (1)
342-342: Potential NULL pointer in format string.
tmp from flb_output_get_property("topics", ins) at line 319 may be NULL if the property is not set. Passing NULL to %s in flb_plg_info is undefined behavior on some platforms.

🛠️ Proposed fix
- flb_plg_info(ctx->ins, "brokers='%s' topics='%s'", ctx->kafka.brokers, tmp);
+ flb_plg_info(ctx->ins, "brokers='%s' topics='%s'", ctx->kafka.brokers, tmp ? tmp : FLB_KAFKA_TOPIC);
🤖 Fix all issues with AI agents
In `@cmake/kafka.cmake`:
- Around line 49-75: The OAUTHBEARER flag is currently enabled unconditionally
on non-Windows platforms, which can conflict with librdkafka's requirement that
WITH_SASL_OAUTHBEARER only be enabled when TLS is enabled; update the logic that
sets FLB_SASL_OAUTHBEARER_ENABLED so it is true only when FLB_TLS is true (i.e.,
gate it on FLB_TLS for both Windows and non-Windows flows), ensure
FLB_KAFKA_MSK_IAM_ENABLED continues to follow FLB_SASL_OAUTHBEARER_ENABLED, and
keep the FLB_OPTION(WITH_SASL_OAUTHBEARER ${FLB_SASL_OAUTHBEARER_ENABLED}) usage
unchanged.
🧹 Nitpick comments (1)
plugins/out_kafka/kafka_config.c (1)
255-261: Consider failing on sasl.oauthbearer.config set error.

When rd_kafka_conf_set() fails for sasl.oauthbearer.config, the code logs an error but continues execution. While this is a placeholder value, a configuration failure could indicate a deeper issue that might cause confusing problems later.

♻️ Proposed change to abort on failure
 res = rd_kafka_conf_set(ctx->conf, "sasl.oauthbearer.config", "principal=admin", errstr, sizeof(errstr));
 if (res != RD_KAFKA_CONF_OK) {
     flb_plg_error(ctx->ins, "failed to set sasl.oauthbearer.config: %s", errstr);
+    flb_out_kafka_destroy(ctx);
+    return NULL;
 }
Signed-off-by: Arbin <arbin.cheng@coins.ph>
Signed-off-by: Arbin <arbin.cheng@coins.ph>
8ea3227 to
de92624
Compare
Summary
Add comprehensive AWS MSK IAM authentication support with simplified configuration and fix OAuth token expiration on idle connections. This PR automatically extracts region and cluster type information from broker addresses, provides explicit opt-in for MSK IAM, enhances OAUTHBEARER token refresh for all OAuth methods, and enables automatic background token refresh to prevent authentication failures on idle connections.
Changes
Key Features
Explicit MSK IAM Opt-in
- Users opt in explicitly with rdkafka.sasl.mechanism=aws_msk_iam
- Adds an aws_msk_iam flag to track user intent

Simplified Configuration
- Removes the cluster_arn parameter
- Users only set rdkafka.sasl.mechanism=aws_msk_iam; the plugin switches to OAUTHBEARER internally and registers the OAuth callback

Automatic Region Extraction
Automatic Cluster Type Detection
Universal OAUTHBEARER Enhancements
OAuth Token Lifetime Management
- Background token refresh keeps working even when rd_kafka_poll() is not called

TLS Support for AWS Credentials
Technical Details
Explicit MSK IAM Activation:
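As a rough illustration of this activation step, the fragment below shows how a plugin can detect the convenience mechanism and remap it (names follow the Fluent Bit plugin API referenced in the review comments; this is a sketch, not the PR's exact code):

```c
/* Fragment: assumes it runs inside the out_kafka config setup, where 'ins' is
 * the flb_output_instance and 'ctx' the plugin context (<strings.h> needed
 * for strcasecmp). */
const char *tmp;

tmp = flb_output_get_property("rdkafka.sasl.mechanism", ins);
if (tmp && strcasecmp(tmp, "aws_msk_iam") == 0) {
    /* remember that the user explicitly asked for MSK IAM */
    ctx->aws_msk_iam = FLB_TRUE;

    /* librdkafka has no 'aws_msk_iam' mechanism; switch to OAUTHBEARER */
    flb_output_set_property(ins, "rdkafka.sasl.mechanism", "OAUTHBEARER");

    /* MSK IAM requires TLS: default the protocol if the user did not set it */
    if (flb_output_get_property("rdkafka.security.protocol", ins) == NULL) {
        flb_output_set_property(ins, "rdkafka.security.protocol", "SASL_SSL");
    }
}
```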
Configuration Simplification:
- Users set rdkafka.sasl.mechanism=aws_msk_iam; the plugin converts it to OAUTHBEARER and registers the OAuth callback
- Defaults rdkafka.security.protocol=SASL_SSL (if not configured)

Region Extraction Logic:
- Extracts the region from the broker hostname (e.g., b-1.example.kafka.us-east-1.amazonaws.com)
- Supports *.kafka.<region>.amazonaws.com endpoints
- Supports *.kafka-serverless.<region>.amazonaws.com endpoints

Cluster Type Detection:
- Checks the broker hostname for .kafka-serverless. to determine the cluster type (service name kafka or kafka-serverless); see the sketch below
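A small, self-contained sketch of the parsing idea (an illustration of the rules above, not the code in src/aws/flb_aws_msk_iam.c):

```c
#include <stdio.h>
#include <string.h>

/* Extract the region label that follows ".kafka." or ".kafka-serverless."
 * in an MSK broker hostname and report the matching service name. */
static int parse_msk_broker(const char *broker, char *region, size_t size,
                            const char **service)
{
    const char *start;
    const char *end;
    const char *p;

    p = strstr(broker, ".kafka-serverless.");
    if (p) {
        *service = "kafka-serverless";
        start = p + strlen(".kafka-serverless.");
    }
    else {
        p = strstr(broker, ".kafka.");
        if (!p) {
            return -1;
        }
        *service = "kafka";
        start = p + strlen(".kafka.");
    }

    end = strchr(start, '.');
    if (!end || (size_t) (end - start) >= size) {
        return -1;
    }
    memcpy(region, start, (size_t) (end - start));
    region[end - start] = '\0';
    return 0;
}

int main(void)
{
    char region[64];
    const char *service = NULL;

    if (parse_msk_broker("b-1.example.kafka.us-east-1.amazonaws.com",
                         region, sizeof(region), &service) == 0) {
        printf("region=%s service=%s\n", region, service);
    }
    return 0;
}
```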
Universal OAUTHBEARER Background Processing:

Modified Files
AWS MSK IAM Core (2 files)
include/fluent-bit/aws/flb_aws_msk_iam.h - Updated function signature (removed cluster_arn parameter)
src/aws/flb_aws_msk_iam.c - Refactored region extraction and cluster type detection logic

Kafka Input Plugin (2 files)
plugins/in_kafka/in_kafka.h - Added aws_msk_iam flag, removed deprecated fields
plugins/in_kafka/in_kafka.c - Added explicit MSK IAM activation, universal OAUTHBEARER support

Kafka Output Plugin (3 files)
plugins/out_kafka/kafka_config.h - Added aws_msk_iam flag, removed deprecated fields
plugins/out_kafka/kafka_config.c - Added explicit MSK IAM activation, universal OAUTHBEARER support
plugins/out_kafka/kafka.c - Removed deprecated configuration mapping

AWS Credentials & TLS Support (4 files)
src/aws/flb_aws_credentials_ec2.c - Enhanced TLS support for EC2 metadata credential fetching
src/aws/flb_aws_credentials_profile.c - Enhanced TLS support for profile credential fetching
src/aws/flb_aws_credentials_sts.c - Enhanced TLS support for STS credential fetching
src/flb_kafka.c - Core Kafka integration improvements

Total: 11 files modified
Configuration
Simple AWS MSK IAM Setup:
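A minimal classic-mode example along the lines described in this PR (broker hostname and topic are placeholders; 9098 is the usual MSK IAM SASL port):

```
[OUTPUT]
    name                     kafka
    match                    *
    brokers                  b-1.mycluster.abc123.c2.kafka.us-east-1.amazonaws.com:9098
    topics                   my-topic
    rdkafka.sasl.mechanism   aws_msk_iam
```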
No cluster_arn or additional AWS-specific parameters needed!
This PR ensures compatibility with multiple OAuth scenarios:
1. AWS MSK IAM (Fluent Bit convenience syntax)
2. librdkafka OIDC (unaffected by MSK IAM)
3. librdkafka AWS method (unaffected by MSK IAM)
All configurations benefit from automatic background token refresh!
Design for Extensibility
This PR establishes a clean, extensible pattern for adding cloud provider IAM authentication:
1. Layered Configuration Approach
2. Explicit Opt-in Pattern
3. Benefits of This Design
4. Future Extensions
This architecture makes it straightforward to add:
Each can be added with the same explicit opt-in pattern without affecting existing functionality.
OAuth Token Expiration Fix
Problem Statement:
After prolonged idle periods (5+ minutes), Kafka outputs experienced authentication failures:
Root Cause:
librdkafka's OAuth token refresh mechanism relies on rd_kafka_poll() being called regularly. For idle connections, rd_kafka_poll() is only called when producing messages. This is documented in librdkafka issue #3871:
Solution: Background Callbacks
librdkafka v1.9.0+ provides
rd_kafka_sasl_background_callbacks_enable()specifically for this use case:Timeline with background callbacks:
Benefits:
rd_kafka_poll()not required)TLS Support
This PR includes proper TLS support for AWS credential fetching:
Features:
Usage:
Testing
Packaging
ok-package-test label to test for all targets (requires maintainer to do)

Documentation
Backporting
Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.
Summary by CodeRabbit
New Features
Bug Fixes
Configuration Changes
Documentation
✏️ Tip: You can customize this high-level summary in your review settings.