Skip to content

aws_msk_iam: add AWS MSK IAM authentication support#11270

Open
kalavt wants to merge 10 commits intofluent:masterfrom
kalavt:feature/aws-msk-iam-clean
Open

aws_msk_iam: add AWS MSK IAM authentication support#11270
kalavt wants to merge 10 commits intofluent:masterfrom
kalavt:feature/aws-msk-iam-clean

Conversation

@kalavt
Copy link

@kalavt kalavt commented Dec 9, 2025

Summary

Add comprehensive AWS MSK IAM authentication support with simplified configuration and fix OAuth token expiration on idle connections. This PR automatically extracts region and cluster type information from broker addresses, provides explicit opt-in for MSK IAM, enhances OAUTHBEARER token refresh for all OAuth methods, and enables automatic background token refresh to prevent authentication failures on idle connections.

Changes

Key Features

  1. Explicit MSK IAM Opt-in

    • MSK IAM is only activated when explicitly requested via rdkafka.sasl.mechanism=aws_msk_iam
    • Uses explicit aws_msk_iam flag to track user intent
    • Ensures compatibility with other OAUTHBEARER methods (OIDC, custom OAuth, etc.)
  2. Simplified Configuration

    • No need for cluster_arn parameter
    • Enable AWS MSK IAM authentication by simply setting rdkafka.sasl.mechanism=aws_msk_iam
    • Automatically converts to OAUTHBEARER internally and registers OAuth callback
  3. Automatic Region Extraction

    • Intelligently extract AWS region information from broker addresses
    • Supports both MSK Standard and Serverless formats
  4. Automatic Cluster Type Detection

    • Automatically identify MSK Standard and MSK Serverless cluster types
    • Selects correct service endpoint based on cluster type
  5. Universal OAUTHBEARER Enhancements

    • Enhanced background token refresh for ALL OAUTHBEARER methods
    • Enabled SASL queue and background callbacks for all OAUTHBEARER configurations
    • Benefits AWS MSK IAM, librdkafka OIDC, custom OAuth implementations, etc.
    • Prevents token expiration on idle connections for both producers and consumers
    • Fixes authentication failures that occurred on idle connections after token expiration
  6. OAuth Token Lifetime Management

    • Maintains 5-minute OAuth token lifetime (AWS industry standard, matches AWS Go SDK)
    • Automatic refresh at 80% of token lifetime (4 minutes)
    • librdkafka's background thread handles refresh independently
    • Works perfectly for completely idle connections without requiring rd_kafka_poll()
    • Fixes authentication failures that occurred on idle connections after 5+ minutes
  7. TLS Support for AWS Credentials

    • Added TLS support for secure AWS credential fetching
    • Supports EC2 metadata, ECS, STS, and credential file sources
    • Ensures secure communication with AWS services
    • Properly manages TLS lifecycle (creation and cleanup)

Technical Details

  1. Explicit MSK IAM Activation:

    // Only activates when user explicitly sets aws_msk_iam
    if (ctx->aws_msk_iam && ctx->sasl_mechanism && 
        strcasecmp(ctx->sasl_mechanism, "OAUTHBEARER") == 0) {
        // Register MSK IAM OAuth callback
    }
    
    • Prevents automatic activation for generic OAUTHBEARER users
    • Allows users to use OIDC or custom OAuth on AWS brokers without interference
  2. Configuration Simplification:

    • Users only need to set rdkafka.sasl.mechanism=aws_msk_iam
    • System automatically converts it to OAUTHBEARER and registers OAuth callback
    • Automatically sets rdkafka.security.protocol=SASL_SSL (if not configured)
  3. Region Extraction Logic:

    • Parse region from broker address (e.g., b-1.example.kafka.us-east-1.amazonaws.com)
    • Support MSK Standard format: *.kafka.<region>.amazonaws.com
    • Support MSK Serverless format: *.kafka-serverless.<region>.amazonaws.com
  4. Cluster Type Detection:

    • Check if broker address contains .kafka-serverless. to determine cluster type
    • Automatically select correct service endpoint (kafka or kafka-serverless)
  5. Universal OAUTHBEARER Background Processing:

    // Applied to ALL OAUTHBEARER configurations
    if (ctx->sasl_mechanism && strcasecmp(ctx->sasl_mechanism, "OAUTHBEARER") == 0) {
        rd_kafka_conf_enable_sasl_queue(conf, 1);
        rd_kafka_sasl_background_callbacks_enable(rk);
    }
    
    • Enables automatic token refresh for all OAUTHBEARER methods
    • Handles idle connections, large poll intervals, paused collectors
    • Benefits both consumers (in_kafka) and producers (out_kafka)

Modified Files

AWS MSK IAM Core (2 files)

  • include/fluent-bit/aws/flb_aws_msk_iam.h - Updated function signature (removed cluster_arn parameter)
  • src/aws/flb_aws_msk_iam.c - Refactored region extraction and cluster type detection logic

Kafka Input Plugin (2 files)

  • plugins/in_kafka/in_kafka.h - Added aws_msk_iam flag, removed deprecated fields
  • plugins/in_kafka/in_kafka.c - Added explicit MSK IAM activation, universal OAUTHBEARER support

Kafka Output Plugin (3 files)

  • plugins/out_kafka/kafka_config.h - Added aws_msk_iam flag, removed deprecated fields
  • plugins/out_kafka/kafka_config.c - Added explicit MSK IAM activation, universal OAUTHBEARER support
  • plugins/out_kafka/kafka.c - Removed deprecated configuration mapping

AWS Credentials & TLS Support (4 files)

  • src/aws/flb_aws_credentials_ec2.c - Enhanced TLS support for EC2 metadata credential fetching
  • src/aws/flb_aws_credentials_profile.c - Enhanced TLS support for profile credential fetching
  • src/aws/flb_aws_credentials_sts.c - Enhanced TLS support for STS credential fetching
  • src/flb_kafka.c - Core Kafka integration improvements

Total: 11 files modified

Configuration

Simple AWS MSK IAM Setup:

[INPUT]
    Name kafka
    Brokers b-1.example.kafka.us-east-1.amazonaws.com:9098
    rdkafka.sasl.mechanism aws_msk_iam

No cluster_arn or additional AWS-specific parameters needed!

Supported Configurations

This PR ensures compatibility with multiple OAuth scenarios:

1. AWS MSK IAM (Fluent Bit convenience syntax)

[INPUT]
    Name kafka
    Brokers b-1.my-cluster.kafka.us-east-1.amazonaws.com:9098
    rdkafka.sasl.mechanism aws_msk_iam

2. librdkafka OIDC (unaffected by MSK IAM)

[INPUT]
    Name kafka
    Brokers b-1.my-cluster.kafka.us-east-1.amazonaws.com:9098
    rdkafka.sasl.mechanism OAUTHBEARER
    rdkafka.sasl.oauthbearer.method oidc
    rdkafka.sasl.oauthbearer.client.id my_client_id
    rdkafka.sasl.oauthbearer.client.secret my_secret
    rdkafka.sasl.oauthbearer.token.endpoint.url https://auth.example.com/token

3. librdkafka AWS method (unaffected by MSK IAM)

[INPUT]
    Name kafka
    Brokers b-1.my-cluster.kafka.us-east-1.amazonaws.com:9098
    rdkafka.sasl.mechanism OAUTHBEARER
    rdkafka.sasl.oauthbearer.method aws

All configurations benefit from automatic background token refresh!

Design for Extensibility

This PR establishes a clean, extensible pattern for adding cloud provider IAM authentication:

1. Layered Configuration Approach

Layer 1: Fluent Bit Convenience Syntax (High-level abstraction)
├─ rdkafka.sasl.mechanism=aws_msk_iam       → Auto-configured MSK IAM
├─ rdkafka.sasl.mechanism=gcp_iam           → Future: GCP Kafka IAM
└─ rdkafka.sasl.mechanism=azure_eventhubs   → Future: Azure Event Hubs

Layer 2: librdkafka Native (Direct pass-through)
├─ rdkafka.sasl.mechanism=OAUTHBEARER
├─ rdkafka.sasl.oauthbearer.method=oidc
└─ rdkafka.sasl.oauthbearer.method=aws

Layer 3: Custom Extensions (User plugins)
└─ Custom Fluent Bit extensions

2. Explicit Opt-in Pattern

// Extensible pattern for cloud provider authentication
if (strcasecmp(mechanism, "aws_msk_iam") == 0) {
    ctx->cloud_provider = CLOUD_PROVIDER_AWS;
}
// Future additions follow the same pattern:
// else if (strcasecmp(mechanism, "gcp_iam") == 0) {
//     ctx->cloud_provider = CLOUD_PROVIDER_GCP;
// }

3. Benefits of This Design

  • No interference: Each authentication method is explicitly opted-in
  • Clear separation: Cloud-specific logic isolated from generic OAUTHBEARER handling
  • Easy extension: New providers can be added following the same pattern
  • Backward compatible: Existing OAUTHBEARER configurations unaffected
  • Testable: Each auth method can be tested independently

4. Future Extensions
This architecture makes it straightforward to add:

  • Google Cloud Platform Kafka IAM
  • Azure Event Hubs authentication
  • Other cloud provider-specific OAuth implementations

Each can be added with the same explicit opt-in pattern without affecting existing functionality.

OAuth Token Expiration Fix

Problem Statement:

After prolonged idle periods (5+ minutes), Kafka outputs experienced authentication failures:

[error] SASL authentication error: Access denied (after 302ms in state AUTH_REQ)
[error] 3/3 brokers are down

Root Cause:

librdkafka's OAuth token refresh mechanism relies on rd_kafka_poll() being called regularly. For idle connections, rd_kafka_poll() is only called when producing messages. This is documented in librdkafka issue #3871:

"You need to explicitly call poll() once after creating the client to trigger the oauth callback"

Timeline without background callbacks:

T=0:     Connection established, OAuth token set (5-min lifetime)
T=1-5min: No messages to produce → rd_kafka_poll() never called
T=5min:  Token expires ❌
T=10min: New data arrives, rd_kafka_poll() called
         ├─ librdkafka tries to use expired token
         └─> Access Denied ❌

Solution: Background Callbacks

librdkafka v1.9.0+ provides rd_kafka_sasl_background_callbacks_enable() specifically for this use case:

"Enable SASL OAUTHBEARER refresh callbacks on the librdkafka background thread. This serves as an alternative for applications that do NOT call rd_kafka_poll() at regular intervals"

// Enable automatic token refresh in background thread
rd_kafka_sasl_background_callbacks_enable(rk);

Timeline with background callbacks:

T=0:00  Token generated (expires T=5:00)
        ├─ librdkafka starts background thread
        └─ Token refresh timer active in background

T=4:00  Background thread detects token at 80% lifetime
        ├─ Automatically triggers oauthbearer_token_refresh_cb()
        ├─ New token generated (fresh 5-min lifetime)
        └─> Token refreshed ✅

T=8:00  Background thread refreshes again
T=12:00 Background thread refreshes again
...

Result: Token NEVER expires, even with ZERO traffic ✅

Benefits:

  • ✅ Token refresh occurs automatically every ~4 minutes
  • ✅ Works on completely idle connections (no traffic for hours)
  • ✅ No application involvement needed (rd_kafka_poll() not required)
  • ✅ Built-in librdkafka feature (v1.9.0+, Fluent Bit uses 2.10.1)
  • ✅ Zero authentication failures on idle connections

TLS Support

This PR includes proper TLS support for AWS credential fetching:

ctx->cred_tls = flb_tls_create(FLB_TLS_CLIENT_MODE,
                                FLB_TRUE,
                                FLB_LOG_DEBUG,
                                NULL, NULL, NULL, NULL, NULL, NULL);

Features:

  • ✅ Secure communication with AWS credential services
  • ✅ Supports EC2 metadata, ECS, STS endpoints
  • ✅ Proper TLS lifecycle management (creation and cleanup)
  • ✅ Used by AWS credentials provider chain

Usage:

ctx->provider = flb_standard_chain_provider_create(config,
                                                   ctx->cred_tls,  // ← TLS instance
                                                   ctx->region,
                                                   ...);

Testing

  • Example configuration file for the change
  • Debug log output from testing the change
  • Attached Valgrind output that shows no leaks or memory corruption was found

Packaging

  • Run local packaging test showing all targets (including any new ones) build
  • Set ok-package-test label to test for all targets (requires maintainer to do)

Documentation

  • Documentation required for this feature

Backporting

  • Backport to latest stable release

Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.

Summary by CodeRabbit

  • New Features

    • Auto-detect AWS region from MSK broker addresses; optional aws_region override. MSK IAM now uses OAUTHBEARER with TLS-backed credentials and background token refresh (5 min lifetime).
  • Bug Fixes

    • Improved startup/error cleanup, concurrency safety for credential refresh, and validation requiring brokers when MSK IAM is enabled.
  • Configuration Changes

    • sasl.mechanism=aws_msk_iam activates MSK IAM; legacy aws_msk_iam_cluster_arn and aws_msk_iam options removed.
  • Documentation

    • Added Kafka MSK IAM README and example configuration.

✏️ Tip: You can customize this high-level summary in your review settings.

@kalavt kalavt requested review from a team, cosmo0920 and edsiper as code owners December 9, 2025 02:51
@coderabbitai
Copy link

coderabbitai bot commented Dec 9, 2025

📝 Walkthrough

Walkthrough

Replaces cluster‑ARN MSK IAM with brokers+region, adds TLS-backed persistent AWS provider with mutexed credential refresh, drives MSK IAM via rdkafka.sasl.mechanismOAUTHBEARER, updates Kafka plugin config/ownership/cleanup, expands CMake SASL/Cyrus detection, and adds MSK IAM examples/docs.

Changes

Cohort / File(s) Summary
MSK IAM Header
include/fluent-bit/aws/flb_aws_msk_iam.h
Removed public struct flb_msk_iam_cb; updated flb_aws_msk_iam_register_oauth_cb signature to (config, kconf, opaque, brokers, region) and refreshed comments.
MSK IAM Core
src/aws/flb_aws_msk_iam.c
Major rewrite: add TLS (cred_tls), persistent AWS provider, pthread_mutex_t lock; fixed token lifetime (300s); add extract_region_from_broker; payload builder now accepts explicit creds; registration accepts brokers+region; updated init/destroy and concurrency.
Input Kafka Plugin
plugins/in_kafka/in_kafka.c, plugins/in_kafka.h
Read rdkafka.sasl.mechanism; treat aws_msk_iam as explicit request → switch to OAUTHBEARER; require brokers to register callback; add aws_region member; remove cluster_arn config; improve conf ownership/cleanup and enable SASL background callbacks.
Output Kafka Plugin
plugins/out_kafka/kafka.c, plugins/out_kafka/kafka_config.c, plugins/out_kafka/kafka_config.h
Add aws_region config option; remove aws_msk_iam_cluster_arn and legacy aws_msk_iam; detect sasl.mechanism early, map aws_msk_iamOAUTHBEARER, enable SASL queue, conditionally register OAuth callback with brokers+region, and adjust conf ownership/cleanup.
Kafka Conf Utilities
src/flb_kafka.c
Fix error-path cleanup: destroy rd_kafka_conf_t via rd_kafka_conf_destroy instead of flb_free.
AWS Credentials Minor Edits
src/aws/flb_aws_credentials_ec2.c, src/aws/flb_aws_credentials_profile.c, src/aws/flb_aws_credentials_sts.c
Whitespace/log-level tweaks; no behavior changes.
Examples / Docs
examples/kafka_filter/README.md, examples/kafka_filter/kafka_msk_iam.conf
Add README and sample config demonstrating MSK IAM scenarios, region auto-detection, PrivateLink, testing and troubleshooting guidance.
Build / CMake
cmake/kafka.cmake
Adjust SASL/Cyrus detection and flags; introduce FLB_SASL_CYRUS_ENABLED/WITH_SASL_CYRUS; treat SASL built-in; gate OAuth Bearer on TLS and update status messages.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant FB as Fluent Bit
    participant LR as librdkafka
    participant MSK as flb_aws_msk_iam
    participant AWS as AWS Provider

    Note over FB: Init reads `rdkafka.sasl.mechanism`
    FB->>LR: create rd_kafka_conf()
    FB->>FB: detect sasl.mechanism == "aws_msk_iam"
    FB->>MSK: flb_aws_msk_iam_register_oauth_cb(config,kconf,opaque,brokers,region)
    MSK->>MSK: extract_region_from_broker / init TLS / create provider / init mutex
    MSK-->>FB: register callback handle

    Note over LR,MSK: Background token refresh
    LR->>MSK: oauthbearer_token_refresh_cb(request)
    MSK->>MSK: lock mutex (rgba(0,128,0,0.5))
    MSK->>AWS: provider->refresh_credentials()
    AWS-->>MSK: credentials
    MSK->>MSK: build_msk_iam_payload(host, credentials)
    MSK-->>LR: rd_kafka_oauthbearer_set_token(token, lifetime=300s)
    MSK->>MSK: unlock mutex (rgba(0,128,0,0.5))
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • patrick-stephens
  • niedbalski
  • edsiper
  • fujimotos

Poem

🐇 I sniffed the brokers, found a region bright,
Cluster ARNs hopped off into the night.
TLS and mutex guard the token's song,
Background refresh keeps sessions strong.
Hop-hop hooray — MSK IAM sings along!

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 40.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title clearly and concisely describes the main change: adding AWS MSK IAM authentication support. It is specific, directly related to the changeset, and accurately summarizes the primary objective without unnecessary detail.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

🧹 Nitpick comments (1)
plugins/out_kafka/kafka_config.c (1)

74-82: Consider checking return values from flb_output_set_property.

The calls to flb_output_set_property() at lines 74 and 81 don't check return values. While unlikely to fail in practice, property setting can fail on allocation errors. For robustness:

-            flb_output_set_property(ins, "rdkafka.sasl.mechanism", "OAUTHBEARER");
+            if (flb_output_set_property(ins, "rdkafka.sasl.mechanism", "OAUTHBEARER") < 0) {
+                flb_plg_error(ins, "failed to set OAUTHBEARER SASL mechanism");
+                flb_sds_destroy(ctx->sasl_mechanism);
+                flb_free(ctx);
+                return NULL;
+            }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7ded9ae and b486cb7.

📒 Files selected for processing (11)
  • include/fluent-bit/aws/flb_aws_msk_iam.h (1 hunks)
  • plugins/in_kafka/in_kafka.c (4 hunks)
  • plugins/in_kafka/in_kafka.h (1 hunks)
  • plugins/out_kafka/kafka.c (0 hunks)
  • plugins/out_kafka/kafka_config.c (4 hunks)
  • plugins/out_kafka/kafka_config.h (1 hunks)
  • src/aws/flb_aws_credentials_ec2.c (1 hunks)
  • src/aws/flb_aws_credentials_profile.c (1 hunks)
  • src/aws/flb_aws_credentials_sts.c (2 hunks)
  • src/aws/flb_aws_msk_iam.c (12 hunks)
  • src/flb_kafka.c (1 hunks)
💤 Files with no reviewable changes (1)
  • plugins/out_kafka/kafka.c
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit, the correct CMake flag for using system librdkafka is `FLB_PREFER_SYSTEM_LIB_KAFKA=ON`.

Applied to files:

  • src/flb_kafka.c
  • src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit CMakeLists.txt, the system library preference flags are defined as FLB_PREFER_SYSTEM_LIB_ZSTD and FLB_PREFER_SYSTEM_LIB_KAFKA with the FLB_ prefix.

Applied to files:

  • src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-29T06:25:27.250Z
Learnt from: shadowshot-x
Repo: fluent/fluent-bit PR: 10794
File: tests/internal/aws_compress.c:93-107
Timestamp: 2025-08-29T06:25:27.250Z
Learning: In Fluent Bit, ZSTD compression is enabled by default and is treated as a core dependency, not requiring conditional compilation guards like `#ifdef FLB_HAVE_ZSTD`. Unlike some other optional components such as ARROW/PARQUET (which use `#ifdef FLB_HAVE_ARROW` guards), ZSTD support is always available and doesn't need build-time conditionals. ZSTD headers are included directly without guards across multiple plugins and core components.

Applied to files:

  • src/aws/flb_aws_msk_iam.c
🧬 Code graph analysis (3)
plugins/in_kafka/in_kafka.c (4)
src/flb_input.c (2)
  • flb_input_get_property (776-780)
  • flb_input_set_property (557-774)
src/flb_sds.c (2)
  • flb_sds_create (78-90)
  • flb_sds_destroy (389-399)
src/aws/flb_aws_msk_iam.c (2)
  • flb_aws_msk_iam_register_oauth_cb (628-761)
  • flb_aws_msk_iam_destroy (764-786)
src/flb_kafka.c (1)
  • flb_kafka_opaque_destroy (233-240)
plugins/out_kafka/kafka_config.c (2)
src/flb_output.c (2)
  • flb_output_get_property (1108-1111)
  • flb_output_set_property (843-1068)
src/aws/flb_aws_msk_iam.c (1)
  • flb_aws_msk_iam_register_oauth_cb (628-761)
include/fluent-bit/aws/flb_aws_msk_iam.h (1)
src/aws/flb_aws_msk_iam.c (1)
  • flb_aws_msk_iam_register_oauth_cb (628-761)
🪛 Cppcheck (2.18.0)
src/aws/flb_aws_credentials_sts.c

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

src/flb_kafka.c

[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

src/aws/flb_aws_credentials_ec2.c

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

plugins/in_kafka/in_kafka.c

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

src/aws/flb_aws_credentials_profile.c

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

plugins/out_kafka/kafka_config.c

[information] 61-61: Include file

(missingIncludeSystem)


[information] 62-62: Include file

(missingIncludeSystem)


[information] 63-63: Include file

(missingIncludeSystem)


[information] 65-65: Include file

(missingIncludeSystem)


[information] 66-66: Include file

(missingIncludeSystem)


[information] 67-67: Include file

(missingIncludeSystem)


[information] 68-68: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

src/aws/flb_aws_msk_iam.c

[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] 49-49: Include file

(missingIncludeSystem)


[information] 50-50: Include file

(missingIncludeSystem)


[information] 51-51: Include file

(missingIncludeSystem)


[information] 52-52: Include file

(missingIncludeSystem)


[information] 53-53: Include file

(missingIncludeSystem)


[information] 54-54: Include file

(missingIncludeSystem)


[information] 40-40: Include file

(missingIncludeSystem)


[information] 41-41: Include file

(missingIncludeSystem)


[information] 40-40: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] 53-53: Include file

(missingIncludeSystem)


[information] 54-54: Include file

(missingIncludeSystem)


[information] 52-52: Include file

(missingIncludeSystem)


[information] 53-53: Include file

(missingIncludeSystem)


[information] 54-54: Include file

(missingIncludeSystem)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] 137-137: Include file

(missingIncludeSystem)


[information] 138-138: Include file

(missingIncludeSystem)


[information] 139-139: Include file

(missingIncludeSystem)


[information] 140-140: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

🔇 Additional comments (24)
src/aws/flb_aws_credentials_ec2.c (1)

128-139: Whitespace-only change acknowledged.

This is a minor formatting change adding a blank line after the debug log. No functional impact.

src/aws/flb_aws_credentials_sts.c (1)

173-186: Formatting improvements enhance code consistency.

The added blank lines after debug statements in refresh_fn_sts and refresh_fn_eks align these functions with the existing style in init_fn_sts, init_fn_eks, get_credentials_fn_sts, and get_credentials_fn_eks. This improves readability by separating logging statements from control flow logic.

Also applies to: 478-489

src/aws/flb_aws_credentials_profile.c (1)

664-674: LGTM! Log level adjustment is appropriate.

Downgrading the log level for a missing credentials file (ENOENT) to debug is correct. A missing ~/.aws/credentials file is not an error when other credential sources (EC2/ECS metadata, config file with credential_process, environment variables) may be available. This aligns with the similar handling for the config file at line 623.

src/flb_kafka.c (1)

96-100: LGTM! Critical bug fix for proper resource cleanup.

Using rd_kafka_conf_destroy() is the correct way to clean up a rd_kafka_conf_t* object allocated by rd_kafka_conf_new(). The previous use of flb_free() would have caused memory corruption or leaks since librdkafka's configuration objects have internal structures that require proper destruction.

plugins/in_kafka/in_kafka.h (1)

57-60: LGTM! Clean API simplification.

The explicit aws_msk_iam flag replaces the removed aws_msk_iam_cluster_arn field, aligning with the PR's shift to broker-based region detection. The flag clearly indicates user intent via rdkafka.sasl.mechanism=aws_msk_iam, making the opt-in explicit rather than implicit.

plugins/out_kafka/kafka_config.h (1)

128-137: LGTM! Consistent with in_kafka changes.

The aws_msk_iam flag and sasl_mechanism field additions mirror the in_kafka.h structure, maintaining consistency across Kafka input and output plugins. The explicit flag design is cleaner than the previous ARN-based approach.

include/fluent-bit/aws/flb_aws_msk_iam.h (1)

37-49: LGTM! API simplification with clear documentation.

The updated signature removes cluster_arn in favor of brokers, enabling automatic region extraction from broker addresses. This simplifies user configuration—users no longer need to provide the cluster ARN. The documentation clearly explains each parameter's purpose.

plugins/out_kafka/kafka_config.c (4)

209-218: LGTM! Essential for OAuth token refresh on idle connections.

Enabling the SASL queue before producer creation is correct. This allows librdkafka's background thread to handle OAuth token refresh even when rd_kafka_poll() isn't called frequently, preventing authentication failures on idle connections.


220-250: LGTM! Well-structured MSK IAM registration with proper validation.

Good defensive checks: explicit flag, OAUTHBEARER mechanism, and MSK broker patterns must all be present. The callback registration failure correctly triggers cleanup and return.

One observation: the sasl.oauthbearer.config error at lines 243-247 only logs but doesn't fail. This is likely fine since it's a secondary configuration, but verify this doesn't cause issues with librdkafka's OAUTHBEARER validation.


253-285: LGTM! Correct ownership semantics and background callback handling.

Setting ctx->conf = NULL after successful rd_kafka_new() correctly reflects ownership transfer—librdkafka now owns the configuration. The SASL background callback enabling is done post-creation as required, and the non-fatal warning on failure is appropriate (graceful degradation).


344-351: LGTM! Proper two-path cleanup for configuration ownership.

The conditional cleanup correctly handles both scenarios:

  1. Producer created: rd_kafka_destroy() handles both producer and configuration
  2. Producer creation failed: ctx->conf is still valid and needs explicit rd_kafka_conf_destroy()

This pairs correctly with the ctx->conf = NULL assignment after successful rd_kafka_new().

plugins/in_kafka/in_kafka.c (6)

339-345: LGTM!

Opaque context creation and configuration follows proper error handling patterns.


347-356: LGTM!

Enabling SASL queue for OAUTHBEARER is the correct approach for background token refresh. The comment clearly explains the benefit for all OAUTHBEARER methods.


358-389: LGTM with minor observation.

The MSK IAM OAuth callback registration logic is well-structured. The broker pattern validation (".kafka." or ".kafka-serverless." with ".amazonaws.com") appropriately identifies MSK endpoints.

Note: The sasl.oauthbearer.config setting failure at lines 379-385 only logs an error but doesn't fail initialization. This appears intentional since the principal=admin value is primarily for librdkafka's internal validation and the actual authentication uses the generated token.


391-424: LGTM!

Excellent ownership semantics handling:

  • kafka_conf = NULL after successful rd_kafka_new() correctly prevents double-free
  • Clear comments document the ownership transfer
  • Graceful degradation when SASL background callbacks fail (warn but continue)

The SASL background callback enablement ensures tokens refresh even during idle periods.


486-512: LGTM!

Comprehensive error cleanup path:

  • Correctly handles the mutual exclusivity between kafka.rk (owns conf) and standalone kafka_conf
  • MSK IAM resources cleaned up under proper compile guard
  • Proper cleanup order prevents use-after-free

549-559: LGTM!

Exit cleanup mirrors the error path cleanup correctly. The cleanup order (MSK IAM → opaque → sasl_mechanism) is appropriate.

src/aws/flb_aws_msk_iam.c (7)

42-55: LGTM!

Well-designed structure with proper thread safety considerations. The 5-minute token lifetime aligns with AWS MSK IAM standards, and the mutex protects credential provider access during concurrent refresh callbacks.


214-268: LGTM!

Clean function signature with explicit credential passing (rather than fetching internally). Input validation is thorough with informative error messages.


432-478: LGTM!

The presigned URL construction and Base64 URL encoding are implemented correctly:

  • User-Agent parameter added for identification
  • Base64 URL encoding properly converts +-, /_, and strips padding
  • Memory cleanup is thorough in both success and error paths

555-612: LGTM!

Excellent concurrency handling:

  • Mutex protects credential provider access during refresh/get_credentials
  • Lock held for minimal duration (released immediately after getting credentials)
  • Credentials destroyed after use (security best practice)

The explicit vtable calls (provider_vtable->refresh, provider_vtable->get_credentials) assume the provider is valid, which is guaranteed by the context lifecycle.


763-785: LGTM!

Proper cleanup order respects dependencies:

  1. Provider (uses TLS)
  2. TLS
  3. Region string
  4. Mutex
  5. Context struct

The unconditional pthread_mutex_destroy is safe since a context only reaches this function if initialization succeeded (including mutex init).


519-551: LGTM!

Host buffer sizing is safe - the maximum formatted string length (~63 characters) is well within the 256-byte buffer, especially given the 32-character region length limit enforced in extract_region_from_broker.


728-738: Verify provider initialization sequence order.

The code calls sync() before init(), but Fluent Bit's documented AWS credential provider lifecycle calls init() at startup before periodic sync() calls. Confirm whether this reversed sequence is intentional for MSK IAM OAuth callback initialization or if init() should be called first.

Copy link

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

@cosmo0920
Copy link
Contributor

Still failing our linter:
❌ Commit b486cb7 failed:
Missing prefix in commit subject: 'aws_msk_iam,in_kafka,out_kafka: add AWS MSK IAM authentication support'

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

♻️ Duplicate comments (3)
src/aws/flb_aws_msk_iam.c (2)

137-212: Verify bounds check for VPC endpoint detection.

Line 180: The condition if (p >= broker + 5 && ...) uses pointer comparison. While likely correct, using offset comparison if (p - broker >= 5 && ...) would be clearer and more portable, explicitly checking there are at least 5 bytes before p before accessing p - 5.

Apply this diff for clarity:

     /* Check for VPC endpoint format: .vpce.amazonaws.com */
-    if (p >= broker + 5 && strncmp(p - 5, ".vpce", 5) == 0) {
+    if (p - broker >= 5 && strncmp(p - 5, ".vpce", 5) == 0) {
         /* For VPC endpoints, region ends at .vpce */
         end = p - 5;
     }

Consider adding unit tests for:

  • VPC endpoint format (vpce-xxx.kafka.region.vpce.amazonaws.com)
  • Brokers with/without ports
  • Edge cases near 32-character region limit

701-738: Verify TLS ownership to prevent potential double-free.

Lines 702-711 create ctx->cred_tls, which is then passed to flb_standard_chain_provider_create at line 714. If the provider stores this pointer internally (e.g., in provider->cred_tls), then the destroy path at lines 771-776 may cause a double-free: flb_aws_provider_destroy(ctx->provider) would free the TLS handle, and then flb_tls_destroy(ctx->cred_tls) would attempt to free it again.

Run the following script to check if the provider stores the TLS pointer:

#!/bin/bash
# Check if flb_standard_chain_provider_create stores the cred_tls pointer
ast-grep --pattern $'flb_standard_chain_provider_create($$$) {
  $$$
  $PROVIDER->cred_tls = $TLS;
  $$$
}'

# Also check the provider destroy function
rg -A 10 "flb_aws_provider_destroy|flb_standard_chain_provider_destroy" --type c
plugins/in_kafka/in_kafka.c (1)

271-297: Add NULL checks after flb_sds_create calls.

Both flb_sds_create calls (lines 274 and 286) can fail and return NULL, but there are no checks. If allocation fails, ctx->sasl_mechanism will be NULL, causing crashes in subsequent strcasecmp calls at lines 279, 353, 361, and 412.

Apply this diff to add proper NULL checks:

     conf = flb_input_get_property("rdkafka.sasl.mechanism", ins);
     if (conf) {
         ctx->sasl_mechanism = flb_sds_create(conf);
+        if (!ctx->sasl_mechanism) {
+            flb_plg_error(ins, "failed to allocate SASL mechanism string");
+            flb_free(ctx);
+            return -1;
+        }
         flb_plg_info(ins, "SASL mechanism configured: %s", ctx->sasl_mechanism);
         
 #ifdef FLB_HAVE_AWS_MSK_IAM
         /* Check if using aws_msk_iam as SASL mechanism */
         if (strcasecmp(conf, "aws_msk_iam") == 0) {
             /* Mark that user explicitly requested AWS MSK IAM */
             ctx->aws_msk_iam = FLB_TRUE;
             
             /* Set SASL mechanism to OAUTHBEARER for librdkafka */
             flb_input_set_property(ins, "rdkafka.sasl.mechanism", "OAUTHBEARER");
             flb_sds_destroy(ctx->sasl_mechanism);
             ctx->sasl_mechanism = flb_sds_create("OAUTHBEARER");
+            if (!ctx->sasl_mechanism) {
+                flb_plg_error(ins, "failed to allocate SASL mechanism string");
+                flb_free(ctx);
+                return -1;
+            }
🧹 Nitpick comments (4)
src/aws/flb_aws_credentials_sts.c (1)

178-178: Trailing whitespace detected.

Lines 178 and 483 contain trailing whitespace/spaces on otherwise blank lines. While not a functional issue, this may cause linter warnings or be flagged in CI.

-    
+

Also applies to: 483-483

src/aws/flb_aws_credentials_ec2.c (1)

133-133: Trailing whitespace on blank line.

Same pattern as in other credential provider files—line 133 has trailing whitespace. Consider removing for consistency with project style.

plugins/out_kafka/kafka_config.c (1)

220-250: Consider making sasl.oauthbearer.config failure fatal.

The OAuth callback registration correctly validates broker patterns for MSK. However, if rd_kafka_conf_set fails for sasl.oauthbearer.config (lines 241-247), execution continues. Since this configuration is part of the OAUTHBEARER setup, a failure here may cause authentication issues at runtime.

            res = rd_kafka_conf_set(ctx->conf, "sasl.oauthbearer.config",
                                    "principal=admin", errstr, sizeof(errstr));
            if (res != RD_KAFKA_CONF_OK) {
                flb_plg_error(ctx->ins,
                             "failed to set sasl.oauthbearer.config: %s",
                             errstr);
+               flb_out_kafka_destroy(ctx);
+               return NULL;
            }
plugins/in_kafka/in_kafka.c (1)

347-356: Consider checking rd_kafka_conf_enable_sasl_queue return value.

While unlikely to fail, rd_kafka_conf_enable_sasl_queue returns an error code. Adding a check would make error handling more complete.

     if (ctx->sasl_mechanism && strcasecmp(ctx->sasl_mechanism, "OAUTHBEARER") == 0) {
-        rd_kafka_conf_enable_sasl_queue(kafka_conf, 1);
+        res = rd_kafka_conf_enable_sasl_queue(kafka_conf, 1);
+        if (res != RD_KAFKA_CONF_OK) {
+            flb_plg_warn(ins, "Failed to enable SASL queue (non-critical)");
+        }
         flb_plg_debug(ins, "SASL queue enabled for OAUTHBEARER mechanism");
     }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b486cb7 and 2264d52.

📒 Files selected for processing (11)
  • include/fluent-bit/aws/flb_aws_msk_iam.h (1 hunks)
  • plugins/in_kafka/in_kafka.c (4 hunks)
  • plugins/in_kafka/in_kafka.h (1 hunks)
  • plugins/out_kafka/kafka.c (0 hunks)
  • plugins/out_kafka/kafka_config.c (4 hunks)
  • plugins/out_kafka/kafka_config.h (1 hunks)
  • src/aws/flb_aws_credentials_ec2.c (1 hunks)
  • src/aws/flb_aws_credentials_profile.c (1 hunks)
  • src/aws/flb_aws_credentials_sts.c (2 hunks)
  • src/aws/flb_aws_msk_iam.c (12 hunks)
  • src/flb_kafka.c (1 hunks)
💤 Files with no reviewable changes (1)
  • plugins/out_kafka/kafka.c
🚧 Files skipped from review as they are similar to previous changes (1)
  • include/fluent-bit/aws/flb_aws_msk_iam.h
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit, the correct CMake flag for using system librdkafka is `FLB_PREFER_SYSTEM_LIB_KAFKA=ON`.

Applied to files:

  • src/aws/flb_aws_msk_iam.c
  • src/flb_kafka.c
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit CMakeLists.txt, the system library preference flags are defined as FLB_PREFER_SYSTEM_LIB_ZSTD and FLB_PREFER_SYSTEM_LIB_KAFKA with the FLB_ prefix.

Applied to files:

  • src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-29T06:25:27.250Z
Learnt from: shadowshot-x
Repo: fluent/fluent-bit PR: 10794
File: tests/internal/aws_compress.c:93-107
Timestamp: 2025-08-29T06:25:27.250Z
Learning: In Fluent Bit, ZSTD compression is enabled by default and is treated as a core dependency, not requiring conditional compilation guards like `#ifdef FLB_HAVE_ZSTD`. Unlike some other optional components such as ARROW/PARQUET (which use `#ifdef FLB_HAVE_ARROW` guards), ZSTD support is always available and doesn't need build-time conditionals. ZSTD headers are included directly without guards across multiple plugins and core components.

Applied to files:

  • src/aws/flb_aws_msk_iam.c
🧬 Code graph analysis (3)
plugins/out_kafka/kafka_config.c (3)
src/flb_output.c (2)
  • flb_output_get_property (1108-1111)
  • flb_output_set_property (843-1068)
src/flb_sds.c (2)
  • flb_sds_create (78-90)
  • flb_sds_destroy (389-399)
src/aws/flb_aws_msk_iam.c (1)
  • flb_aws_msk_iam_register_oauth_cb (628-761)
src/aws/flb_aws_msk_iam.c (1)
src/tls/flb_tls.c (1)
  • flb_tls_create (183-232)
plugins/in_kafka/in_kafka.c (4)
src/flb_input.c (2)
  • flb_input_get_property (776-780)
  • flb_input_set_property (557-774)
src/flb_sds.c (2)
  • flb_sds_create (78-90)
  • flb_sds_destroy (389-399)
src/aws/flb_aws_msk_iam.c (2)
  • flb_aws_msk_iam_register_oauth_cb (628-761)
  • flb_aws_msk_iam_destroy (764-786)
src/flb_kafka.c (1)
  • flb_kafka_opaque_destroy (233-240)
🪛 Cppcheck (2.18.0)
plugins/out_kafka/kafka_config.c

[information] 61-61: Include file

(missingIncludeSystem)


[information] 62-62: Include file

(missingIncludeSystem)


[information] 63-63: Include file

(missingIncludeSystem)


[information] 65-65: Include file

(missingIncludeSystem)


[information] 66-66: Include file

(missingIncludeSystem)


[information] 67-67: Include file

(missingIncludeSystem)


[information] 68-68: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

src/aws/flb_aws_msk_iam.c

[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] 49-49: Include file

(missingIncludeSystem)


[information] 50-50: Include file

(missingIncludeSystem)


[information] 51-51: Include file

(missingIncludeSystem)


[information] 52-52: Include file

(missingIncludeSystem)


[information] 53-53: Include file

(missingIncludeSystem)


[information] 54-54: Include file

(missingIncludeSystem)


[information] 40-40: Include file

(missingIncludeSystem)


[information] 41-41: Include file

(missingIncludeSystem)


[information] 40-40: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] 53-53: Include file

(missingIncludeSystem)


[information] 54-54: Include file

(missingIncludeSystem)


[information] 52-52: Include file

(missingIncludeSystem)


[information] 53-53: Include file

(missingIncludeSystem)


[information] 54-54: Include file

(missingIncludeSystem)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] 137-137: Include file

(missingIncludeSystem)


[information] 138-138: Include file

(missingIncludeSystem)


[information] 139-139: Include file

(missingIncludeSystem)


[information] 140-140: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

plugins/in_kafka/in_kafka.c

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

src/flb_kafka.c

[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

src/aws/flb_aws_credentials_ec2.c

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

src/aws/flb_aws_credentials_profile.c

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

src/aws/flb_aws_credentials_sts.c

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

🔇 Additional comments (16)
src/aws/flb_aws_credentials_profile.c (1)

664-674: LGTM! Appropriate log level adjustment for missing credentials file.

Downgrading the log level to AWS_CREDS_DEBUG for ENOENT is correct—a missing credentials file is an expected scenario when using alternative credential sources (environment variables, EC2 IMDS, EKS, etc.). This aligns with the handling in get_shared_config_credentials at line 623.

plugins/out_kafka/kafka_config.h (1)

128-131: LGTM! Clean struct refactoring for MSK IAM configuration.

Replacing the aws_msk_iam_cluster_arn field with a simple aws_msk_iam flag is a good simplification. Per the PR objectives, region and cluster type are now auto-extracted from broker hostnames, making the explicit cluster ARN unnecessary.

src/flb_kafka.c (1)

96-100: LGTM! Critical fix for proper librdkafka configuration cleanup.

Using rd_kafka_conf_destroy() instead of flb_free() is the correct approach. The rd_kafka_conf_t object allocated by rd_kafka_conf_new() has internal structures that must be properly released by librdkafka's destructor, not by a simple memory free.

plugins/in_kafka/in_kafka.h (1)

57-60: LGTM! Consistent with out_kafka configuration structure.

The aws_msk_iam flag addition mirrors the changes in plugins/out_kafka/kafka_config.h, ensuring both input and output Kafka plugins have aligned MSK IAM configuration approaches.

plugins/out_kafka/kafka_config.c (4)

61-87: LGTM! Well-structured SASL mechanism handling for MSK IAM.

The logic correctly:

  1. Captures the user-configured SASL mechanism
  2. Detects the aws_msk_iam alias and converts it to OAUTHBEARER
  3. Sets appropriate defaults for security.protocol
  4. Properly manages the sasl_mechanism SDS string lifecycle

209-218: LGTM! Universal OAUTHBEARER enhancement.

Enabling the SASL queue for all OAUTHBEARER configurations is a good design choice. This ensures token refresh works correctly on idle connections regardless of the OAuth provider (AWS IAM, OIDC, custom, etc.).


252-286: LGTM! Correct ownership semantics for librdkafka configuration.

The ownership handling is well-documented and correct:

  • On rd_kafka_new() success: ctx->conf = NULL prevents double-free since librdkafka now owns it
  • On failure: ctx->conf remains valid for cleanup in flb_out_kafka_destroy()

The degraded handling for SASL background callback failures (warning instead of fatal) is reasonable—the connection may still function, though with potential token refresh issues on idle connections.


344-351: LGTM! Proper cleanup handling for all failure scenarios.

The destroy logic correctly handles both cases:

  1. When rd_kafka_new() succeeded: rd_kafka_destroy() handles the configuration
  2. When rd_kafka_new() failed: manual rd_kafka_conf_destroy() is needed

Since ctx->conf is set to NULL after successful producer creation (line 266), there's no risk of double-free.

plugins/in_kafka/in_kafka.c (4)

358-389: LGTM: MSK IAM registration logic is sound.

The conditional registration based on explicit user request, SASL mechanism, and broker patterns is appropriate. Error handling properly cleans up on failure.


391-424: LGTM: Ownership semantics and SASL background callbacks.

The ownership transfer of kafka_conf is properly handled: preserved on failure, nulled on success. SASL background callback enablement is correctly implemented with appropriate error handling.


486-507: LGTM: Error cleanup path is comprehensive.

The cleanup logic properly handles all resources: kafka_topics, kafka.rk, kafka_conf (conditional), opaque, and msk_iam. The conditional destruction of kafka_conf based on whether rd_kafka_new succeeded is correct.


549-553: LGTM: Exit cleanup is correct.

MSK IAM cleanup in the exit path is properly implemented and consistent with the init_error cleanup.

src/aws/flb_aws_msk_iam.c (4)

31-55: LGTM: Struct and include updates support new lifecycle management.

The added includes (TLS, pthread) and struct fields (is_serverless, cred_tls, provider, lock) properly support the enhanced credential provider lifecycle and concurrency control.


214-512: LGTM: Payload generation refactored with proper credential handling.

The updated signature accepts credentials directly, separating concerns. Validation, error handling, and cleanup are comprehensive. The User-Agent addition is a useful enhancement.


514-625: LGTM: OAuth callback with proper concurrency control.

The mutex protection for credential provider access is critical and correctly implemented. The lock is held only during credential operations, not payload generation. Error paths properly unlock the mutex, and credentials are destroyed after use.


740-761: LGTM: Mutex initialization and OAuth callback registration.

Mutex initialization includes proper error handling and cleanup. The opaque context setup and OAuth callback registration are correctly implemented.

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

♻️ Duplicate comments (2)
src/aws/flb_aws_msk_iam.c (1)

627-760: Verify TLS ownership to avoid potential double‑free in MSK IAM destroy.

flb_aws_msk_iam_register_oauth_cb() creates ctx->cred_tls and passes it into flb_standard_chain_provider_create(), then flb_aws_msk_iam_destroy() calls:

if (ctx->provider) {
    flb_aws_provider_destroy(ctx->provider);
}

if (ctx->cred_tls) {
    flb_tls_destroy(ctx->cred_tls);
}

Depending on how flb_standard_chain_provider_create() and flb_aws_provider_destroy() are implemented, the provider may already own and destroy cred_tls. If so, calling flb_tls_destroy(ctx->cred_tls) would double‑free the same TLS handle.

This concern mirrors an earlier review comment on the same area; please re‑confirm the ownership contract and adjust either the provider or the MSK IAM destroy path so TLS is released exactly once (e.g., by having the provider own TLS and dropping the explicit flb_tls_destroy, or vice versa).

#!/bin/bash
# Inspect AWS provider/TLS ownership to confirm whether cred_tls is freed by the provider.
rg -n "struct flb_aws_provider" src/aws include -n -C3 || true
rg -n "flb_standard_chain_provider_create" src/aws include -n -C5 || true
rg -n "cred_tls" src/aws include -n -C5 || true

Also applies to: 763-785

plugins/in_kafka/in_kafka.c (1)

271-276: Handle flb_sds_create(conf) OOM before logging/using ctx->sasl_mechanism.

flb_sds_create(conf) can return NULL, but ctx->sasl_mechanism is immediately formatted with %s (Line 275) and later used in comparisons. On OOM this is undefined behavior and can crash the process.

Consider failing init (or at least skipping SASL‑specific logic) on allocation failure:

    conf = flb_input_get_property("rdkafka.sasl.mechanism", ins);
    if (conf) {
-        ctx->sasl_mechanism = flb_sds_create(conf);
-        flb_plg_info(ins, "SASL mechanism configured: %s", ctx->sasl_mechanism);
+        ctx->sasl_mechanism = flb_sds_create(conf);
+        if (!ctx->sasl_mechanism) {
+            flb_plg_error(ins, "failed to allocate SASL mechanism string");
+            goto init_error;
+        }
+        flb_plg_info(ins, "SASL mechanism configured: %s", ctx->sasl_mechanism);
🧹 Nitpick comments (3)
plugins/in_kafka/in_kafka.c (1)

365-393: Log when aws_msk_iam was requested but brokers don’t look like MSK.

If ctx->aws_msk_iam is true but ctx->kafka.brokers is unset or doesn’t contain the expected .kafka. / .kafka-serverless. and .amazonaws.com substrings, MSK IAM is silently skipped. That can be confusing for users who set rdkafka.sasl.mechanism=aws_msk_iam but get no IAM callback.

Consider adding an explicit warning in the “else” case to make this visible:

#ifdef FLB_HAVE_AWS_MSK_IAM
-    if (ctx->aws_msk_iam && ctx->sasl_mechanism && 
-        strcasecmp(ctx->sasl_mechanism, "OAUTHBEARER") == 0) {
-        /* Check if brokers are configured for MSK IAM */
-        if (ctx->kafka.brokers && 
-            (strstr(ctx->kafka.brokers, ".kafka.") || strstr(ctx->kafka.brokers, ".kafka-serverless.")) && 
-            strstr(ctx->kafka.brokers, ".amazonaws.com")) {
+    if (ctx->aws_msk_iam && ctx->sasl_mechanism &&
+        strcasecmp(ctx->sasl_mechanism, "OAUTHBEARER") == 0) {
+        /* Check if brokers are configured for MSK IAM */
+        if (ctx->kafka.brokers &&
+            (strstr(ctx->kafka.brokers, ".kafka.") ||
+             strstr(ctx->kafka.brokers, ".kafka-serverless.")) &&
+            strstr(ctx->kafka.brokers, ".amazonaws.com")) {
             ...
-        }
+        }
+        else {
+            flb_plg_warn(ins,
+                         "aws_msk_iam requested but brokers do not look like MSK; "
+                         "skipping MSK IAM OAuth callback registration");
+        }
     }
#endif
src/aws/flb_aws_msk_iam.c (2)

42-55: Region extraction helper looks safe; add focused tests for broker variants.

extract_region_from_broker() now:

  • Strips the port,
  • Ensures .amazonaws.com lies within the host portion,
  • Safely detects .vpce.amazonaws.com with the p - broker >= 5 guard, and
  • Bounds region length to (0, 32].

The pointer arithmetic and length checks look correct, but this logic is subtle and critical. It would benefit from unit tests covering at least:

  • Standard MSK: b-1.example.c1.kafka.us-east-1.amazonaws.com[:9098]
  • Serverless: boot-xxx.c1.kafka-serverless.eu-west-1.amazonaws.com[:9098]
  • VPC endpoint: vpce-xxx.kafka.ap-southeast-1.vpce.amazonaws.com[:9098]
  • Malformed inputs: missing region, missing .amazonaws.com, or trailing junk after .amazonaws.com.

Also applies to: 137-212


514-625: OAuth refresh callback concurrency fix is sound; consider minor defensive checks.

The callback now:

  • Validates opaque and msk_iam_ctx,
  • Builds the MSK host based on config->is_serverless,
  • Locks config->lock around provider->refresh() and provider->get_credentials(),
  • Unlocks before building the payload and interacting with librdkafka, and
  • Always destroys creds on all paths.

This is a good fix for concurrent access to the shared provider. As a minor hardening step, you could also guard against a null config->provider (e.g., if future refactors allow partial teardown) before dereferencing it inside the locked section, returning token failure early if it’s missing.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2264d52 and a94d17e.

📒 Files selected for processing (2)
  • plugins/in_kafka/in_kafka.c (4 hunks)
  • src/aws/flb_aws_msk_iam.c (12 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit, the correct CMake flag for using system librdkafka is `FLB_PREFER_SYSTEM_LIB_KAFKA=ON`.

Applied to files:

  • src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-31T12:46:11.940Z
Learnt from: ThomasDevoogdt
Repo: fluent/fluent-bit PR: 9277
File: .github/workflows/pr-compile-check.yaml:147-151
Timestamp: 2025-08-31T12:46:11.940Z
Learning: In fluent-bit CMakeLists.txt, the system library preference flags are defined as FLB_PREFER_SYSTEM_LIB_ZSTD and FLB_PREFER_SYSTEM_LIB_KAFKA with the FLB_ prefix.

Applied to files:

  • src/aws/flb_aws_msk_iam.c
📚 Learning: 2025-08-29T06:25:27.250Z
Learnt from: shadowshot-x
Repo: fluent/fluent-bit PR: 10794
File: tests/internal/aws_compress.c:93-107
Timestamp: 2025-08-29T06:25:27.250Z
Learning: In Fluent Bit, ZSTD compression is enabled by default and is treated as a core dependency, not requiring conditional compilation guards like `#ifdef FLB_HAVE_ZSTD`. Unlike some other optional components such as ARROW/PARQUET (which use `#ifdef FLB_HAVE_ARROW` guards), ZSTD support is always available and doesn't need build-time conditionals. ZSTD headers are included directly without guards across multiple plugins and core components.

Applied to files:

  • src/aws/flb_aws_msk_iam.c
🧬 Code graph analysis (1)
plugins/in_kafka/in_kafka.c (4)
src/flb_input.c (1)
  • flb_input_get_property (776-780)
src/flb_sds.c (2)
  • flb_sds_create (78-90)
  • flb_sds_destroy (389-399)
src/aws/flb_aws_msk_iam.c (1)
  • flb_aws_msk_iam_register_oauth_cb (628-761)
src/flb_kafka.c (1)
  • flb_kafka_opaque_destroy (233-240)
🪛 Cppcheck (2.18.0)
src/aws/flb_aws_msk_iam.c

[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] 31-31: Include file

(missingIncludeSystem)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] 49-49: Include file

(missingIncludeSystem)


[information] 50-50: Include file

(missingIncludeSystem)


[information] 51-51: Include file

(missingIncludeSystem)


[information] 52-52: Include file

(missingIncludeSystem)


[information] 53-53: Include file

(missingIncludeSystem)


[information] 54-54: Include file

(missingIncludeSystem)


[information] 40-40: Include file

(missingIncludeSystem)


[information] 41-41: Include file

(missingIncludeSystem)


[information] 40-40: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] 53-53: Include file

(missingIncludeSystem)


[information] 54-54: Include file

(missingIncludeSystem)


[information] 52-52: Include file

(missingIncludeSystem)


[information] 53-53: Include file

(missingIncludeSystem)


[information] 54-54: Include file

(missingIncludeSystem)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] 137-137: Include file

(missingIncludeSystem)


[information] 138-138: Include file

(missingIncludeSystem)


[information] 139-139: Include file

(missingIncludeSystem)


[information] 140-140: Include file

(missingIncludeSystem)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

plugins/in_kafka/in_kafka.c

[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)


[information] Limiting analysis of branches. Use --check-level=exhaustive to analyze all branches.

(normalCheckLevelMaxBranches)


[information] Too many #ifdef configurations - cppcheck only checks 12 configurations. Use --force to check all configurations. For more details, use --enable=information.

(toomanyconfigs)

🔇 Additional comments (5)
plugins/in_kafka/in_kafka.c (4)

279-297: aws_msk_iam branch looks correct; minor allocation path nuance only.

The aws_msk_iam handling (flagging explicit use, flipping mechanism to OAUTHBEARER, defaulting security.protocol to SASL_SSL) is consistent and the second flb_sds_create("OAUTHBEARER") is now correctly NULL‑checked. The early return -1 on OAUTHBEARER SDS OOM is safe since no additional resources have been created yet beyond ctx.

No further changes needed here once the initial ctx->sasl_mechanism OOM is guarded.


352-362: SASL queue enablement for OAUTHBEARER is well‑scoped.

Enabling the SASL queue only when ctx->sasl_mechanism == "OAUTHBEARER" confines the behavior to OAuth flows and avoids surprising other mechanisms. The debug log is also useful for troubleshooting.


417-429: SASL background callbacks enablement is robustly guarded.

The background callbacks are only enabled for the OAUTHBEARER mechanism, the error path logs a clear warning and destroys the returned rd_kafka_error_t *, and the success path logs at info. This is a solid integration of librdkafka’s background refresh with good observability.


491-503: Kafka conf / MSK IAM cleanup paths are consistent with ownership semantics.

The init error path correctly distinguishes:

  • rd_kafka_destroy(ctx->kafka.rk) when rk exists (conf owned by rk),
  • rd_kafka_conf_destroy(kafka_conf) when rk was never created, and
  • flb_aws_msk_iam_destroy(ctx->msk_iam) when MSK IAM was partially initialized.

This avoids leaking kafka_conf and MSK IAM resources without risking double‑destroy.

Also applies to: 508-512

src/aws/flb_aws_msk_iam.c (1)

214-512: MSK IAM payload builder refactor is thorough and cleanup‑safe.

The refactored build_msk_iam_payload():

  • Validates config->region, host, and key fields on creds early with clear error logs.
  • Builds SigV4 credential, query, canonical request, string‑to‑sign, and signature with goto error on any failure.
  • Correctly frees all intermediate SDS values and the presigned URL in the error path (including empty_payload_hex and key), and nulls empty_payload_hex on the success path to avoid double destroy.
  • Generates a base64 payload, then converts to URL‑safe base64 and strips padding in place.

This looks correct and memory‑safe with no obvious leaks or double‑frees.

@kalavt kalavt force-pushed the feature/aws-msk-iam-clean branch from a94d17e to 095b814 Compare December 9, 2025 04:15
@kalavt kalavt changed the title aws_msk_iam,in_kafka,out_kafka: add AWS MSK IAM authentication support aws_msk_iam: add AWS MSK IAM authentication support Dec 9, 2025
@kalavt
Copy link
Author

kalavt commented Dec 9, 2025

Still failing our linter: ❌ Commit b486cb7 failed: Missing prefix in commit subject: 'aws_msk_iam,in_kafka,out_kafka: add AWS MSK IAM authentication support'

@cosmo0920 should be clean now?

@cosmo0920
Copy link
Contributor

Still failing our linter: ❌ Commit b486cb7 failed: Missing prefix in commit subject: 'aws_msk_iam,in_kafka,out_kafka: add AWS MSK IAM authentication support'

@cosmo0920 should be clean now?

Yes, it's clean now:
https://github.com/fluent/fluent-bit/actions/runs/20051852936/job/57522155166?pr=11270

@kalavt kalavt force-pushed the feature/aws-msk-iam-clean branch from 380724f to ac685a1 Compare January 28, 2026 08:38
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@examples/kafka_filter/kafka_msk_iam.conf`:
- Around line 14-16: The example comment shows mixed regions between the listed
hosts (one line mentions us-west-2 while the example broker hostnames use
us-east-1); update the comment so the region in the explanatory text matches the
broker hostnames (e.g., change "us-west-2" to "us-east-1" or make the broker
hostnames use us-west-2) to avoid implying a cross-region MSK cluster; ensure
the descriptive lines that reference example endpoints and the example broker
hostnames are consistent.

@kalavt kalavt force-pushed the feature/aws-msk-iam-clean branch from ac685a1 to af20ec9 Compare January 28, 2026 09:29
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@cmake/kafka.cmake`:
- Around line 9-34: When pkg_check_modules(SASL libsasl2) runs but SASL_FOUND is
false, still perform the fallback detection using find_library(SASL2_LIB NAMES
sasl2) and find_path(SASL2_INCLUDE NAMES sasl/sasl.h) and, if found, set
FLB_SASL_CYRUS_ENABLED ON and populate SASL_LIBRARIES and SASL_INCLUDE_DIRS
(same as the existing fallback branch); otherwise emit the existing "cyrus-sasl
not found" message. In short, move/duplicate the find_library/find_path logic
into the inner else branch after pkg_check_modules failure and ensure variables
FLB_SASL_CYRUS_ENABLED, SASL_LIBRARIES and SASL_INCLUDE_DIRS are set
consistently when the fallback succeeds.
- Around line 50-55: The CMake snippet enables FLB_KAFKA_MSK_IAM_ENABLED
regardless of whether FLB_SASL_OAUTHBEARER_ENABLED is available, causing MSK IAM
to be reported enabled on platforms (e.g. Windows without FLB_TLS) that lack
OAUTHBEARER support; change the logic so FLB_KAFKA_MSK_IAM_ENABLED is set based
on FLB_SASL_OAUTHBEARER_ENABLED (enable MSK IAM only when
FLB_SASL_OAUTHBEARER_ENABLED is ON, otherwise set it OFF) and update the
adjacent comment to reflect that MSK IAM requires OAUTHBEARER availability
rather than being "always available now".
🧹 Nitpick comments (1)
cmake/kafka.cmake (1)

59-64: Simplify the redundant WITH_SASL platform branch.

Lines 60–64 set WITH_SASL to ON in both branches. You can collapse this to a single option assignment to reduce noise.

♻️ Suggested simplification
-# On Windows, enable WITH_SASL for SSPI support (built-in, no Cyrus needed)
-# On other platforms, WITH_SASL is always ON (built-in PLAIN/SCRAM)
-if(FLB_SYSTEM_WINDOWS)
-  FLB_OPTION(WITH_SASL ON)
-else()
-  FLB_OPTION(WITH_SASL ON)
-endif()
+# On all platforms, WITH_SASL is ON (built-in PLAIN/SCRAM)
+FLB_OPTION(WITH_SASL ON)

@kalavt kalavt force-pushed the feature/aws-msk-iam-clean branch from af20ec9 to 85d6506 Compare January 28, 2026 09:56
@kalavt
Copy link
Author

kalavt commented Jan 29, 2026

@cosmo0920 & @edsiper request your look into this MR could be merge now?

@cosmo0920
Copy link
Contributor

We need to follow the linter rules:


❌ Commit b58ce6281d failed:
Missing prefix in commit subject: 'docs(kafka): improve README formatting and IAM permissions'

❌ Commit 6d0fdc7b4a failed:
Subject prefix 'aws_msk_iam:' does not match files changed.
Expected one of: aws:

❌ Commit 229e9b378c failed:
Subject prefix 'aws_msk_iam:' does not match files changed.
Expected one of: aws:

❌ Commit c439a50a03 failed:
Subject prefix 'aws_msk_iam:' does not match files changed.
Expected one of: aws:

❌ Commit ed958c9d2a failed:
Subject prefix 'aws_msk_iam:' does not match files changed.
Expected one of: aws:

❌ Commit 2b15ddd972 failed:
Subject prefix 'aws_msk_iam:' does not match files changed.
Expected one of: aws:

❌ Commit ffcdd2eaae failed:
Subject prefix 'aws_msk_iam:' does not match files changed.
Expected one of: aws:

❌ Commit cc2ee43105 failed:
Subject prefix 'aws_credentials_sts:' does not match files changed.
Expected one of: aws:

❌ Commit 758b446d70 failed:
Subject prefix 'aws_credentials_profile:' does not match files changed.
Expected one of: aws:

❌ Commit c7eb33642d failed:
Subject prefix 'aws_credentials_ec2:' does not match files changed.
Expected one of: aws:

❌ Commit 861b86f82d failed:
Subject prefix 'aws_msk_iam:' does not match files changed.
Expected one of: aws:

Your commit still are not following the linter rules.

@cosmo0920
Copy link
Contributor

cosmo0920 commented Jan 29, 2026

Plus, occurring compilation error:

13 warnings generated.
[ 39%] Building C object lib/librdkafka-2.10.1/src/CMakeFiles/rdkafka.dir/rdkafka_sasl_oauthbearer.c.o
/home/runner/work/fluent-bit/fluent-bit/lib/librdkafka-2.10.1/src/rdkafka_sasl_cyrus.c:47:10: fatal error: 'sasl/sasl.h' file not found
#include <sasl/sasl.h>
         ^~~~~~~~~~~~~
1 error generated.
[ 39%] Building C object lib/librdkafka-2.10.1/src/CMakeFiles/rdkafka.dir/rdgz.c.o
make[2]: *** [lib/librdkafka-2.10.1/src/CMakeFiles/rdkafka.dir/build.make:1185: lib/librdkafka-2.10.1/src/CMakeFiles/rdkafka.dir/rdkafka_sasl_cyrus.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:5659: lib/librdkafka-2.10.1/src/CMakeFiles/rdkafka.dir/all] Error 2
make: *** [Makefile:156: all] Error 2
Error: Process completed with exit code 2.

We need to fix it before queuing review review queue.

@patrick-stephens
Copy link
Contributor

Yeah red CI is immediate skip for review.

@kalavt kalavt force-pushed the feature/aws-msk-iam-clean branch from 85d6506 to 96aa98b Compare January 29, 2026 14:23
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@plugins/out_kafka/kafka_config.c`:
- Around line 61-86: The code replaces ctx->sasl_mechanism with
flb_sds_create("OAUTHBEARER") without checking for allocation failure; if that
returns NULL and you already destroyed the old SDS, later uses (e.g.,
strcasecmp(tmp, ...), or any access to ctx->sasl_mechanism) can crash. Fix by
creating the new SDS into a temporary variable (e.g., new_sasl =
flb_sds_create("OAUTHBEARER")), check new_sasl != NULL before calling
flb_sds_destroy(ctx->sasl_mechanism) and assigning ctx->sasl_mechanism =
new_sasl; if allocation fails, log an error via flb_plg_error and avoid
destroying the existing ctx->sasl_mechanism or falling back to a safe value (do
not set property or change ctx on failure).
🧹 Nitpick comments (1)
cmake/kafka.cmake (1)

72-76: Simplify redundant conditional.

Both branches of this if/else set WITH_SASL ON. The conditional can be removed.

♻️ Suggested simplification
 # Configure librdkafka options
 # On Windows, enable WITH_SASL for SSPI support (built-in, no Cyrus needed)
-# On other platforms, WITH_SASL is always ON (built-in PLAIN/SCRAM)
-if(FLB_SYSTEM_WINDOWS)
-  FLB_OPTION(WITH_SASL ON)
-else()
-  FLB_OPTION(WITH_SASL ON)
-endif()
+# On all platforms, WITH_SASL is always ON (built-in PLAIN/SCRAM, SSPI on Windows)
+FLB_OPTION(WITH_SASL ON)

@kalavt kalavt force-pushed the feature/aws-msk-iam-clean branch from 96aa98b to ee0bf94 Compare January 29, 2026 14:37
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cmake/kafka.cmake (1)

52-64: Align platform comment with actual gating logic.

Line 54 says Linux/macOS need SSL and Cyrus SASL, but Lines 62-63 enable OAuth Bearer unconditionally on non‑Windows. Update the comment (or gating) so they match.

✏️ Suggested comment fix
-# - Linux/macOS: Needs both SSL and Cyrus SASL
+# - Linux/macOS: Built-in SASL/OAUTHBEARER (no Cyrus SASL required)
🤖 Fix all issues with AI agents
In `@plugins/in_kafka/in_kafka.c`:
- Around line 619-626: Update the help text for the FLB_CONFIG_MAP_STR entry for
"aws_region" in struct flb_in_kafka_config to reflect that region auto-detection
covers both standard MSK endpoints and the serverless/VPC endpoint formats
parsed by the broker parsing code and examples; change the sentence that
currently limits auto-detection to "standard MSK endpoints" to instead state it
also supports serverless/VPC endpoint formats but still requires explicitly
setting aws_region for custom DNS names (e.g., PrivateLink).

In `@plugins/out_kafka/kafka_config.c`:
- Around line 61-95: The code currently logs and swaps ctx->sasl_mechanism
without checking flb_sds_create() return and can leak the old SDS if new_sasl
allocation fails; update the block around
flb_output_get_property("rdkafka.sasl.mechanism", ins) to check the result of
flb_sds_create(tmp) before calling flb_plg_info() or assigning
ctx->sasl_mechanism, return NULL with an error log if allocation fails, and when
creating new_sasl for the aws_msk_iam branch, check new_sasl for NULL and on
failure free the previously allocated ctx->sasl_mechanism (if non-NULL) and ctx,
log the error via flb_plg_error, and return NULL (avoid calling flb_sds_destroy
on a NULL pointer); also ensure any flb_output_set_property or flb_sds_destroy
swaps happen only after successful allocation.

In `@src/aws/flb_aws_msk_iam.c`:
- Around line 720-754: The error paths and flb_aws_msk_iam_destroy currently
call flb_tls_destroy(ctx->cred_tls) even when ctx->provider exists, which leads
to double-free because flb_aws_provider_destroy() already destroys
provider->cred_tls; update the cleanup logic to treat the provider as the owner:
when calling flb_aws_provider_destroy(ctx->provider) do not call
flb_tls_destroy(ctx->cred_tls) afterwards (or set ctx->cred_tls = NULL
immediately after destroying the provider), and in all other error/teardown
paths only call flb_tls_destroy(ctx->cred_tls) if ctx->provider is NULL (or
ctx->cred_tls != NULL). Ensure this change is applied to the error branches
around pthread_mutex_init, the provider init block, and in
flb_aws_msk_iam_destroy.

Implements the core AWS MSK IAM authentication mechanism including:
- OAuth callback mechanism for token generation and refresh
- Token lifecycle management and expiration handling
- Integration with AWS credential providers
- SASL/OAUTHBEARER protocol support for librdkafka

This provides the foundation for AWS MSK IAM authentication support
in Fluent Bit's Kafka plugins.

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Enhance EC2 credential provider to better support MSK IAM authentication
by improving credential refresh behavior and lifecycle management.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: improve credential refresh for MSK IAM

Enhance profile credential provider to better support MSK IAM authentication
by improving credential refresh behavior and lifecycle management.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: improve credential refresh for MSK IAM

Enhance STS credential provider to better support MSK IAM authentication
by improving credential refresh behavior and lifecycle management.

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Update Kafka core functionality to support AWS MSK IAM authentication,
including necessary configuration and lifecycle improvements.

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Enable AWS MSK IAM authentication in the Kafka input plugin:
- Add AWS MSK IAM configuration options
- Integrate with OAuth callback mechanism
- Support automatic credential refresh
- Add TLS configuration for secure connections

Signed-off-by: Arbin <arbin.cheng@coins.ph>

in_kafka: add NULL checks for SASL mechanism allocation

Add NULL checks after flb_sds_create() when allocating SASL mechanism strings to prevent crashes on allocation failure. This covers both the initial SASL mechanism configuration and the AWS MSK IAM OAUTHBEARER conversion.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

in_kafka: add aws_region parameter for MSK IAM auth

- Add aws_region configuration field
- Remove hostname pattern check for MSK IAM registration
- Pass aws_region to MSK IAM registration function
- Support PrivateLink and custom DNS scenarios

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Enable AWS MSK IAM authentication in the Kafka output plugin:
- Add AWS MSK IAM configuration options
- Integrate with OAuth callback mechanism
- Support automatic credential refresh
- Add TLS configuration for secure connections

Signed-off-by: Arbin <arbin.cheng@coins.ph>

out_kafka: add aws_region parameter for MSK IAM auth

- Add aws_region configuration field
- Remove hostname pattern check for MSK IAM registration
- Pass aws_region to MSK IAM registration function
- Support PrivateLink and custom DNS scenarios

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Replace pointer comparison with offset comparison in VPC endpoint detection to improve safety and clarity. Changes 'p >= broker + 5' to 'p - broker >= 5' to properly check offset within string bounds before accessing p - 5.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: use actual broker hostname for signing

- Remove is_serverless detection logic
- Use actual broker hostname instead of constructed host
- Fix memory leak in error cleanup path
- Add broker_host field to store actual hostname
- Update function signature to accept optional region parameter

This aligns with official AWS MSK IAM signers behavior where
the signature Host must match the TLS SNI/actual connection host.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: optimize by removing redundant service_host member

Remove service_host from struct flb_aws_msk_iam and construct it
dynamically in OAuth callback. This eliminates data redundancy since
service_host can be derived from region.

Also clean up unused struct flb_msk_iam_cb definition and use
flb_sds_len() instead of strlen() for consistency.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: fix buffer overread and improve code robustness

Fix critical security issue and improve code quality:

1. Fix potential buffer overread in extract_region_from_broker():
   - Changed iteration from 'start = end - 1' to 'start = end'
   - Check boundary before reading: while (start > broker && *(start - 1) != '.')
   - Eliminates undefined behavior when broker string is malformed

2. Avoid implicit NUL-termination in base64 URL encoding:
   - Replace pointer-based iteration with length-based for loop
   - Remove dependency on flb_base64_encode() NUL-termination behavior
   - Remove unused variable declaration

These changes address security concerns and improve code maintainability.

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: Improve AWS MSK IAM authentication error logging

Signed-off-by: Arbin <arbin.cheng@coins.ph>

aws: Add error checking for pthread mutex operations in MSK IAM

Signed-off-by: Arbin <arbin.cheng@coins.ph>
- Add comprehensive MSK IAM configuration examples
- Cover Standard MSK, Serverless, PrivateLink scenarios
- Document aws_region parameter usage
- Add troubleshooting guide and IAM permissions
- Update README with detailed usage instructions

Signed-off-by: Arbin <arbin.cheng@coins.ph>

examples: improve README formatting and IAM permissions

- Add blank lines for better readability
- Fix markdown table formatting
- Correct IAM permission ARNs to include cluster UUID
- Add note about finding cluster UUID via AWS Console/CLI
- Improve section spacing and code block formatting

Signed-off-by: Arbin <arbin.cheng@coins.ph>
- Recognize librdkafka built-in SASL support (PLAIN/SCRAM/OAUTHBEARER)
- Mark cyrus-sasl as optional (only for Kerberos/GSSAPI)
- Enable AWS MSK IAM on all platforms with SSL support
- Simplify dependency logic and improve configurability

Signed-off-by: Arbin <arbin.cheng@coins.ph>
@kalavt kalavt force-pushed the feature/aws-msk-iam-clean branch from ee0bf94 to ceb77a0 Compare January 29, 2026 14:59
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@cmake/kafka.cmake`:
- Around line 70-75: The CMake option FORCES SSL unconditionally via
FLB_OPTION(WITH_SSL On); change it to honor Fluent Bit TLS by wiring WITH_SSL to
the FLB_TLS variable: replace the hardcoded "On" with the FLB_TLS variable so
WITH_SSL is enabled only when FLB_TLS is true (but still keep
WITH_SASL_OAUTHBEARER handling since SASL SCRAM/OAUTHBEARER require OpenSSL).
Update the FLB_OPTION call that sets WITH_SSL and ensure any logic referencing
WITH_SASL_OAUTHBEARER remains consistent with the new WITH_SSL behavior.

In `@plugins/out_kafka/kafka_config.c`:
- Around line 315-316: Initialize the mk_list ctx->topics immediately after
allocating/setting up the kafka context so it's always valid even if
flb_out_kafka_destroy() runs early (e.g., during opaque/MSK IAM failures);
locate the allocation of the context structure (the variable referenced as ctx)
and call mk_list_init(&ctx->topics) there (before any code paths that can call
flb_out_kafka_destroy), ensuring every code path that may free/cleanup the
context sees an initialized list.

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
plugins/out_kafka/kafka_config.c (1)

342-342: Potential NULL pointer in format string.

tmp from flb_output_get_property("topics", ins) at line 319 may be NULL if the property is not set. Passing NULL to %s in flb_plg_info is undefined behavior on some platforms.

🛠️ Proposed fix
-    flb_plg_info(ctx->ins, "brokers='%s' topics='%s'", ctx->kafka.brokers, tmp);
+    flb_plg_info(ctx->ins, "brokers='%s' topics='%s'", ctx->kafka.brokers, tmp ? tmp : FLB_KAFKA_TOPIC);
🤖 Fix all issues with AI agents
In `@cmake/kafka.cmake`:
- Around line 49-75: The OAUTHBEARER flag is currently enabled unconditionally
on non-Windows platforms, which can conflict with librdkafka's requirement that
WITH_SASL_OAUTHBEARER only be enabled when TLS is enabled; update the logic that
sets FLB_SASL_OAUTHBEARER_ENABLED so it is true only when FLB_TLS is true (i.e.,
gate it on FLB_TLS for both Windows and non-Windows flows), ensure
FLB_KAFKA_MSK_IAM_ENABLED continues to follow FLB_SASL_OAUTHBEARER_ENABLED, and
keep the FLB_OPTION(WITH_SASL_OAUTHBEARER ${FLB_SASL_OAUTHBEARER_ENABLED}) usage
unchanged.
🧹 Nitpick comments (1)
plugins/out_kafka/kafka_config.c (1)

255-261: Consider failing on sasl.oauthbearer.config set error.

When rd_kafka_conf_set() fails for sasl.oauthbearer.config, the code logs an error but continues execution. While this is a placeholder value, a configuration failure could indicate a deeper issue that might cause confusing problems later.

♻️ Proposed change to abort on failure
             res = rd_kafka_conf_set(ctx->conf, "sasl.oauthbearer.config",
                                     "principal=admin", errstr, sizeof(errstr));
             if (res != RD_KAFKA_CONF_OK) {
                 flb_plg_error(ctx->ins,
                              "failed to set sasl.oauthbearer.config: %s",
                              errstr);
+                flb_out_kafka_destroy(ctx);
+                return NULL;
             }

Signed-off-by: Arbin <arbin.cheng@coins.ph>
Signed-off-by: Arbin <arbin.cheng@coins.ph>
@kalavt kalavt force-pushed the feature/aws-msk-iam-clean branch from 8ea3227 to de92624 Compare January 29, 2026 16:16
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

docs-required ok-package-test Run PR packaging tests

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants