diff --git a/.github/workflows/README.md b/.github/workflows/README.md
deleted file mode 100644
index 5362de7825c..00000000000
--- a/.github/workflows/README.md
+++ /dev/null
@@ -1,227 +0,0 @@
-# Available workflows
-
-| Workflow file | Description | Run event |
-| :---------------------------------------------------- | ------------------------ | ------------------------------------------------- |
-| [build-master-packages](./build-master-packages.yaml) | Builds packages using `master` for certain targets | on new commit/push on master / manual |
-| [cron-unstable-build](./cron-unstable-build.yaml) | Automated nightly builds of each supported branch | scheduled/manual trigger |
-| [master-integration-test](./master-integration-test.yaml) | Runs the integration testing suite on master | on new commit/push on master |
-| [staging-build](./staging-build.yaml) | Builds the distro packages and docker images from a tagged release into staging (S3 and GHCR) | on new release/tag |
-| [staging-test](./staging-test.yaml) | Tests the staging distro packages and docker images | manually or when `staging-build` completes successfully |
-| [staging-release](./staging-release.yaml) | Publishes the docker images/manifest on docker.io/fluent/ and the distro packages | manual approval |
-| [pr-closed-docker](./pr-closed-docker.yaml) | Removes docker images for a PR on docker.io/fluentbitdev/ | on PR closed |
-| [pr-compile-check](./pr-compile-check.yaml) | Runs some compilation sanity checks on a PR | on new commit/push on PR(s) |
-| [pr-integration-test](./pr-integration-test.yaml) | Runs the integration testing suite on a PR branch | PR opened / label 'ok-to-test' added / on new commit/push on PR(s) |
-| [pr-package-tests](./pr-package-tests.yaml) | Runs the package build for all targets on a PR branch | PR opened / label 'ok-package-test' added / on new commit/push on PR(s) |
-| [pr-perf-test](./pr-perf-test.yaml) | Runs the performance testing suite on a PR branch | PR opened / label 'ok-to-performance-test' added / on new commit/push on PR(s) |
-| [pr-stale](./pr-stale.yaml) | Closes stale PR(s) with no activity in 30 days | scheduled daily 01:30 AM UTC |
-| [unit-tests](./unit-tests.yaml) | Runs the unit test suite on master push or new PR | PR opened, merge into master branch |
-
-## Available labels
-
-| Label name | Description |
-| :----------|-------------|
-| docs-required | default tag used to request documentation, has to be removed before merge |
-| ok-package-test | build for all possible targets |
-| ok-to-test | run all integration tests |
-| ok-to-merge | run mergebot and merge (rebase) the current PR |
-| ci/integration-docker-ok | integration test is able to build the docker image |
-| ci/integration-gcp-ok | integration test is able to run on GCP |
-| long-term | long-running pull request, don't close |
-| exempt-stale | prevent stale checks running |
-
-## Required secrets
-
-* AWS_ACCESS_KEY_ID
-* AWS_SECRET_ACCESS_KEY
-* AWS_S3_BUCKET_STAGING
-* AWS_S3_BUCKET_RELEASE
-* GPG_PRIVATE_KEY
-* GPG_PRIVATE_KEY_PASSPHRASE
-
-These are only required for Cosign signing of the container images and will be skipped if not present:
-
-* COSIGN_PRIVATE_KEY
-* COSIGN_PRIVATE_KEY_PASSWORD - only required if the key has a passphrase
-
-## Environments
-
-These environments are used:
-
-* `unstable` for all nightly builds
-* `staging` for all staging builds
-* `release` for running the promotion of staging to release, this can have additional approvals added
-
-If an environment is not present it will be created, but it may not then have the appropriate permissions.
-
-## Pushing to Github Container Registry
-
-Github actions require specific permissions to push to packages, see:
-This is not granted automatically via permission inheritance or similar.
-
-1. Verify you can push with a simple test, e.g. `docker pull alpine && docker tag alpine:latest ghcr.io//fluent-bit:latest && docker push ghcr.io//fluent-bit:latest`
-2. Once this works locally, you should then be able to set up action permissions for the repository. If a package already exists there is no need to push a test one.
-3. Go to `https://github.com/users/USER/packages/container/fluent-bit/settings` and ensure the repository has `Write` access.
-
-## Version-specific targets
-
-Each major version (e.g. 1.8 & 1.9) supports different build targets, e.g. 1.9 includes a CentOS 8 target and 1.8 has some other legacy targets.
-
-This is all handled by the [build matrix generation composite action](../actions/generate-package-build-matrix/action.yaml).
-This uses a [JSON file](../../packaging/build-config.json) to specify the targets, so ensure this file is kept up to date.
-The build matrix is then fed into the [reusable job](./call-build-linux-packages.yaml) that builds packages, which then fires for the appropriate targets.
-The reusable job is used for all package builds including unstable/nightly and the PR `ok-package-test` triggered ones.
-
-## Releases
-
-The process at a high level is as follows:
-
-1. A tag is created with the `v` prefix.
-2. The [deploy to staging](https://github.com/fluent/fluent-bit/actions/workflows/staging-build.yaml) workflow runs.
-3. The [test staging](https://github.com/fluent/fluent-bit/actions/workflows/staging-test.yaml) workflow runs.
-4. Manually initiate the [release from staging](https://github.com/fluent/fluent-bit/actions/workflows/staging-release.yaml) workflow.
-5. A PR is auto-created to increment the Fluent Bit minor version using the [`update_version.sh`](../../update_version.sh) script.
-6. Create PRs for doc updates - Windows & container versions. (WIP to automate.)
-
-Breaking the steps down:
-
-### Deploy to staging and test
-
-This should run automatically when a tag is created matching the `v*` pattern.
-It currently copes with 1.8+ builds, although automation is only exercised for 1.9+ releases.
-
-Once this completes successfully the staging tests should also run automatically.
-
-![Workflows for staging and test example](./resources/auto-build-test-workflow.png "Example of workflows for build and test")
-
-If both complete successfully then we are good to go.
-
-Occasional failures are seen with package builds not downloading dependencies (CentOS 7 in particular seems bad for this).
-A re-run of failed jobs should resolve this.
-
-The workflow builds all Linux, macOS and Windows targets to a staging S3 bucket plus the container images to ghcr.io.
-
-### Release from staging workflow
-
-This is a manually initiated workflow; the intention is that multiple staging builds can happen but only one is released.
-Note that currently we do not support parallel staging builds of different versions, e.g. master and 1.9 branches.
-**We can only release the previous staging build and there is a check to confirm the version.**
-
-Ensure the AppVeyor build for the tag has completed successfully as well.
-
-To trigger:
-
-All this job does is copy the various artefacts from staging locations to release ones; it does not rebuild them.
-
-![Workflow for release example](./resources/release-from-staging-workflow-incorrect-version.png "Example of workflow for release")
-
-In this example you can see the wrong `version` was used: it must be provided without the `v` prefix (it is used for the container tag, etc.), so the workflow fails.
-
-![Workflow for release failure example](./resources/release-version-failure.png "Example of failing workflow for release")
-
-Make sure to provide the version without the `v` prefix.
-
-![Workflow for release example](./resources/release-from-staging-workflow.png "Example of successful workflow for release")
-
-Once this workflow is initiated it also needs to be approved by the designated "release team", otherwise it will not progress.
-
-![Release approval example](./resources/release-approval.png "Release approval example")
-
-They will be notified for approval by Github.
-Unfortunately each job in the sequence has to be approved individually rather than with a single approval for the whole workflow, although this can be useful for checking between jobs.
-
-![Release approval per-job required](./resources/release-approval-per-job.png "Release approval per-job required")
-
-This is quite useful to delay the final smoke test of packages until after the manual steps are done, as it will then verify them all for you.
-
-#### Packages server sync
-
-The workflow above ensures all release artefacts are pushed to the appropriate container registry and S3 bucket for official releases.
-The packages server then syncs from this bucket hourly to pull down and serve the new packages, so there may be a delay of up to an hour before it serves the new versions.
-See for details of the dedicated packages server.
-
-The main reason for a separate server is to accurately track download statistics.
-Container images are handled by ghcr.io and Docker Hub, not this server.
-
-#### Transient container publishing failures
-
-The parallel publishing of multiple container tags for the same image occasionally fails with network errors, more often for ghcr.io than Docker Hub.
-This can be resolved by just re-running the failed jobs.
-
-#### Windows builds from AppVeyor
-
-This is automated, however confirm that the actual build is successful for the tag:
-If not then ask a maintainer to retrigger.
-
-It can take a while to find the build for the specific tag.
-
-#### ARM builds
-
-All builds are carried out in containers and intended to be run on a valid Ubuntu host to match a standard Github Actions runner.
-This can take some time for ARM as we have to emulate the architecture via QEMU.
-
- introduces support to run ARM builds on a dedicated [actuated.dev](https://docs.actuated.dev/) ephemeral VM runner.
-A self-hosted ARM runner is sponsored by [Equinix Metal](https://deploy.equinix.com/metal/) and provisioned for this per the [documentation](https://docs.actuated.dev/provision-server/).
-For fork workflows, this should all be skipped and run on a normal Ubuntu Github-hosted runner, but be aware this may take some time.
-
-### Manual release
-
-As long as it is built to staging we can manually publish packages as well via the script here:
-
-Containers can be promoted manually too; ensure all architectures and signatures are included.
-
-### Create PRs
-
-Once releases are published we need to provide PRs for the following documentation updates:
-
-1. Windows checksums:
-2. Container versions:
-
- is the repo for updates to docs.
-
-Take the checksums from the release process above; the AppVeyor stage provides them all and we attempt to auto-create the PR with them.
-
-## Unstable/nightly builds
-
-These happen every 24 hours and [reuse the same workflow](./cron-unstable-build.yaml) as the staging build, so they are identical except they skip the upload to S3 step.
-This means all targets are built nightly for the `master` and `2.1` branches, including container images and Linux, macOS and Windows packages.
-
-The container images are available here (the tag refers to the branch):
-
-* [ghcr.io/fluent/fluent-bit/unstable:2.1](ghcr.io/fluent/fluent-bit/unstable:2.1)
-* [ghcr.io/fluent/fluent-bit/unstable:master](ghcr.io/fluent/fluent-bit/unstable:master)
-* [ghcr.io/fluent/fluent-bit/unstable:windows-2022-2.1](ghcr.io/fluent/fluent-bit/unstable:windows-2022-2.1)
-* [ghcr.io/fluent/fluent-bit/unstable:windows-2022-master](ghcr.io/fluent/fluent-bit/unstable:windows-2022-master)
-
-The Linux, macOS and Windows packages are available to download from the specific workflow run.
-
-## Integration tests
-
-On every commit to `master` we rebuild the [packages](./build-master-packages.yaml) and [container images](./master-integration-test.yaml).
-The container images are then used to [run the integration tests](./master-integration-test.yaml) from the repository.
-The container images are available as:
-
-* [ghcr.io/fluent/fluent-bit/master:x86_64](ghcr.io/fluent/fluent-bit/master:x86_64)
-
-## PR checks
-
-Various workflows are run for PRs automatically:
-
-* [Unit tests](./unit-tests.yaml)
-* [Compile checks on CentOS 7 compilers](./pr-compile-check.yaml)
-* [Linting](./pr-lint.yaml)
-* [Windows builds](./pr-windows-build.yaml)
-* [Fuzzing](./pr-fuzz.yaml)
-* [Container image builds](./pr-image-tests.yaml)
-* [Install script checks](./pr-install-script.yaml)
-
-We try to guard these to only trigger when relevant files are changed, to reduce any delays or resources used.
-**All should be able to be triggered manually for explicit branches as well.**
-
-The following workflows can be triggered manually for specific PRs too:
-
-* [Integration tests](./pr-integration-test.yaml): Build a container image and run the integration tests as per commits to `master`.
-* [Performance tests](./pr-perf-test.yaml): WIP to trigger a performance test on a dedicated VM and collect the results as a PR comment.
-* [Full package build](./pr-package-tests.yaml): builds all Linux, macOS and Windows packages as well as container images.
-
-To trigger these, apply the relevant label.
diff --git a/.github/workflows/build-branch-containers.yaml b/.github/workflows/build-branch-containers.yaml
deleted file mode 100644
index ea5d3c6cd3d..00000000000
--- a/.github/workflows/build-branch-containers.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-name: Build containers for a specific branch of 1.9+
-on:
-  workflow_dispatch:
-    inputs:
-      version:
-        description: Version of Fluent Bit to build, commit, branch, etc. The container image will be ghcr.io/fluent/fluent-bit/test/.
- required: true - default: master -jobs: - build-branch-containers: - uses: ./.github/workflows/call-build-images.yaml - with: - version: ${{ github.event.inputs.version }} - ref: ${{ github.event.inputs.version }} - registry: ghcr.io - username: ${{ github.actor }} - image: ${{ github.repository }}/test/${{ github.event.inputs.version }} - unstable: ${{ github.event.inputs.version }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/build-legacy-branch.yaml b/.github/workflows/build-legacy-branch.yaml deleted file mode 100644 index e31d3b85edf..00000000000 --- a/.github/workflows/build-legacy-branch.yaml +++ /dev/null @@ -1,156 +0,0 @@ -name: Build containers for a specific branch of 1.8 -on: - workflow_dispatch: - inputs: - ref: - description: The code to build so a commit, branch, etc. The container image will be ghcr.io/fluent/fluent-bit/test/. - required: true - default: "1.8" - -env: - IMAGE_NAME: ghcr.io/${{ github.repository }}/test/${{ github.event.inputs.ref }} - -jobs: - build-legacy-branch-meta: - runs-on: ubuntu-latest - permissions: - contents: read - steps: - - name: Checkout code - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Check this is a 1.8 type build - run: | - if [[ -f "dockerfiles/Dockerfile" ]]; then - echo "Invalid branch as contains Dockerfile: ${{ inputs.ref }}" - exit 1 - fi - shell: bash - - # For 1.8 builds it is a little more complex so we have this build matrix to handle it. - # This creates separate images for each architecture. - # The later step then creates a multi-arch manifest for all of these. 
- build-legacy-images-matrix: - name: Build single arch legacy images - runs-on: ubuntu-latest - needs: - - build-legacy-branch-meta - strategy: - fail-fast: false - matrix: - arch: [amd64, arm64, arm/v7] - include: - - arch: amd64 - suffix: x86_64 - - arch: arm/v7 - suffix: arm32v7 - - arch: arm64 - suffix: arm64v8 - permissions: - contents: read - packages: write - steps: - - name: Checkout the docker build repo for legacy builds - uses: actions/checkout@v5 - with: - repository: fluent/fluent-bit-docker-image - ref: "1.8" # Fixed to this branch - - - name: Set up QEMU - uses: docker/setup-qemu-action@v3 - - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v3 - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ghcr.io - username: ${{ github.actor }} - password: ${{ secrets.GITHUB_TOKEN }} - - - id: debug-meta - uses: docker/metadata-action@v5 - with: - images: ${{ env.IMAGE_NAME }} - tags: | - raw,${{ inputs.ref }}-debug - - - name: Build the legacy x86_64 debug image - if: matrix.arch == 'amd64' - uses: docker/build-push-action@v6 - with: - file: ./Dockerfile.x86_64.debug - context: . - tags: ${{ steps.debug-meta.outputs.tags }} - labels: ${{ steps.debug-meta.outputs.labels }} - provenance: false - platforms: linux/amd64 - push: true - load: false - build-args: | - FLB_TARBALL=https://github.com/fluent/fluent-bit/tarball/${{ inputs.ref }} - - - name: Extract metadata from Github - id: meta - uses: docker/metadata-action@v5 - with: - images: ${{ env.IMAGE_NAME }} - tags: | - raw,${{ matrix.suffix }}-${{ inputs.ref }} - - - name: Build the legacy ${{ matrix.arch }} image - uses: docker/build-push-action@v6 - with: - file: ./Dockerfile.${{ matrix.suffix }} - context: . 
- tags: ${{ steps.meta.outputs.tags }} - labels: ${{ steps.meta.outputs.labels }} - platforms: linux/${{ matrix.arch }} - provenance: false - push: true - load: false - build-args: | - FLB_TARBALL=https://github.com/fluent/fluent-bit/tarball/${{ inputs.ref }} - - # Create a multi-arch manifest for the separate 1.8 images. - build-legacy-image-manifests: - name: Deploy multi-arch container image manifests - permissions: - contents: read - packages: write - runs-on: ubuntu-latest - needs: - - build-legacy-branch-meta - - build-legacy-images-matrix - steps: - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v3 - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ghcr.io - username: ${{ github.actor }} - password: ${{ secrets.GITHUB_TOKEN }} - - - name: Pull all the images - # Use platform to trigger warnings on invalid image metadata - run: | - docker pull --platform=linux/amd64 ${{ env.IMAGE_NAME }}:x86_64-${{ inputs.ref }} - docker pull --platform=linux/arm64 ${{ env.IMAGE_NAME }}:arm64v8-${{ inputs.ref }} - docker pull --platform=linux/arm/v7 ${{ env.IMAGE_NAME }}:arm32v7-${{ inputs.ref }} - shell: bash - - - name: Create manifests for images - run: | - docker manifest create ${{ env.IMAGE_NAME }}:${{ inputs.ref }} \ - --amend ${{ env.IMAGE_NAME }}:x86_64-${{ inputs.ref }} \ - --amend ${{ env.IMAGE_NAME }}:arm64v8-${{ inputs.ref }} \ - --amend ${{ env.IMAGE_NAME }}:arm32v7-${{ inputs.ref }} - docker manifest push --purge ${{ env.IMAGE_NAME }}:${{ inputs.ref }} - env: - DOCKER_CLI_EXPERIMENTAL: enabled - shell: bash diff --git a/.github/workflows/build-master-packages.yaml b/.github/workflows/build-master-packages.yaml deleted file mode 100644 index 5698abfe5b4..00000000000 --- a/.github/workflows/build-master-packages.yaml +++ /dev/null @@ -1,51 +0,0 @@ -on: - push: - branches: - - master - workflow_dispatch: - inputs: - version: - description: Version of Fluent Bit to build - required: false - default: 
master - target: - description: Only build a specific target, intended for debug/test builds only. - required: false - default: "" - -name: Build packages for master -jobs: - master-build-generate-matrix: - name: Staging build matrix - runs-on: ubuntu-latest - outputs: - build-matrix: ${{ steps.set-matrix.outputs.matrix }} - steps: - # Set up the list of target to build so we can pass the JSON to the reusable job - - id: set-matrix - run: | - matrix=$(( - echo '{ "distro" : [ "debian/bullseye", "ubuntu/20.04", "ubuntu/22.04", "centos/7" ]}' - ) | jq -c .) - if [ -n "${{ github.event.inputs.target || '' }}" ]; then - echo "Overriding matrix to build: ${{ github.event.inputs.target }}" - matrix=$(( - echo '{ "distro" : [' - echo '"${{ github.event.inputs.target }}"' - echo ']}' - ) | jq -c .) - fi - echo $matrix - echo $matrix| jq . - echo "matrix=$matrix" >> $GITHUB_OUTPUT - shell: bash - - master-build-packages: - needs: master-build-generate-matrix - uses: ./.github/workflows/call-build-linux-packages.yaml - with: - version: master - ref: master - build_matrix: ${{ needs.master-build-generate-matrix.outputs.build-matrix }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/call-build-images.yaml b/.github/workflows/call-build-images.yaml deleted file mode 100644 index 0aa29d9b2e7..00000000000 --- a/.github/workflows/call-build-images.yaml +++ /dev/null @@ -1,455 +0,0 @@ ---- -name: Reusable workflow to build container images - -on: - workflow_call: - inputs: - version: - description: The version of Fluent Bit to create. - type: string - required: true - ref: - description: The commit, tag or branch of Fluent Bit to checkout for building that creates the version above. - type: string - required: true - registry: - description: The registry to push container images to. - type: string - required: false - default: ghcr.io - username: - description: The username for the registry. 
- type: string - required: true - image: - description: The name of the container image to push to the registry. - type: string - required: true - environment: - description: The Github environment to run this workflow on. - type: string - required: false - unstable: - description: Optionally add metadata to build to indicate an unstable build, set to the contents you want to add. - type: string - required: false - default: "" - push: - description: Optionally push the images to the registry, defaults to true but for forks we cannot do this in PRs. - type: boolean - required: false - default: true - secrets: - token: - description: The Github token or similar to authenticate with for the registry. - required: true - cosign_private_key: - description: The optional Cosign key to use for signing the images. - required: false - cosign_private_key_password: - description: If the Cosign key requires a password then specify here, otherwise not required. - required: false -jobs: - call-build-images-meta: - name: Extract any supporting metadata - outputs: - major-version: ${{ steps.determine-major-version.outputs.replaced }} - runs-on: ubuntu-latest - environment: ${{ inputs.environment }} - permissions: - contents: read - steps: - - name: Checkout code - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - # For main branch/releases we want to tag with the major version. - # E.g. if we build version 1.9.2 we want to tag with 1.9.2 and 1.9. 
- - name: Determine major version tag - id: determine-major-version - uses: frabert/replace-string-action@v2.5 - with: - pattern: '^(\d+\.\d+).*$' - string: ${{ inputs.version }} - replace-with: "$1" - flags: "g" - - # Taken from https://docs.docker.com/build/ci/github-actions/multi-platform/#distribute-build-across-multiple-runners - # We split this out to make it easier to restart just one of them if it fails and do all in parallel - call-build-single-arch-container-images: - # Allow us to continue to create a manifest if we want - continue-on-error: true - permissions: - contents: read - packages: write - strategy: - fail-fast: false - matrix: - platform: - - amd64 - - arm64 - - arm/v7 - target: - - production - - debug - name: ${{ matrix.platform }}/${{ matrix.target }} container image build - # Use GitHub Actions ARM hosted runners - runs-on: ${{ (contains(matrix.platform, 'arm') && 'ubuntu-22.04-arm') || 'ubuntu-latest' }} - steps: - - name: Checkout code - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - token: ${{ secrets.token }} - - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v3 - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ github.actor }} - password: ${{ secrets.token }} - - - name: Build and push by digest the standard ${{ matrix.target }} image - id: build - uses: docker/build-push-action@v6 - with: - # Use path context rather than Git context as we want local files - file: ./dockerfiles/Dockerfile - context: . 
- target: ${{ matrix.target }} - outputs: type=image,name=${{ inputs.registry }}/${{ inputs.image }},push-by-digest=true,name-canonical=true,push=${{ inputs.push }} - platforms: linux/${{ matrix.platform }} - # Must be disabled to provide legacy format images from the registry - provenance: false - # This is configured in outputs above - push: ${{ inputs.push }} - load: false - build-args: | - FLB_NIGHTLY_BUILD=${{ inputs.unstable }} - RELEASE_VERSION=${{ inputs.version }} - WAMR_BUILD_TARGET=${{ (contains(matrix.platform, 'arm/v7') && 'ARMV7') || '' }} - - - name: Export ${{ matrix.target }} digest - run: | - mkdir -p /tmp/digests - digest="${{ steps.build.outputs.digest }}" - touch "/tmp/digests/${digest#sha256:}" - shell: bash - - - name: Upload ${{ matrix.target }} digest - uses: actions/upload-artifact@v5 - with: - name: ${{ matrix.target }}-digests-${{ (contains(matrix.platform, 'arm/v7') && 'arm-v7') || matrix.platform }} - path: /tmp/digests/* - if-no-files-found: error - retention-days: 1 - - # Take the digests and produce a multi-arch manifest from them. 
- call-build-container-image-manifests: - if: inputs.push - permissions: - contents: read - packages: write - name: Upload multi-arch container image manifests - runs-on: ubuntu-latest - needs: - - call-build-images-meta - - call-build-single-arch-container-images - outputs: - version: ${{ steps.meta.outputs.version }} - steps: - - name: Extract metadata from Github - id: meta - uses: docker/metadata-action@v5 - with: - images: ${{ inputs.registry }}/${{ inputs.image }} - tags: | - raw,${{ inputs.version }} - raw,${{ needs.call-build-images-meta.outputs.major-version }} - raw,latest - - - name: Download production digests - uses: actions/download-artifact@v6 - with: - pattern: production-digests-* - path: /tmp/production-digests - merge-multiple: true - - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v3 - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ github.actor }} - password: ${{ secrets.token }} - - - name: Create production manifest - run: | - docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \ - $(printf '${{ inputs.registry }}/${{ inputs.image }}@sha256:%s ' *) - shell: bash - working-directory: /tmp/production-digests - - - name: Inspect image - run: | - docker buildx imagetools inspect ${{ inputs.registry }}/${{ inputs.image }}:${{ steps.meta.outputs.version }} - shell: bash - - # Take the digests and produce a multi-arch manifest from them. 
- call-build-debug-container-image-manifests: - if: inputs.push - permissions: - contents: read - packages: write - name: Upload debug multi-arch container image manifests - runs-on: ubuntu-latest - needs: - - call-build-images-meta - - call-build-single-arch-container-images - outputs: - version: ${{ steps.debug-meta.outputs.version }} - steps: - - id: debug-meta - uses: docker/metadata-action@v5 - with: - images: ${{ inputs.registry }}/${{ inputs.image }} - tags: | - raw,${{ inputs.version }}-debug - raw,${{ needs.call-build-images-meta.outputs.major-version }}-debug - raw,latest-debug - - - name: Download debug digests - uses: actions/download-artifact@v6 - with: - pattern: debug-digests-* - path: /tmp/debug-digests - merge-multiple: true - - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v3 - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ github.actor }} - password: ${{ secrets.token }} - - - name: Create debug manifest - run: | - docker buildx imagetools create $DOCKER_PUSH_EXTRA_FLAGS $(jq -cr '.tags | map("-t " + .) 
| join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \ - $(printf '${{ inputs.registry }}/${{ inputs.image }}@sha256:%s ' *) - shell: bash - working-directory: /tmp/debug-digests - - - name: Inspect image - run: | - docker buildx imagetools inspect ${{ inputs.registry }}/${{ inputs.image }}:${{ steps.debug-meta.outputs.version }} - shell: bash - - call-build-images-generate-schema: - if: inputs.push - needs: - - call-build-images-meta - - call-build-container-image-manifests - runs-on: ubuntu-latest - environment: ${{ inputs.environment }} - permissions: - contents: read - packages: read - steps: - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ inputs.username }} - password: ${{ secrets.token }} - - - name: Generate schema - run: | - docker run --rm -t ${{ inputs.registry }}/${{ inputs.image }}:${{ inputs.version }} -J > fluent-bit-schema-${{ inputs.version }}.json - cat fluent-bit-schema-${{ inputs.version }}.json | jq -M > fluent-bit-schema-pretty-${{ inputs.version }}.json - shell: bash - - - name: Upload the schema - uses: actions/upload-artifact@v5 - with: - path: ./fluent-bit-schema*.json - name: fluent-bit-schema-${{ inputs.version }} - if-no-files-found: error - - call-build-images-scan: - if: inputs.push - needs: - - call-build-images-meta - - call-build-container-image-manifests - name: Trivy + Dockle image scan - runs-on: ubuntu-latest - environment: ${{ inputs.environment }} - permissions: - contents: read - packages: read - steps: - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ inputs.username }} - password: ${{ secrets.token }} - - - name: Trivy - multi-arch - uses: aquasecurity/trivy-action@master - with: - image-ref: "${{ inputs.registry }}/${{ inputs.image }}:${{ inputs.version }}" - format: "table" - exit-code: "1" - ignore-unfixed: true - vuln-type: "os,library" - severity: 
"CRITICAL,HIGH"
-
-      - name: Dockle - multi-arch
-        uses: hands-lab/dockle-action@v1
-        with:
-          image: "${{ inputs.registry }}/${{ inputs.image }}:${{ inputs.version }}"
-          exit-code: "1"
-          exit-level: WARN
-
-  call-build-images-sign:
-    if: inputs.push
-    needs:
-      - call-build-images-meta
-      - call-build-container-image-manifests
-      - call-build-debug-container-image-manifests
-    name: Deploy and sign multi-arch container image manifests
-    permissions:
-      contents: read
-      packages: write
-      # This is used to complete the identity challenge
-      # with sigstore/fulcio when running outside of PRs.
-      id-token: write
-    runs-on: ubuntu-latest
-    environment: ${{ inputs.environment }}
-    steps:
-      - name: Install cosign
-        uses: sigstore/cosign-installer@v2
-
-      - name: Cosign keyless signing using Rekor public transparency log
-        # This step uses the identity token to provision an ephemeral certificate
-        # against the sigstore community Fulcio instance, and records it to the
-        # sigstore community Rekor transparency log.
-        #
-        # We use recursive signing on the manifest to cover all the images.
-        run: |
-          cosign sign --recursive --force \
-            -a "repo=${{ github.repository }}" \
-            -a "workflow=${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" \
-            -a "ref=${{ github.sha }}" \
-            -a "release=${{ inputs.version }}" \
-            "${{ inputs.registry }}/${{ inputs.image }}@${{ needs.call-build-container-image-manifests.outputs.version }}" \
-            "${{ inputs.registry }}/${{ inputs.image }}@${{ needs.call-build-debug-container-image-manifests.outputs.version }}"
-        shell: bash
-        # Ensure we move on to key-based signing as well
-        continue-on-error: true
-        env:
-          COSIGN_EXPERIMENTAL: true
-
-      - name: Cosign with a key
-        # Only run if we have a key defined
-        if: ${{ env.COSIGN_PRIVATE_KEY }}
-        # The key needs to cope with newlines
-        run: |
-          echo -e "${COSIGN_PRIVATE_KEY}" > /tmp/my_cosign.key
-          cosign sign --key /tmp/my_cosign.key --recursive --force \
-            -a "repo=${{ github.repository }}" \
-            -a "workflow=${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" \
-            -a "ref=${{ github.sha }}" \
-            -a "release=${{ inputs.version }}" \
-            "${{ inputs.registry }}/${{ inputs.image }}@${{ needs.call-build-container-image-manifests.outputs.version }}" \
-            "${{ inputs.registry }}/${{ inputs.image }}@${{ needs.call-build-debug-container-image-manifests.outputs.version }}"
-          rm -f /tmp/my_cosign.key
-        shell: bash
-        continue-on-error: true
-        env:
-          COSIGN_PRIVATE_KEY: ${{ secrets.cosign_private_key }}
-          COSIGN_PASSWORD: ${{ secrets.cosign_private_key_password }} # optional
-
-  # This takes a long time...
- call-build-windows-container: - name: Windows container images - runs-on: windows-${{ matrix.windows-base-version }} - environment: ${{ inputs.environment }} - needs: - - call-build-images-meta - strategy: - fail-fast: true - matrix: - windows-base-version: - - '2022' - - '2025' - permissions: - contents: read - packages: write - env: - IMAGE: ${{ inputs.registry }}/${{ inputs.image }}:windows-${{ matrix.windows-base-version }}-${{ inputs.version }} - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - # - name: Set up Docker Buildx - # uses: docker/setup-buildx-action@v3 - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ inputs.username }} - password: ${{ secrets.token }} - - - name: Pull the last release image to speed up the build with a cache - continue-on-error: true - run: | - VERSION=$(gh release list --json tagName,isLatest --jq '.[] | select(.isLatest)|.tagName | sub("^v"; "")') - echo VERSION="$VERSION" - docker pull ${{ inputs.registry }}/${{ inputs.image }}:windows-${{ matrix.windows-base-version }}-$VERSION - shell: bash - env: - GH_TOKEN: ${{ secrets.token }} - - - name: Build the production images - run: | - docker build -t $IMAGE --build-arg FLB_NIGHTLY_BUILD=${{ inputs.unstable }} --build-arg WINDOWS_VERSION=ltsc${{ matrix.windows-base-version }} -f ./dockerfiles/Dockerfile.windows . - shell: bash - - - name: Sanity check of the production images - run: | - docker run --rm -t $IMAGE --help - shell: bash - - - name: Push the production images - if: inputs.push - run: | - docker push $IMAGE - shell: bash - - # We cannot use this action as it requires privileged mode - # uses: docker/build-push-action@v6 - # with: - # file: ./dockerfiles/Dockerfile.windows - # context: . 
- # tags: ${{ steps.meta.outputs.tags }} - # labels: ${{ steps.meta.outputs.labels }} - # platforms: windows/amd64 - # target: runtime - # push: true - # load: false - # build-args: | - # FLB_NIGHTLY_BUILD=${{ inputs.unstable }} - # WINDOWS_VERSION=ltsc2022 diff --git a/.github/workflows/call-build-linux-packages.yaml b/.github/workflows/call-build-linux-packages.yaml deleted file mode 100644 index 808bb15979c..00000000000 --- a/.github/workflows/call-build-linux-packages.yaml +++ /dev/null @@ -1,272 +0,0 @@ ---- -name: Reusable workflow to build binary packages into S3 bucket - -on: - workflow_call: - inputs: - version: - description: The version of Fluent Bit to create. - type: string - required: true - ref: - description: The commit, tag or branch of Fluent Bit to checkout for building that creates the version above. - type: string - required: true - build_matrix: - description: The build targets to produce as a JSON matrix. - type: string - required: true - environment: - description: The Github environment to run this workflow on. - type: string - required: false - unstable: - description: Optionally add metadata to build to indicate an unstable build, set to the contents you want to add. - type: string - required: false - default: "" - ignore_failing_targets: - description: Optionally ignore any failing builds in the matrix and continue. - type: boolean - required: false - default: false - secrets: - token: - description: The Github token or similar to authenticate with. - required: true - bucket: - description: The name of the S3 (US-East) bucket to push packages into. - required: false - access_key_id: - description: The S3 access key id for the bucket. - required: false - secret_access_key: - description: The S3 secret access key for the bucket. - required: false - gpg_private_key: - description: The GPG key to use for signing the packages. 
- required: false - gpg_private_key_passphrase: - description: The GPG key passphrase to use for signing the packages. - required: false - -jobs: - call-build-capture-source: - # Capture source tarball and generate checksum for it - name: Extract any supporting metadata - runs-on: ubuntu-22.04 - environment: ${{ inputs.environment }} - permissions: - contents: read - steps: - - name: Checkout code - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - path: source - - - name: Create tarball and checksums - run: | - tar -czvf $SOURCE_FILENAME_PREFIX.tar.gz -C source --exclude-vcs . - md5sum $SOURCE_FILENAME_PREFIX.tar.gz > $SOURCE_FILENAME_PREFIX.tar.gz.md5 - sha256sum $SOURCE_FILENAME_PREFIX.tar.gz > $SOURCE_FILENAME_PREFIX.tar.gz.sha256 - # Move to a directory to simplify upload/sync - mkdir -p source-packages - cp -fv $SOURCE_FILENAME_PREFIX* source-packages/ - shell: bash - env: - SOURCE_FILENAME_PREFIX: source-${{ inputs.version }} - - - name: Upload the source artifacts - uses: actions/upload-artifact@v5 - with: - name: source-${{ inputs.version }} - path: source-packages/* - if-no-files-found: error - - # Pick up latest master version - - name: Checkout code for action - if: inputs.environment == 'staging' - uses: actions/checkout@v5 - with: - path: action-support - - - name: Push tarballs to S3 - # Only upload for staging - if: inputs.environment == 'staging' - uses: ./action-support/.github/actions/sync-to-bucket - with: - bucket: ${{ secrets.bucket }} - access_key_id: ${{ secrets.access_key_id }} - secret_access_key: ${{ secrets.secret_access_key }} - bucket-directory: "${{ inputs.version }}/source" - source-directory: "source-packages/" - - call-build-linux-packages: - name: ${{ matrix.distro }} package build and stage to S3 - environment: ${{ inputs.environment }} - runs-on: ${{ ((contains(matrix.distro, 'arm' ) || contains(matrix.distro, 'raspbian')) && 'ubuntu-22.04-arm') || 'ubuntu-22.04' }} - permissions: - contents: read - strategy: - 
fail-fast: false - matrix: ${{ fromJSON(inputs.build_matrix) }} - # Potentially we support continuing with all successful targets - continue-on-error: ${{ inputs.ignore_failing_targets || false }} - steps: - - name: Checkout code - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v3 - - # Raspbian requires ARMv6 emulation - - name: Set up QEMU - if: contains(matrix.distro, 'raspbian') - uses: docker/setup-qemu-action@v3 - with: - image: tonistiigi/binfmt:qemu-v7.0.0-28 # See: https://github.com/docker/setup-qemu-action/issues/198#issuecomment-2653791775 - - - name: Replace all special characters with dashes - id: formatted_distro - run: | - output=${INPUT//[\/]/-} - echo "$INPUT --> $output" - echo "replaced=$output" >> "$GITHUB_OUTPUT" - shell: bash - env: - INPUT: ${{ matrix.distro }} - - - name: fluent-bit - ${{ matrix.distro }} artifacts - run: | - ./build.sh - env: - FLB_DISTRO: ${{ matrix.distro }} - FLB_OUT_DIR: ${{ inputs.version }}/staging - FLB_NIGHTLY_BUILD: ${{ inputs.unstable }} - CMAKE_INSTALL_PREFIX: /opt/fluent-bit/ - working-directory: packaging - - - name: Upload the ${{ steps.formatted_distro.outputs.replaced }} artifacts - uses: actions/upload-artifact@v5 - with: - name: packages-${{ inputs.version }}-${{ steps.formatted_distro.outputs.replaced }} - path: packaging/packages/ - if-no-files-found: error - - - name: Retrieve target info for repo creation - id: get-target-info - timeout-minutes: 5 - # Remove any .arm64v8 suffix - # For Ubuntu, map to the codename using the distro-info list (CSV) - run: | - sudo apt-get update - sudo apt-get install -y distro-info - sudo apt-get install -y awscli || sudo snap install aws-cli --classic - - TARGET=${DISTRO%*.arm64v8} - if [[ "$TARGET" == "ubuntu/"* ]]; then - UBUNTU_CODENAME=$(cut -d ',' -f 1,3 < "/usr/share/distro-info/ubuntu.csv"|grep "${TARGET##*/}"|cut -d ',' -f 2) - if [[ -n "$UBUNTU_CODENAME" ]]; then - 
TARGET="ubuntu/$UBUNTU_CODENAME" - else - echo "Unable to extract codename for $DISTRO" - exit 1 - fi - fi - echo "$TARGET" - echo "target=$TARGET" >> $GITHUB_OUTPUT - env: - DISTRO: ${{ matrix.distro }} - DEBIAN_FRONTEND: noninteractive - shell: bash - - - name: Verify output target - # Only upload for staging - # Make sure not to do a --delete on sync as it will remove the other architecture - run: | - if [ -z "${{ steps.get-target-info.outputs.target }}" ]; then - echo "Invalid (empty) target defined" - exit 1 - fi - shell: bash - - # Pick up latest master version - - name: Checkout code for action - if: inputs.environment == 'staging' - uses: actions/checkout@v5 - with: - path: action-support - - - name: Push packages to S3 - # Only upload for staging - if: inputs.environment == 'staging' - uses: ./action-support/.github/actions/sync-to-bucket - with: - bucket: ${{ secrets.bucket }} - access_key_id: ${{ secrets.access_key_id }} - secret_access_key: ${{ secrets.secret_access_key }} - bucket-directory: "${{ inputs.version }}/${{ steps.get-target-info.outputs.target }}/" - source-directory: "packaging/packages/${{ matrix.distro }}/${{ inputs.version }}/staging/" - - call-build-linux-packages-repo: - name: Create repo metadata in S3 - # Only upload for staging - if: inputs.environment == 'staging' - # Need to use 18.04 as 20.04 has no createrepo available - runs-on: ubuntu-22.04 - environment: ${{ inputs.environment }} - needs: - - call-build-linux-packages - continue-on-error: ${{ inputs.ignore_failing_targets || false }} - steps: - - name: Install dependencies - timeout-minutes: 10 - run: | - sudo apt-get update - sudo apt-get install -y createrepo-c aptly - sudo apt-get install -y awscli || sudo snap install aws-cli --classic - shell: bash - env: - DEBIAN_FRONTEND: noninteractive - - - name: Checkout code for repo metadata construction - always latest - uses: actions/checkout@v5 - - - name: Import GPG key for signing - id: import_gpg - uses: 
crazy-max/ghaction-import-gpg@v6 - with: - gpg_private_key: ${{ secrets.gpg_private_key }} - passphrase: ${{ secrets.gpg_private_key_passphrase }} - - - name: Create repositories on staging now - # We sync down what we have for the release directories. - # Create the repo metadata then upload to the root of the bucket. - # This will wipe out any versioned directories in the process. - run: | - rm -rf ./latest/ - mkdir -p ./latest/ - if [ -n "${AWS_S3_ENDPOINT}" ]; then - ENDPOINT="--endpoint-url ${AWS_S3_ENDPOINT}" - fi - aws s3 sync "s3://$AWS_S3_BUCKET/${{ inputs.version }}" ./latest/ --no-progress ${ENDPOINT} - - gpg --export -a "${{ steps.import_gpg.outputs.name }}" > ./latest/fluentbit.key - rpm --import ./latest/fluentbit.key - - ./update-repos.sh "./latest/" - echo "${{ inputs.version }}" > "./latest/latest-version.txt" - aws s3 sync "./latest/" "s3://$AWS_S3_BUCKET" --delete --follow-symlinks --no-progress ${ENDPOINT} - env: - GPG_KEY: ${{ steps.import_gpg.outputs.name }} - AWS_REGION: "us-east-1" - AWS_ACCESS_KEY_ID: ${{ secrets.access_key_id }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.secret_access_key }} - AWS_S3_BUCKET: ${{ secrets.bucket }} - # To use with Minio locally (or update to whatever endpoint you want) - # AWS_S3_ENDPOINT: http://localhost:9000 - shell: bash - working-directory: packaging diff --git a/.github/workflows/call-build-macos.yaml b/.github/workflows/call-build-macos.yaml deleted file mode 100644 index 080a3b918f7..00000000000 --- a/.github/workflows/call-build-macos.yaml +++ /dev/null @@ -1,155 +0,0 @@ ---- -name: Reusable workflow to build MacOS packages optionally into S3 bucket - -on: - workflow_call: - inputs: - version: - description: The version of Fluent Bit to create. - type: string - required: true - ref: - description: The commit, tag or branch of Fluent Bit to checkout for building that creates the version above. - type: string - required: true - environment: - description: The Github environment to run this workflow on. 
- type: string - required: false - unstable: - description: Optionally add metadata to build to indicate an unstable build, set to the contents you want to add. - type: string - required: false - default: '' - secrets: - token: - description: The Github token or similar to authenticate with. - required: true - bucket: - description: The name of the S3 (US-East) bucket to push packages into. - required: false - access_key_id: - description: The S3 access key id for the bucket. - required: false - secret_access_key: - description: The S3 secret access key for the bucket. - required: false - -jobs: - call-build-macos-legacy-check: - # Requires https://github.com/fluent/fluent-bit/pull/5247 so will not build for previous branches - name: Extract any supporting metadata - outputs: - build-type: ${{ steps.determine-build-type.outputs.BUILD_TYPE }} - runs-on: ubuntu-latest - environment: ${{ inputs.environment }} - permissions: - contents: read - steps: - - name: Checkout code - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Determine build type - id: determine-build-type - run: | - BUILD_TYPE="legacy" - if [[ -f "conf/fluent-bit-macos.conf" ]]; then - BUILD_TYPE="modern" - fi - echo "Detected type: $BUILD_TYPE" - echo "BUILD_TYPE=$BUILD_TYPE" >> $GITHUB_OUTPUT - shell: bash - - call-build-macos-package: - if: needs.call-build-macos-legacy-check.outputs.build-type == 'modern' - runs-on: ${{ matrix.config.runner }} - environment: ${{ inputs.environment }} - needs: - - call-build-macos-legacy-check - permissions: - contents: read - strategy: - fail-fast: false - matrix: - config: - - name: "Apple Silicon macOS runner" - runner: macos-14 - cmake_version: "3.31.6" - - name: "Intel macOS runner" - runner: macos-14-large - cmake_version: "3.31.6" - - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Install dependencies - run: | - brew update - brew install bison flex libyaml openssl pkgconfig 
|| true - - - name: Install cmake - uses: jwlawson/actions-setup-cmake@v2 - with: - cmake-version: "${{ matrix.config.cmake_version }}" - - - name: Build Fluent Bit packages - run: | - export LIBRARY_PATH=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib:$LIBRARY_PATH - cmake -DCPACK_GENERATOR=productbuild -DFLB_NIGHTLY_BUILD=${{ inputs.unstable }} ../ -DOPENSSL_ROOT_DIR=$(brew --prefix openssl) - cmake --build . - cpack -G productbuild - working-directory: build - - - name: Upload build packages - uses: actions/upload-artifact@v5 - with: - name: macos-packages on ${{ matrix.config.runner }} - path: | - build/fluent-bit-*-apple* - build/fluent-bit-*-intel* - if-no-files-found: error - - call-build-macos-s3-upload: - name: Handle upload to S3 - # The environment must be used that has access to any secrets required, even if passed in. - # If passed in but not in the environment here you end up with an empty secret. - environment: ${{ inputs.environment }} - runs-on: ubuntu-latest - needs: - - call-build-macos-package - permissions: - contents: read - strategy: - fail-fast: false - matrix: - config: - - name: "Apple Silicon macOS package" - os: macos-14 - - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Download all artefacts - continue-on-error: true - uses: actions/download-artifact@v6 - with: - name: macos-packages on ${{ matrix.config.os }} - path: artifacts/ - - - name: Push MacOS packages to S3 - if: inputs.environment == 'staging' - uses: ./.github/actions/sync-to-bucket - with: - bucket: ${{ secrets.bucket }} - access_key_id: ${{ secrets.access_key_id }} - secret_access_key: ${{ secrets.secret_access_key }} - bucket-directory: "${{ inputs.version }}/macos/" - source-directory: "artifacts/" diff --git a/.github/workflows/call-build-windows.yaml b/.github/workflows/call-build-windows.yaml deleted file mode 100644 index 96b99c21661..00000000000 --- a/.github/workflows/call-build-windows.yaml 
+++ /dev/null @@ -1,237 +0,0 @@ ---- -name: Reusable workflow to build Windows packages optionally into S3 bucket - -# -# If you change dependencies etc here, please also check and update -# the other Windows build resources: -# -# - DEVELOPER_GUIDE.md "Windows" section -# - appveyor.yml -# - .github/workflows/call-build-windows.yaml -# - dockerfiles/Dockerfile.windows -# - -on: - workflow_call: - inputs: - version: - description: The version of Fluent Bit to create. - type: string - required: true - ref: - description: The commit, tag or branch of Fluent Bit to checkout for building that creates the version above. - type: string - required: true - environment: - description: The Github environment to run this workflow on. - type: string - required: false - unstable: - description: Optionally add metadata to build to indicate an unstable build, set to the contents you want to add. - type: string - required: false - default: '' - secrets: - token: - description: The Github token or similar to authenticate with. - required: true - bucket: - description: The name of the S3 (US-East) bucket to push packages into. - required: false - access_key_id: - description: The S3 access key id for the bucket. - required: false - secret_access_key: - description: The S3 secret access key for the bucket. 
- required: false - -jobs: - - call-build-windows-get-meta: - name: Determine build info - runs-on: ubuntu-latest - permissions: - contents: read - outputs: - armSupported: ${{ steps.armcheck.outputs.armSupported }} - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Determine if we are doing a build with ARM support - id: armcheck - # Check for new contents from https://github.com/fluent/fluent-bit/pull/6621 - run: | - if grep -q "winarm64" CMakeLists.txt ; then - echo "armSupported=true" >> $GITHUB_OUTPUT - else - echo "armSupported=false" >> $GITHUB_OUTPUT - fi - shell: bash - - call-build-windows-package: - runs-on: windows-latest - environment: ${{ inputs.environment }} - needs: - - call-build-windows-get-meta - strategy: - fail-fast: false - matrix: - config: - - name: "Windows 32bit" - arch: x86 - cmake_additional_opt: "" - vcpkg_triplet: x86-windows-static - cmake_version: "3.31.6" - - name: "Windows 64bit" - arch: x64 - cmake_additional_opt: "" - vcpkg_triplet: x64-windows-static - cmake_version: "3.31.6" - - name: "Windows 64bit (Arm64)" - arch: amd64_arm64 - cmake_additional_opt: "-DCMAKE_SYSTEM_NAME=Windows -DCMAKE_SYSTEM_VERSION=10.0 -DCMAKE_SYSTEM_PROCESSOR=ARM64" - vcpkg_triplet: arm64-windows-static - cmake_version: "3.31.6" - permissions: - contents: read - # Default environment variables can be overridden below. PATH is pinned to prevent library pollution: without this, other random libraries may be found on the path, leading to failures. 
- env: - PATH: C:\ProgramData\Chocolatey\bin;c:/Program Files/Git/cmd;c:/Windows/system32;C:/Windows/System32/WindowsPowerShell/v1.0;$ENV:WIX/bin;C:/Program Files/CMake/bin;C:\vcpkg; - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Get dependencies - run: | - Invoke-WebRequest -OutFile winflexbison.zip $env:WINFLEXBISON - Expand-Archive winflexbison.zip -Destination C:\WinFlexBison - Copy-Item -Path C:\WinFlexBison/win_bison.exe C:\WinFlexBison/bison.exe - Copy-Item -Path C:\WinFlexBison/win_flex.exe C:\WinFlexBison/flex.exe - echo "C:\WinFlexBison" | Out-File -FilePath $env:GITHUB_PATH -Append - choco install cmake --version "${{ matrix.config.cmake_version }}" --force - env: - WINFLEXBISON: https://github.com/lexxmark/winflexbison/releases/download/v2.5.22/win_flex_bison-2.5.22.zip - shell: pwsh - - - name: Set up with Developer Command Prompt for Microsoft Visual C++ - uses: ilammy/msvc-dev-cmd@v1 - with: - arch: ${{ matrix.config.arch }} - - - name: Get gzip command and nsis w/ chocolatey - uses: crazy-max/ghaction-chocolatey@v3 - with: - args: install gzip nsis -y - - # http://man7.org/linux/man-pages/man1/date.1.html - - name: Get Date - id: get-date - run: | - echo "date=$(/bin/date -u "+%Y%m%d")" >> $GITHUB_OUTPUT - shell: bash - - - name: Restore cached packages of vcpkg - id: cache-vcpkg-sources - uses: actions/cache/restore@v4 - with: - path: | - C:\vcpkg\installed - key: ${{ runner.os }}-${{ matrix.config.arch }}-vcpkg-installed-${{ steps.get-date.outputs.date }} - restore-keys: | - ${{ runner.os }}-${{ matrix.config.arch }}-vcpkg-installed- - enableCrossOsArchive: false - - - name: Build openssl with vcpkg - run: | - C:\vcpkg\vcpkg install --recurse openssl --triplet ${{ matrix.config.vcpkg_triplet }} - shell: cmd - - - name: Build libyaml with vcpkg - run: | - C:\vcpkg\vcpkg install --recurse libyaml --triplet ${{ matrix.config.vcpkg_triplet }} - shell: cmd - - - name: Upgrade any outdated 
vcpkg packages - run: | - C:\vcpkg\vcpkg upgrade --no-dry-run - shell: cmd - - - name: Save packages of vcpkg - id: save-vcpkg-sources - uses: actions/cache/save@v4 - with: - path: | - C:\vcpkg\installed - key: ${{ steps.cache-vcpkg-sources.outputs.cache-primary-key }} - enableCrossOsArchive: false - - - name: Build Fluent Bit packages - # If we are using 2.0.* or earlier we need to exclude the ARM64 build as the dependencies fail to compile. - # Trying to do via an exclude for the job triggers linting errors. - # This is only supposed to be a workaround for now so can be easily removed later. - if: ${{ matrix.config.arch != 'amd64_arm64' || needs.call-build-windows-get-meta.outputs.armSupported == 'true' }} - run: | - cmake -G "NMake Makefiles" -DFLB_NIGHTLY_BUILD='${{ inputs.unstable }}' -DOPENSSL_ROOT_DIR='C:\vcpkg\installed\${{ matrix.config.vcpkg_triplet }}' ${{ matrix.config.cmake_additional_opt }} -DFLB_LIBYAML_DIR='C:\vcpkg\installed\${{ matrix.config.vcpkg_triplet }}' ../ - cmake --build . - cpack - working-directory: build - - - name: Upload build packages - # Skip upload if we skipped build. - if: ${{ matrix.config.arch != 'amd64_arm64' || needs.call-build-windows-get-meta.outputs.armSupported == 'true' }} - uses: actions/upload-artifact@v5 - with: - name: windows-packages-${{ matrix.config.arch }} - path: | - build/*-bit-*.exe - build/*-bit-*.msi - build/*-bit-*.zip - if-no-files-found: error - - call-build-windows-s3-upload: - name: Handle upload to S3 - # The environment must be used that has access to any secrets required, even if passed in. - # If passed in but not in the environment here you end up with an empty secret. 
- environment: ${{ inputs.environment }} - runs-on: ubuntu-latest - needs: - - call-build-windows-package - permissions: - contents: read - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - # Need latest for checksum packaging script - ref: master - - - name: Download all artefacts - uses: actions/download-artifact@v6 - with: - pattern: windows-packages-* - merge-multiple: true - path: artifacts/ - - - name: Set up Windows checksums - run: | - packaging/windows-checksums.sh - ls -lR artifacts/ - shell: bash - env: - SOURCE_DIR: artifacts - - - name: Push Windows packages to S3 - # Only upload for staging - if: inputs.environment == 'staging' - uses: ./.github/actions/sync-to-bucket - with: - bucket: ${{ secrets.bucket }} - access_key_id: ${{ secrets.access_key_id }} - secret_access_key: ${{ secrets.secret_access_key }} - bucket-directory: "${{ inputs.version }}/windows/" - source-directory: "artifacts/" diff --git a/.github/workflows/call-integration-image-build.yaml b/.github/workflows/call-integration-image-build.yaml deleted file mode 100644 index 44bd407a6ee..00000000000 --- a/.github/workflows/call-integration-image-build.yaml +++ /dev/null @@ -1,129 +0,0 @@ -name: Reusable workflow for integration testing -on: - workflow_call: - inputs: - ref: - description: The SHA, commit or branch to checkout and build. - required: true - type: string - registry: - description: The registry to push container images to. - type: string - required: true - username: - description: The username for the registry. - type: string - required: true - image: - description: The name of the container image to push to the registry. - type: string - required: true - image-tag: - description: The tag of the image to for testing. - type: string - required: true - environment: - description: The Github environment to run this workflow on. 
- type: string - required: false - secrets: - token: - description: The Github token or similar to authenticate with for the registry. - required: true -jobs: - call-integration-image-build-container: - name: Integration test container image build - runs-on: ubuntu-latest - environment: ${{ inputs.environment }} - permissions: - contents: read - packages: write - steps: - - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v3 - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ inputs.username }} - password: ${{ secrets.token }} - - - name: Extract metadata from Github - id: meta - uses: docker/metadata-action@v5 - with: - images: ${{ inputs.registry }}/${{ inputs.image }} - tags: | - raw,${{ inputs.image-tag }} - - - name: Build the AMD64 image - uses: docker/build-push-action@v6 - with: - file: ./dockerfiles/Dockerfile - context: . - tags: ${{ steps.meta.outputs.tags }} - labels: ${{ steps.meta.outputs.labels }} - platforms: linux/amd64 - target: production - provenance: false - push: true - load: false - - - name: Extract metadata from Github - id: meta-debug - uses: docker/metadata-action@v5 - with: - images: ${{ inputs.registry }}/${{ inputs.image }} - tags: | - raw,${{ inputs.image-tag }}-debug - - - name: Build the AMD64 debug image - uses: docker/build-push-action@v6 - with: - file: ./dockerfiles/Dockerfile - context: . 
- tags: ${{ steps.meta-debug.outputs.tags }} - labels: ${{ steps.meta-debug.outputs.labels }} - provenance: false - target: debug - platforms: linux/amd64 - push: true - load: false - - call-integration-image-build-smoke-test: - name: Integration test image is valid - needs: call-integration-image-build-container - runs-on: ubuntu-latest - environment: ${{ inputs.environment }} - permissions: - contents: read - packages: read - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ inputs.username }} - password: ${{ secrets.token }} - - - name: Test the HTTP server is responding - timeout-minutes: 5 - run: | - packaging/testing/smoke/container/container-smoke-test.sh - shell: bash - env: - CONTAINER_NAME: local-smoke-${{ inputs.image-tag }} - CONTAINER_ARCH: linux/amd64 - REGISTRY: ${{ inputs.registry }} - IMAGE_NAME: ${{ inputs.image }} - IMAGE_TAG: ${{ inputs.image-tag }} \ No newline at end of file diff --git a/.github/workflows/call-run-integration-test.yaml b/.github/workflows/call-run-integration-test.yaml deleted file mode 100644 index f03f0dfcf3f..00000000000 --- a/.github/workflows/call-run-integration-test.yaml +++ /dev/null @@ -1,294 +0,0 @@ ---- -name: Reusable workflow to run integration tests with specific images -on: - workflow_call: - secrets: - opensearch_aws_access_id: - description: AWS access ID to use within the opensearch integration tests. - required: true - opensearch_aws_secret_key: - description: AWS secret key to use within the opensearch integration tests. - required: true - opensearch_admin_password: - description: Default admin password use within the opensearch integration tests. - required: true - terraform_api_token: - description: Default terraform API token to use when running integration tests. 
- required: true - gcp-service-account-key: - description: The GCP service account key to use. - required: true - inputs: - image_name: - description: The image repository and name to use. - required: false - default: ghcr.io/fluent/fluent-bit/master - type: string - image_tag: - description: The image tag to use. - required: false - default: x86_64 - type: string - ref: - description: The commit, tag or branch of the repository to checkout - type: string - required: false - default: main -jobs: - call-run-terraform-setup: - name: Run Terraform set up - runs-on: ubuntu-latest - permissions: - packages: read - outputs: - aws-opensearch-endpoint: ${{ steps.aws-opensearch-endpoint.outputs.stdout }} - gke-cluster-name: ${{ steps.gke-cluster-name.outputs.stdout }} - gke-cluster-region: ${{ steps.gke-cluster-region.outputs.stdout }} - gke-cluster-zone: ${{ steps.gke-cluster-zone.outputs.stdout }} - steps: - - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - repository: fluent/fluent-bit-ci - - - uses: hashicorp/setup-terraform@v3 - with: - cli_config_credentials_hostname: 'app.terraform.io' - cli_config_credentials_token: ${{ secrets.terraform_api_token }} - - - id: 'auth' - uses: 'google-github-actions/auth@v3' - with: - credentials_json: ${{ secrets.gcp-service-account-key }} - - - name: 'Set up Cloud SDK' - uses: 'google-github-actions/setup-gcloud@v3' - - - name: Replace terraform variables. 
- run: | - sed -i -e "s|\$OPENSEARCH_AWS_ACCESS_ID|${{ secrets.opensearch_aws_access_id }}|g" default.auto.tfvars - sed -i -e "s|\$OPENSEARCH_AWS_SECRET_KEY|${{ secrets.opensearch_aws_secret_key }}|g" default.auto.tfvars - sed -i -e "s|\$OPENSEARCH_ADMIN_PASSWORD|${{ secrets.opensearch_admin_password }}|g" default.auto.tfvars - - cat <<EOT >> default.auto.tfvars - gcp_sa_key = <<-EOF - ${{ secrets.gcp-service-account-key }} - EOF - EOT - working-directory: terraform/ - shell: bash - - - name: Terraform init - id: init - run: terraform init - working-directory: terraform/ - - - name: Terraform fmt - id: fmt - run: | - find . -name "*.tf" -exec terraform fmt -check {} \; - working-directory: terraform - - - name: Terraform validate - id: validate - run: terraform validate -no-color - working-directory: terraform/ - - - name: Terraform Plan - if: github.event_name == 'pull_request' - id: plan - run: terraform plan -no-color - working-directory: terraform - continue-on-error: true - - - uses: actions/github-script@v8 - if: github.event_name == 'pull_request' - env: - PLAN: "terraform\n${{ steps.plan.outputs.stdout }}" - with: - github-token: ${{ secrets.GITHUB_TOKEN }} - script: | - const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\` - #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\` - #### Terraform Validation 🤖\`${{ steps.validate.outcome }}\` - #### Terraform Plan 📖\`${{ steps.plan.outcome }}\` -
<details><summary>Show Plan</summary> - \`\`\`\n - ${process.env.PLAN} - \`\`\` - </details> -
- *Pushed by: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`; - github.rest.issues.createComment({ - issue_number: context.issue.number, - owner: context.repo.owner, - repo: context.repo.repo, - body: output - }) - - - name: Terraform Plan Status - if: steps.plan.outcome == 'failure' - run: exit 1 - - - name: Terraform Apply - if: github.event_name != 'pull_request' - id: apply - run: terraform apply -input=false -auto-approve - working-directory: terraform - env: - TF_LOG: TRACE - - # We're using the terraform wrapper here so separate steps for each output variable - - id: aws-opensearch-endpoint - run: terraform output -no-color -raw aws-opensearch-endpoint - working-directory: terraform - shell: bash - - - id: gke-cluster-name - run: terraform output -no-color -raw gke_kubernetes_cluster_name - working-directory: terraform - shell: bash - - - id: gke-cluster-region - run: terraform output -no-color -raw gke_region - working-directory: terraform - shell: bash - - - id: gke-cluster-zone - run: terraform output -no-color -raw gke_zone - working-directory: terraform - shell: bash - - call-run-integration-kind: - name: Run integration tests on KIND - needs: - - call-run-terraform-setup - # Can test for multiple K8S versions with KIND - strategy: - fail-fast: false - matrix: - k8s-release: [ 'v1.23.5', 'v1.22.7', 'v1.21.10' ] - runs-on: ubuntu-latest - permissions: - contents: read - packages: read - steps: - - name: Test image exists and cache locally - run: docker pull ${{ inputs.image_name }}:${{ inputs.image_tag }} - - - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - repository: fluent/fluent-bit-ci - - - name: Configure system for Opensearch - run: | - sudo sysctl -w vm.max_map_count=262144 - sysctl -p - shell: bash - - - name: Setup BATS - uses: mig4/setup-bats@v1 - with: - bats-version: 1.9.0 - - - name: Create k8s Kind Cluster - uses: helm/kind-action@v1.13.0 - with: - node_image: kindest/node:${{ matrix.k8s-release }} - 
cluster_name: kind - - - name: Set up Helm - uses: azure/setup-helm@v4 - with: - version: v3.8.1 - - - name: Set up Kubectl - uses: azure/setup-kubectl@v4 - - - name: Run tests - timeout-minutes: 60 - run: | - kind load docker-image ${{ inputs.image_name }}:${{ inputs.image_tag }} - ./run-tests.sh - shell: bash - env: - FLUENTBIT_IMAGE_REPOSITORY: ${{ inputs.image_name }} - FLUENTBIT_IMAGE_TAG: ${{ inputs.image_tag }} - HOSTED_OPENSEARCH_HOST: ${{ needs.call-run-terraform-setup.outputs.aws-opensearch-endpoint }} - HOSTED_OPENSEARCH_PORT: 443 - HOSTED_OPENSEARCH_USERNAME: admin - HOSTED_OPENSEARCH_PASSWORD: ${{ secrets.opensearch_admin_password }} - - call-run-integration-cloud: - name: Run integration tests on cloud providers - needs: - - call-run-terraform-setup - runs-on: ubuntu-latest - permissions: - contents: read - strategy: - fail-fast: false - matrix: - cloud: - - gke - env: - USE_GKE_GCLOUD_AUTH_PLUGIN: true - steps: - - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - repository: fluent/fluent-bit-ci - - - if: matrix.cloud == 'gke' - uses: 'google-github-actions/auth@v3' - with: - credentials_json: ${{ secrets.gcp-service-account-key }} - - - if: matrix.cloud == 'gke' - uses: 'google-github-actions/setup-gcloud@v3' - with: - install_components: 'gke-gcloud-auth-plugin' - - - name: Setup BATS - uses: mig4/setup-bats@v1 - with: - bats-version: 1.9.0 - - - name: Set up Helm - uses: azure/setup-helm@v4 - with: - version: v3.8.1 - - - name: Set up Kubectl - uses: azure/setup-kubectl@v4 - - - name: Get the GKE Kubeconfig - if: matrix.cloud == 'gke' - uses: 'google-github-actions/get-gke-credentials@v3' - with: - cluster_name: ${{ needs.call-run-terraform-setup.outputs.gke-cluster-name }} - location: ${{ needs.call-run-terraform-setup.outputs.gke-cluster-zone }} - - - name: Check Kubeconfig set up - run: | - kubectl cluster-info - kubectl get nodes --show-labels - kubectl get pods --all-namespaces --show-labels - kubectl get ns - shell: bash - - - 
name: Run tests - timeout-minutes: 60 - run: | - ./run-tests.sh - shell: bash - env: - # Namespace per test run to hopefully isolate a bit - TEST_NAMESPACE: test-${{ github.run_id }} - FLUENTBIT_IMAGE_REPOSITORY: ${{ inputs.image_name }} - FLUENTBIT_IMAGE_TAG: ${{ inputs.image_tag }} - HOSTED_OPENSEARCH_HOST: ${{ needs.call-run-terraform-setup.outputs.aws-opensearch-endpoint }} - HOSTED_OPENSEARCH_PORT: 443 - HOSTED_OPENSEARCH_USERNAME: admin - USE_GKE_GCLOUD_AUTH_PLUGIN: true - HOSTED_OPENSEARCH_PASSWORD: ${{ secrets.opensearch_admin_password }} diff --git a/.github/workflows/call-test-images.yaml b/.github/workflows/call-test-images.yaml deleted file mode 100644 index e78f16b0863..00000000000 --- a/.github/workflows/call-test-images.yaml +++ /dev/null @@ -1,205 +0,0 @@ ---- -name: Reusable workflow to test container images -on: - workflow_call: - inputs: - registry: - description: The registry to pull the images to test from. - type: string - required: true - username: - description: The username for authentication with the registry. - type: string - required: true - image: - description: The name of the image to pull from the registry for testing. - type: string - required: true - image-tag: - description: The tag of the image to pull from the registry for testing. - type: string - required: true - environment: - description: The Github environment to run this workflow on. - type: string - required: false - ref: - description: The commit, tag or branch to checkout for testing scripts. - type: string - default: master - required: false - secrets: - token: - description: The Github token or similar to authenticate with for the registry. - required: true - cosign_key: - description: The optional Cosign key to use for verifying the images. 
- required: false -jobs: - call-test-images-cosign-verify: - name: Cosign verification of container image - environment: ${{ inputs.environment }} - runs-on: ubuntu-latest - steps: - - name: Install cosign - uses: sigstore/cosign-installer@v2 - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ inputs.username }} - password: ${{ secrets.token }} - - # There is currently no way to verify a local image, e.g. for a particular architecture - # https://github.com/sigstore/cosign/issues/60 - - name: Verify image with a key - # Only key-based verification currently - if: ${{ env.COSIGN_PUBLIC_KEY }} - run: | - echo -e "${COSIGN_PUBLIC_KEY}" > /tmp/my_cosign.pub - cosign verify --key /tmp/my_cosign.pub "$REGISTRY/$IMAGE_NAME:$IMAGE_TAG" - rm -f /tmp/my_cosign.pub - shell: bash - env: - COSIGN_PUBLIC_KEY: ${{ secrets.cosign_key }} - REGISTRY: ${{ inputs.registry }} - IMAGE_NAME: ${{ inputs.image }} - IMAGE_TAG: ${{ inputs.image-tag }} - - call-test-images-container-architecture: - name: ${{ matrix.arch }} image architecture verification - runs-on: ubuntu-latest - environment: ${{ inputs.environment }} - # Test as much as we can - continue-on-error: true - strategy: - fail-fast: false - matrix: - arch: [ linux/amd64, linux/arm64, linux/arm/v7 ] - include: - # Rather than extract the specific arch from the platform string we just provide it - - arch: linux/amd64 - expected: amd64 - - arch: linux/arm64 - expected: arm64 - - arch: linux/arm/v7 - expected: arm - steps: - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ inputs.username }} - password: ${{ secrets.token }} - - - name: Pull and extract architecture of image - id: extract_arch - run: | - docker pull --platform=${{ matrix.arch }} "$REGISTRY/$IMAGE_NAME:$IMAGE_TAG" - ACTUAL_ARCH=$(docker image inspect --format '{{.Architecture}}' "$REGISTRY/$IMAGE_NAME:$IMAGE_TAG") - echo
"ACTUAL_ARCH=$ACTUAL_ARCH" >> $GITHUB_OUTPUT - docker image inspect "$REGISTRY/$IMAGE_NAME:$IMAGE_TAG" - shell: bash - env: - REGISTRY: ${{ inputs.registry }} - IMAGE_NAME: ${{ inputs.image }} - IMAGE_TAG: ${{ inputs.image-tag }} - - - name: Validate architecture of image - run: | - if [[ "$ACTUAL_ARCH" != "$EXPECTED_ARCH" ]]; then - echo "Invalid architecture for $REGISTRY/$IMAGE_NAME: $ACTUAL_ARCH != $EXPECTED_ARCH" - exit 1 - fi - env: - EXPECTED_ARCH: ${{ matrix.expected }} - ACTUAL_ARCH: ${{ steps.extract_arch.outputs.ACTUAL_ARCH }} - shell: bash - - call-test-images-container-smoke: - # Ensure each architecture container runs up correctly with default configuration. - name: ${{ matrix.arch }} smoke test for local container images - runs-on: ubuntu-latest - environment: ${{ inputs.environment }} - # No point running if the architecture is incorrect - needs: [ call-test-images-container-architecture ] - continue-on-error: true - strategy: - fail-fast: false # verify all - matrix: - arch: [ linux/amd64, linux/arm64, linux/arm/v7 ] - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ${{ inputs.registry }} - username: ${{ inputs.username }} - password: ${{ secrets.token }} - - - name: Set up QEMU using standard action - if: ${{ matrix.arch != 'linux/arm64' }} - uses: docker/setup-qemu-action@v3 - - # Without this QEMU fails for ARM64 - - name: Set up binary emulation for QEMU - if: ${{ matrix.arch == 'linux/arm64' }} - run: | - docker run --privileged --rm tonistiigi/binfmt --install all - - - name: Verify platform is supported with Alpine container - # We make sure there is not an inherent issue with this architecture on this runner - run: | - docker run --rm --platform=${{ matrix.arch }} alpine uname -a - - - name: Test the HTTP server is responding - timeout-minutes: 10 - run: | - 
packaging/testing/smoke/container/container-smoke-test.sh - shell: bash - env: - CONTAINER_NAME: local-smoke-${{ matrix.arch }} - CONTAINER_ARCH: ${{ matrix.arch }} - REGISTRY: ${{ inputs.registry }} - IMAGE_NAME: ${{ inputs.image }} - IMAGE_TAG: ${{ inputs.image-tag }} - - call-test-images-k8s-smoke: - # No need to test every architecture here, that is covered by local container tests. - # Testing helm chart deployment on KIND here. - name: Helm chart test on KIND - environment: ${{ inputs.environment }} - runs-on: ubuntu-latest - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Create k8s Kind Cluster - uses: helm/kind-action@v1.13.0 - - - name: Set up Helm - uses: azure/setup-helm@v4 - with: - version: v3.6.3 - - - name: Set up Kubectl - uses: azure/setup-kubectl@v4 - - - name: Test the HTTP server is responding - timeout-minutes: 5 - run: | - packaging/testing/smoke/k8s/k8s-smoke-test.sh - shell: bash - env: - NAMESPACE: default - REGISTRY: ${{ inputs.registry }} - IMAGE_NAME: ${{ inputs.image }} - IMAGE_TAG: ${{ inputs.image-tag }} - diff --git a/.github/workflows/call-test-packages.yaml b/.github/workflows/call-test-packages.yaml deleted file mode 100644 index b4e5e642f57..00000000000 --- a/.github/workflows/call-test-packages.yaml +++ /dev/null @@ -1,55 +0,0 @@ ---- -name: Reusable workflow to test packages in S3 bucket -on: - workflow_call: - inputs: - environment: - description: The Github environment to run this workflow on. - type: string - required: false - ref: - description: The commit, tag or branch to checkout for testing scripts. - type: string - default: master - required: false - secrets: - token: - description: The Github token or similar to authenticate with. - required: true - bucket: - description: The name of the S3 (US-East) bucket to pull packages from. 
- required: true - -jobs: - call-test-packaging: - # We use Dokken to run a series of test suites locally on containers representing - # each OS we want to install on. This creates custom images with the package - # installed and configured as per our documentation then verifies that the agent - # is running at startup. - name: ${{ matrix.distro }} package tests - runs-on: ubuntu-latest - environment: ${{ inputs.environment }} - env: - AWS_URL: https://${{ secrets.bucket }}.s3.amazonaws.com - strategy: - fail-fast: false - matrix: - distro: [ amazonlinux2022, amazonlinux2, centos7, centos8, debian10, debian11, ubuntu1804, ubuntu2004, ubuntu2204 ] - steps: - - name: Checkout repository - uses: actions/checkout@v5 - - - name: Get the version - id: get_version - run: | - curl --fail -LO "$AWS_URL/latest-version.txt" - VERSION=$(cat latest-version.txt) - echo "VERSION=$VERSION" >> $GITHUB_OUTPUT - shell: bash - - - name: Run package installation tests - run: | - packaging/testing/smoke/packages/run-package-tests.sh - env: - PACKAGE_TEST: ${{ matrix.distro }} - RELEASE_URL: https://packages.fluentbit.io diff --git a/.github/workflows/call-windows-unit-tests.yaml b/.github/workflows/call-windows-unit-tests.yaml deleted file mode 100644 index 9d9c5ca0b46..00000000000 --- a/.github/workflows/call-windows-unit-tests.yaml +++ /dev/null @@ -1,177 +0,0 @@ ---- -name: Reusable workflow to run unit tests on Windows packages (only for x86 and x64) - -on: - workflow_call: - inputs: - version: - description: The version of Fluent Bit to create. - type: string - required: true - ref: - description: The commit, tag or branch of Fluent Bit to checkout for building that creates the version above. - type: string - required: true - environment: - description: The Github environment to run this workflow on. - type: string - required: false - unstable: - description: Optionally add metadata to build to indicate an unstable build, set to the contents you want to add. 
- type: string - required: false - default: '' - secrets: - token: - description: The Github token or similar to authenticate with. - required: true - -jobs: - call-build-windows-unit-test: - runs-on: ${{ matrix.config.os }} - environment: ${{ inputs.environment }} - strategy: - fail-fast: false - matrix: - config: - - name: "Windows 32bit" - arch: x86 - cmake_additional_opt: "" - vcpkg_triplet: x86-windows-static - cmake_version: "3.31.6" - os: windows-latest - - name: "Windows 64bit" - arch: x64 - cmake_additional_opt: "" - vcpkg_triplet: x64-windows-static - cmake_version: "3.31.6" - os: windows-latest - - name: "Windows 64bit (Arm64)" - arch: amd64_arm64 - cmake_additional_opt: "-DCMAKE_SYSTEM_NAME=Windows -DCMAKE_SYSTEM_VERSION=10.0 -DCMAKE_SYSTEM_PROCESSOR=ARM64" - vcpkg_triplet: arm64-windows-static - cmake_version: "3.31.6" - os: windows-11-arm - permissions: - contents: read - # Default environment variables can be overridden below. To prevent library pollution - without this other random libraries may be found on the path leading to failures. 
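The PATH pinning described in the comment above can be illustrated with a rough POSIX-shell analogue (the workflow does this for PowerShell on a Windows runner; the directories below are assumptions for a Linux host, not the workflow's actual values):

```shell
#!/bin/sh
# Resolve tools only from an explicitly pinned set of directories, so a
# stray copy of a tool or library elsewhere on the runner cannot be
# picked up by accident. Mirrors the idea of the hard-coded PATH above.
pinned_path="/usr/bin:/bin"

# Look up `sh` using only the pinned directories.
resolved=$(env PATH="$pinned_path" sh -c 'command -v sh')
echo "$resolved"
```

Anything not under the pinned directories simply is not found, which turns "wrong library on the path" failures into immediate, obvious lookup errors.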
- env: - PATH: C:\ProgramData\Chocolatey\bin;c:/Program Files/Git/cmd;c:/Windows/system32;C:/Windows/System32/WindowsPowerShell/v1.0;$ENV:WIX/bin;C:/Program Files/CMake/bin;C:\vcpkg; - steps: - - name: Checkout repository - uses: actions/checkout@v5 - with: - ref: ${{ inputs.ref }} - - - name: Get dependencies - run: | - Invoke-WebRequest -OutFile winflexbison.zip $env:WINFLEXBISON - Expand-Archive winflexbison.zip -Destination C:\WinFlexBison - Copy-Item -Path C:\WinFlexBison/win_bison.exe C:\WinFlexBison/bison.exe - Copy-Item -Path C:\WinFlexBison/win_flex.exe C:\WinFlexBison/flex.exe - echo "C:\WinFlexBison" | Out-File -FilePath $env:GITHUB_PATH -Append - choco install cmake.portable --version "${{ matrix.config.cmake_version }}" --force - env: - WINFLEXBISON: https://github.com/lexxmark/winflexbison/releases/download/v2.5.22/win_flex_bison-2.5.22.zip - shell: pwsh - - - name: Set up with Developer Command Prompt for Microsoft Visual C++ - uses: ilammy/msvc-dev-cmd@v1 - with: - arch: ${{ matrix.config.arch }} - - - name: Get gzip command w/ chocolatey - uses: crazy-max/ghaction-chocolatey@v3 - with: - args: install gzip -y - - # http://man7.org/linux/man-pages/man1/date.1.html - - name: Get Date - id: get-date - run: | - echo "date=$(/bin/date -u "+%Y%m%d")" >> $GITHUB_OUTPUT - shell: bash - - - name: Restore cached packages of vcpkg - id: cache-unit-test-vcpkg-sources - uses: actions/cache/restore@v4 - with: - path: | - C:\vcpkg\installed - key: ${{ runner.os }}-${{ matrix.config.arch }}-wintest-vcpkg-installed-${{ steps.get-date.outputs.date }} - restore-keys: | - ${{ runner.os }}-${{ matrix.config.arch }}-wintest-vcpkg-installed- - enableCrossOsArchive: false - - - name: Build openssl with vcpkg - run: | - C:\vcpkg\vcpkg install --recurse openssl --triplet ${{ matrix.config.vcpkg_triplet }} - shell: cmd - - - name: Build libyaml with vcpkg - run: | - C:\vcpkg\vcpkg install --recurse libyaml --triplet ${{ matrix.config.vcpkg_triplet }} - shell: cmd - - - name: 
Upgrade any outdated vcpkg packages - run: | - C:\vcpkg\vcpkg upgrade --no-dry-run - shell: cmd - - - name: Save packages of vcpkg - id: save-vcpkg-sources - uses: actions/cache/save@v4 - with: - path: | - C:\vcpkg\installed - key: ${{ steps.cache-unit-test-vcpkg-sources.outputs.cache-primary-key }} - enableCrossOsArchive: false - - - name: Build unit-test for Fluent Bit packages (x86, x64, and ARM64) - run: | - cmake -G "NMake Makefiles" ` - -D FLB_TESTS_INTERNAL=On ` - -D FLB_NIGHTLY_BUILD='${{ inputs.unstable }}' ` - -D OPENSSL_ROOT_DIR='C:\vcpkg\installed\${{ matrix.config.vcpkg_triplet }}' ` - ${{ matrix.config.cmake_additional_opt }} ` - -D FLB_LIBYAML_DIR='C:\vcpkg\installed\${{ matrix.config.vcpkg_triplet }}' ` - -D FLB_WITHOUT_flb-rt-out_elasticsearch=On ` - -D FLB_WITHOUT_flb-rt-out_td=On ` - -D FLB_WITHOUT_flb-rt-out_forward=On ` - -D FLB_WITHOUT_flb-rt-in_disk=On ` - -D FLB_WITHOUT_flb-rt-in_proc=On ` - -D FLB_WITHOUT_flb-it-parser=On ` - -D FLB_WITHOUT_flb-it-unit_sizes=On ` - -D FLB_WITHOUT_flb-it-network=On ` - -D FLB_WITHOUT_flb-it-pack=On ` - -D FLB_WITHOUT_flb-it-signv4=On ` - -D FLB_WITHOUT_flb-it-aws_credentials=On ` - -D FLB_WITHOUT_flb-it-aws_credentials_ec2=On ` - -D FLB_WITHOUT_flb-it-aws_credentials_http=On ` - -D FLB_WITHOUT_flb-it-aws_credentials_profile=On ` - -D FLB_WITHOUT_flb-it-aws_credentials_sts=On ` - -D FLB_WITHOUT_flb-it-aws_util=On ` - -D FLB_WITHOUT_flb-it-input_chunk=On ` - ../ - cmake --build . 
- shell: pwsh - working-directory: build - - - name: Upload unit test binaries - uses: actions/upload-artifact@v5 - with: - name: windows-unit-tests-${{ matrix.config.arch }} - path: | - build/**/*.exe - if-no-files-found: error - - - name: Display dependencies w/ dumpbin - run: | - dumpbin /dependents .\bin\fluent-bit.exe - working-directory: build - - - name: Run unit tests for Fluent Bit packages (x86, x64, and ARM64) - run: | - ctest --build-run-dir "$PWD" --output-on-failure - shell: pwsh - working-directory: build diff --git a/.github/workflows/cron-scorecards-analysis.yaml b/.github/workflows/cron-scorecards-analysis.yaml deleted file mode 100644 index 738ed33f7cb..00000000000 --- a/.github/workflows/cron-scorecards-analysis.yaml +++ /dev/null @@ -1,52 +0,0 @@ - ---- -# https://openssf.org/blog/2022/01/19/reducing-security-risks-in-open-source-software-at-scale-scorecards-launches-v4/ -name: Scorecards supply-chain security -on: - push: - # Only the default branch is supported. - branches: - - main - schedule: - # Weekly on Saturdays. - - cron: '30 1 * * 6' - workflow_dispatch: - -# Declare default permissions as read only. -permissions: read-all - -jobs: - scorecard-analysis: - name: Scorecards analysis - runs-on: ubuntu-latest - permissions: - # Needed to upload the results to code-scanning dashboard. - security-events: write - # Needed for GitHub OIDC token if publish_results is true - id-token: write - steps: - - name: "Checkout code" - uses: actions/checkout@v5 - with: - persist-credentials: false - - - name: "Run analysis" - uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a - with: - results_file: results.sarif - results_format: sarif - publish_results: true - - - name: "Upload artifact" - uses: actions/upload-artifact@v5 - with: - name: SARIF file - path: results.sarif - retention-days: 7 - - # Upload the results to GitHub's code scanning dashboard. 
- - name: "Upload to code-scanning" - uses: github/codeql-action/upload-sarif@v4 - with: - sarif_file: results.sarif - category: ossf-scorecard diff --git a/.github/workflows/cron-stale.yaml b/.github/workflows/cron-stale.yaml deleted file mode 100644 index 9ee9a64afa7..00000000000 --- a/.github/workflows/cron-stale.yaml +++ /dev/null @@ -1,26 +0,0 @@ -name: 'Close stale issues and PR(s)' -on: - schedule: - - cron: '30 1 * * *' - -jobs: - stale: - name: Mark stale - runs-on: ubuntu-latest - steps: - - uses: actions/stale@v10 - with: - repo-token: ${{ secrets.GITHUB_TOKEN }} - stale-issue-message: 'This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the `exempt-stale` label.' - stale-pr-message: 'This PR is stale because it has been open 45 days with no activity. Remove stale label or comment or this will be closed in 10 days.' - close-issue-message: 'This issue was closed because it has been stalled for 5 days with no activity.' - days-before-stale: 90 - days-before-close: 5 - days-before-pr-close: -1 - exempt-all-pr-assignees: true - exempt-all-pr-milestones: true - exempt-issue-labels: 'long-term,enhancement,exempt-stale' - # start with the oldest - ascending: true - # keep an eye on this - operations-per-run: 250 diff --git a/.github/workflows/cron-trivy.yaml b/.github/workflows/cron-trivy.yaml deleted file mode 100644 index f0b84f7e5b0..00000000000 --- a/.github/workflows/cron-trivy.yaml +++ /dev/null @@ -1,87 +0,0 @@ ---- -# Separate action to allow us to initiate manually and run regularly -name: Trivy security analysis of latest containers - -# Run on every push to master, or weekly. -# Allow users to trigger an asynchronous run anytime too. -on: - push: - branches: [master] - schedule: - # 13:44 on Thursday - - cron: 44 13 * * 4 - workflow_dispatch: - -jobs: - # Run Trivy on the latest container and update the security code scanning results tab. 
- trivy-latest: - # Matrix job that pulls the latest image for each supported architecture via the multi-arch latest manifest. - # We then re-tag it locally to ensure that when Trivy runs it does not pull the latest for the wrong architecture. - name: ${{ matrix.arch }} container scan - runs-on: [ ubuntu-latest ] - continue-on-error: true - strategy: - fail-fast: false - # Matrix of architectures to test along with their local tags for special character substitution - matrix: - # The architecture for the container runtime to pull. - arch: [ linux/amd64, linux/arm64, linux/arm/v7 ] - # In a few cases we need the arch without slashes so provide a descriptive extra field for that. - # We could also extract or modify this via a regex but this seemed simpler and easier to follow. - include: - - arch: linux/amd64 - local_tag: x86_64 - - arch: linux/arm64 - local_tag: arm64 - - arch: linux/arm/v7 - local_tag: arm32 - steps: - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - username: ${{ secrets.DOCKERHUB_USERNAME }} - password: ${{ secrets.DOCKERHUB_TOKEN }} - - - name: Pull the image for the architecture we're testing - run: | - docker pull --platform ${{ matrix.arch }} fluent/fluent-bit:latest - - - name: Tag locally to ensure we do not pull wrong architecture - run: | - docker tag fluent/fluent-bit:latest local/fluent-bit:${{ matrix.local_tag }} - - # Deliberately chosen master here to keep up-to-date. - - name: Run Trivy vulnerability scanner for any major issues - uses: aquasecurity/trivy-action@master - with: - image-ref: local/fluent-bit:${{ matrix.local_tag }} - # Filter out any that have no current fix. - ignore-unfixed: true - # Only include major issues. - severity: CRITICAL,HIGH - format: template - template: '@/contrib/sarif.tpl' - output: trivy-results-${{ matrix.local_tag }}.sarif - - # Show all detected issues. - # Note this will show a lot more, including major un-fixed ones. 
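The platform-to-tag substitution that the matrix `include:` block performs declaratively can be sketched as a small helper (`local_tag_for` is a hypothetical name introduced here; the workflow itself just lists the pairs):

```shell
#!/bin/sh
# Map a Docker platform string to a tag with no special characters,
# safe to use in image tags and SARIF file names.
local_tag_for() {
  case "$1" in
    linux/amd64)  echo "x86_64" ;;
    linux/arm64)  echo "arm64"  ;;
    linux/arm/v7) echo "arm32"  ;;
    *) echo "unsupported platform: $1" >&2; return 1 ;;
  esac
}

local_tag_for linux/arm/v7   # prints: arm32
```

An explicit mapping like this (rather than, say, `tr '/' '-'`) keeps the tags matching the conventional architecture names used elsewhere in the release pipeline.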
name: Run Trivy vulnerability scanner for local output - uses: aquasecurity/trivy-action@master - with: - image-ref: local/fluent-bit:${{ matrix.local_tag }} - format: table - - - name: Upload Trivy scan results to GitHub Security tab - uses: github/codeql-action/upload-sarif@v4 - with: - sarif_file: trivy-results-${{ matrix.local_tag }}.sarif - category: ${{ matrix.arch }} container - wait-for-processing: true - - # In case we need to analyse the uploaded files for some reason. - - name: Retain results for debug if needed - uses: actions/upload-artifact@v5 - with: - name: trivy-results-${{ matrix.local_tag }}.sarif - path: trivy-results-${{ matrix.local_tag }}.sarif - if-no-files-found: error diff --git a/.github/workflows/cron-unstable-build.yaml b/.github/workflows/cron-unstable-build.yaml deleted file mode 100644 index a48999f45b8..00000000000 --- a/.github/workflows/cron-unstable-build.yaml +++ /dev/null @@ -1,226 +0,0 @@ ---- -name: Unstable build - -on: - workflow_dispatch: - inputs: - branch: - description: The branch to create an unstable release for/from. - type: string - default: master - required: true - - # Run nightly build at this time, bit of trial and error but this seems good. - schedule: - - cron: "0 6 * * *" # master build - - cron: "0 12 * * *" # 3.2 build - - cron: "0 18 * * *" # master build - -# We do not want a new unstable build to run whilst we are releasing the current unstable build. -concurrency: unstable-build-release - -jobs: - # This job provides the metadata for the other jobs to use. - unstable-build-get-meta: - name: Get metadata to add to build - runs-on: ubuntu-latest - environment: unstable - outputs: - date: ${{ steps.date.outputs.date }} - branch: ${{ steps.branch.outputs.branch }} - permissions: - contents: none - steps: - # For cron builds, i.e. nightly, we provide date and time as extra parameter to distinguish them.
- name: Get current date - id: date - run: echo "date=$(date '+%Y-%m-%d-%H_%M_%S')" >> $GITHUB_OUTPUT - - - name: Debug event output - uses: hmarr/debug-action@v3 - - # Now we need to determine which branch to build - - name: Manual run - get branch - if: github.event_name == 'workflow_dispatch' - run: | - echo "cron_branch=${{ github.event.inputs.branch }}" >> $GITHUB_ENV - shell: bash - - - name: master run - if: github.event_name == 'schedule' && github.event.schedule=='0 6 * * *' - run: | - echo "cron_branch=master" >> $GITHUB_ENV - shell: bash - - - name: 3.2 run - if: github.event_name == 'schedule' && github.event.schedule=='0 12 * * *' - run: | - echo "cron_branch=3.2" >> $GITHUB_ENV - shell: bash - - - name: master run - if: github.event_name == 'schedule' && github.event.schedule=='0 18 * * *' - run: | - echo "cron_branch=master" >> $GITHUB_ENV - shell: bash - - - name: Output the branch to use - id: branch - run: | - echo "$cron_branch" - if [[ -z "$cron_branch" ]]; then - echo "Unable to determine branch to use" - exit 1 - fi - echo "branch=$cron_branch" >> $GITHUB_OUTPUT - shell: bash - - unstable-build-images: - needs: unstable-build-get-meta - uses: ./.github/workflows/call-build-images.yaml - with: - version: ${{ needs.unstable-build-get-meta.outputs.branch }} - ref: ${{ needs.unstable-build-get-meta.outputs.branch }} - registry: ghcr.io - username: ${{ github.actor }} - image: ${{ github.repository }}/unstable - environment: unstable - unstable: ${{ needs.unstable-build-get-meta.outputs.date }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - - unstable-build-generate-matrix: - name: unstable build matrix - needs: - - unstable-build-get-meta - runs-on: ubuntu-latest - outputs: - build-matrix: ${{ steps.set-matrix.outputs.build-matrix }} - environment: unstable - permissions: - contents:
read - steps: - - name: Checkout repository, always latest for action - uses: actions/checkout@v5 - - # Set up the list of targets to build so we can pass the JSON to the reusable job - - uses: ./.github/actions/generate-package-build-matrix - id: set-matrix - with: - ref: ${{ needs.unstable-build-get-meta.outputs.branch }} - - unstable-build-packages: - needs: - - unstable-build-get-meta - - unstable-build-generate-matrix - uses: ./.github/workflows/call-build-linux-packages.yaml - with: - version: ${{ needs.unstable-build-get-meta.outputs.branch }} - ref: ${{ needs.unstable-build-get-meta.outputs.branch }} - build_matrix: ${{ needs.unstable-build-generate-matrix.outputs.build-matrix }} - environment: unstable - unstable: ${{ needs.unstable-build-get-meta.outputs.date }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - - unstable-build-windows-package: - needs: - - unstable-build-get-meta - uses: ./.github/workflows/call-build-windows.yaml - with: - version: ${{ needs.unstable-build-get-meta.outputs.branch }} - ref: ${{ needs.unstable-build-get-meta.outputs.branch }} - environment: unstable - unstable: ${{ needs.unstable-build-get-meta.outputs.date }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - - unstable-build-macos-package: - needs: - - unstable-build-get-meta - uses: ./.github/workflows/call-build-macos.yaml - with: - version: ${{ needs.unstable-build-get-meta.outputs.branch }} - ref: ${{ needs.unstable-build-get-meta.outputs.branch }} - environment: unstable - unstable: ${{ needs.unstable-build-get-meta.outputs.date }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - - # We already retain all artefacts as build output so just capture them for an unstable release. - # We make all of these on a separate repo to prevent notification spam.
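The cron-to-branch selection performed by the conditional steps in `unstable-build-get-meta` can be sketched as one helper function (`branch_for_schedule` is a hypothetical name; the workflow spreads this over one `if:`-guarded step per schedule, with the mapping taken from the comments on the `on.schedule` entries):

```shell
#!/bin/sh
# Map the fired cron schedule to the branch that gets a nightly build:
# 06:00 and 18:00 build master, 12:00 builds the 3.2 branch.
branch_for_schedule() {
  case "$1" in
    "0 6 * * *" | "0 18 * * *") echo "master" ;;
    "0 12 * * *")               echo "3.2"    ;;
    *) echo "Unable to determine branch to use" >&2; return 1 ;;
  esac
}

branch_for_schedule "0 12 * * *"   # prints: 3.2
```

Keying off `github.event.schedule` like this is the standard way to tell multiple cron triggers apart, since a single `schedule:` block fans all of them into the same workflow run.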
- unstable-release: - runs-on: ubuntu-latest - needs: - - unstable-build-get-meta - - unstable-build-images - - unstable-build-packages - - unstable-build-windows-package - - unstable-build-macos-package - environment: unstable - permissions: - contents: read - steps: - # Required to make a release later - - name: Checkout repository - uses: actions/checkout@v5 - - - name: Download all artefacts - continue-on-error: true - uses: actions/download-artifact@v6 - with: - path: artifacts/ - - - name: Single packages tar - run: | - mkdir -p release-upload - # Optional JSON schema so ignore failure - mv -f artifacts/*.json release-upload/ || true - tar -czvf release-upload/packages-unstable-${{ needs.unstable-build-get-meta.outputs.branch }}.tar.gz -C artifacts/ . - shell: bash - - - name: Log in to the Container registry - uses: docker/login-action@v3 - with: - registry: ghcr.io - username: ${{ github.actor }} - password: ${{ secrets.GITHUB_TOKEN }} - - - name: Pull containers as well (single arch only) - # May not be any/valid so ignore errors - continue-on-error: true - run: | - docker pull $IMAGE - docker save --output container-${{ needs.unstable-build-get-meta.outputs.branch }}.tar $IMAGE - docker pull $IMAGE-debug - docker save --output container-${{ needs.unstable-build-get-meta.outputs.branch }}-debug.tar $IMAGE-debug - shell: bash - working-directory: release-upload - env: - IMAGE: ghcr.io/${{ github.repository }}/unstable:${{ needs.unstable-build-get-meta.outputs.branch }} - - - name: Display structure of files to upload - run: ls -R - working-directory: release-upload - shell: bash - - - name: Remove any existing release - continue-on-error: true - run: gh release delete unstable-${{ needs.unstable-build-get-meta.outputs.branch }} --yes --repo ${{ secrets.RELEASE_REPO }} - env: - GITHUB_TOKEN: ${{ secrets.RELEASE_TOKEN }} - shell: bash - - - name: Create Release - # Do not fail the job here - continue-on-error: true - run: gh release create unstable-${{ 
needs.unstable-build-get-meta.outputs.branch }} release-upload/*.* --repo ${{ secrets.RELEASE_REPO }} --generate-notes --prerelease --target ${{ needs.unstable-build-get-meta.outputs.branch }} --title "Nightly unstable ${{ needs.unstable-build-get-meta.outputs.branch }} build" - env: - GITHUB_TOKEN: ${{ secrets.RELEASE_TOKEN }} - shell: bash diff --git a/.github/workflows/master-integration-test.yaml b/.github/workflows/master-integration-test.yaml deleted file mode 100644 index 85c321b8250..00000000000 --- a/.github/workflows/master-integration-test.yaml +++ /dev/null @@ -1,33 +0,0 @@ -name: Build master container images and run integration tests -on: - push: - branches: - - master - -jobs: - master-integration-test-build: - name: Master - integration build - uses: ./.github/workflows/call-integration-image-build.yaml - with: - ref: ${{ github.sha }} - registry: ghcr.io - username: ${{ github.actor }} - image: ${{ github.repository }}/master - image-tag: x86_64 - environment: integration - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - - master-integration-test-run-integration: - name: Master - integration test - needs: master-integration-test-build - uses: ./.github/workflows/call-run-integration-test.yaml - with: - image_name: ghcr.io/${{ github.repository }}/master - image_tag: x86_64 - secrets: - opensearch_aws_access_id: ${{ secrets.OPENSEARCH_AWS_ACCESS_ID }} - opensearch_aws_secret_key: ${{ secrets.OPENSEARCH_AWS_SECRET_KEY }} - opensearch_admin_password: ${{ secrets.OPENSEARCH_ADMIN_PASSWORD }} - terraform_api_token: ${{ secrets.TF_API_TOKEN }} - gcp-service-account-key: ${{ secrets.GCP_SA_KEY }} diff --git a/.github/workflows/pr-closed-docker.yaml b/.github/workflows/pr-closed-docker.yaml deleted file mode 100644 index 4d2a374b3e0..00000000000 --- a/.github/workflows/pr-closed-docker.yaml +++ /dev/null @@ -1,21 +0,0 @@ -name: Remove docker images for stale/closed PR(s). 
-on: - pull_request: - branches: - - master - types: [closed] -jobs: - cleanup: - name: PR - cleanup pr-${{ github.event.number }} images - runs-on: ubuntu-latest - permissions: - # We may need a specific token here with `packages:admin` privileges which is not available to GITHUB_TOKEN - packages: write - steps: - - uses: vlaurin/action-ghcr-prune@v0.6.0 - with: - organization: fluent - container: fluent-bit/pr-${{ github.event.number }} - token: ${{ secrets.GITHUB_TOKEN }} - prune-untagged: true - keep-last: 0 diff --git a/.github/workflows/pr-commit-message.yaml b/.github/workflows/pr-commit-message.yaml deleted file mode 100644 index 596d9cdcb14..00000000000 --- a/.github/workflows/pr-commit-message.yaml +++ /dev/null @@ -1,21 +0,0 @@ -name: 'Pull requests commit messages' -on: - pull_request: - types: - - opened - - edited - - reopened - - synchronize -jobs: - check-commit-message: - name: Check Commit Message - runs-on: ubuntu-latest - steps: - - name: Check commit subject complies with https://github.com/fluent/fluent-bit/blob/master/CONTRIBUTING.md#commit-changes - uses: gsactions/commit-message-checker@v2 - with: - pattern: '^[a-z0-9A-Z\-_\s\,\.\/]+\:[ ]{0,1}[a-zA-Z]+[a-zA-Z0-9 \-\.\:_\#\(\)=\/''\"\,><\+\[\]\!\*\\]+$' - error: 'Invalid commit subject. 
Please refer to: https://github.com/fluent/fluent-bit/blob/master/CONTRIBUTING.md#commit-changes' - checkAllCommitMessages: 'false' - excludeDescription: 'true' - accessToken: ${{ secrets.GITHUB_TOKEN }} \ No newline at end of file diff --git a/.github/workflows/pr-compile-check.yaml b/.github/workflows/pr-compile-check.yaml deleted file mode 100644 index d171353afd8..00000000000 --- a/.github/workflows/pr-compile-check.yaml +++ /dev/null @@ -1,154 +0,0 @@ -name: 'Pull requests compile checks' -on: - pull_request: - # Only trigger if there is a code change or a CMake change that (could) affect code - paths: - - '**.c' - - '**.h' - - 'CMakeLists.txt' - - 'cmake/*' - workflow_dispatch: - -jobs: - # Sanity check for compilation using older compiler on CentOS 7 - pr-compile-centos-7: - runs-on: ubuntu-latest - timeout-minutes: 30 - steps: - - name: Checkout Fluent Bit code - uses: actions/checkout@v5 - - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v3 - - - name: Attempt to build current source for CentOS 7 - uses: docker/build-push-action@v6 - with: - context: . 
- file: ./dockerfiles/Dockerfile.centos7 - # No need to use after this so discard completely - push: false - load: false - provenance: false - - # Sanity check for compilation using system libraries - pr-compile-system-libs: - runs-on: ${{ matrix.os.version }} - timeout-minutes: 60 - strategy: - fail-fast: false - matrix: - flb_option: - - "-DFLB_PREFER_SYSTEM_LIBS=On" - cmake_version: - - "3.31.6" - compiler: - - gcc: - cc: gcc - cxx: g++ - - clang: - cc: clang - cxx: clang++ - os: - - version: ubuntu-22.04 - clang: "clang-12" - - version: ubuntu-24.04 - clang: "clang-14" - - steps: - - name: Setup environment for ${{ matrix.os.version }} with ${{ matrix.os.clang }} - run: | - sudo apt-get update - sudo apt-get install -y curl gcc-9 g++-9 ${CLANG_PKG} libsystemd-dev gcovr libyaml-dev - sudo ln -s /usr/bin/llvm-symbolizer-12 /usr/bin/llvm-symbolizer || true - env: - CLANG_PKG: ${{ matrix.os.clang }} - - - name: Install system libraries for this test - run: | - sudo apt-get update - sudo apt-get install -y libc-ares-dev libjemalloc-dev libluajit-5.1-dev \ - libnghttp2-dev libsqlite3-dev libzstd-dev libmsgpack-dev librdkafka-dev - mkdir -p /tmp/libbacktrace/build && \ - curl -L https://github.com/ianlancetaylor/libbacktrace/archive/8602fda.tar.gz | \ - tar --strip-components=1 -xzC /tmp/libbacktrace/ && \ - pushd /tmp/libbacktrace/build && ../configure && make && sudo make install && popd - - - name: Install cmake - uses: jwlawson/actions-setup-cmake@v2 - with: - cmake-version: "${{ matrix.cmake_version }}" - - - name: Checkout Fluent Bit code - uses: actions/checkout@v5 - - - name: ${{ matrix.compiler.cc }} & ${{ matrix.compiler.cxx }} - ${{ matrix.flb_option }} - run: | - export nparallel=$(( $(getconf _NPROCESSORS_ONLN) > 8 ? 
8 : $(getconf _NPROCESSORS_ONLN) )) - echo "CC = $CC, CXX = $CXX, FLB_OPT = $FLB_OPT" - sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 - sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 90 - sudo update-alternatives --install /usr/bin/clang clang /usr/bin/${CLANG_PKG} 90 - cmake $GLOBAL_OPTS $FLB_OPT ../ - make -j $nparallel - working-directory: build - env: - CC: ${{ matrix.compiler.cc }} - CXX: ${{ matrix.compiler.cxx }} - FLB_OPT: ${{ matrix.flb_option }} - GLOBAL_OPTS: "-DFLB_JEMALLOC=On -DFLB_SHARED_LIB=Off -DFLB_DEBUG=On -DFLB_ALL=On -DFLB_EXAMPLES=Off" - CLANG_PKG: ${{ matrix.os.clang }} - - - name: Display dependencies w/ ldd - run: | - export ldd_result=$(ldd ./bin/fluent-bit) - echo "ldd result:" - echo "$ldd_result" - echo "$ldd_result" | grep libcares - echo "$ldd_result" | grep libjemalloc - echo "$ldd_result" | grep libluajit - echo "$ldd_result" | grep libnghttp2 - echo "$ldd_result" | grep libsqlite3 - echo "$ldd_result" | grep libzstd - working-directory: build - - - name: Display dependencies w/ ldd for libmsgpack and librdkafka - if: matrix.os.version == 'ubuntu-24.04' - run: | - export ldd_result=$(ldd ./bin/fluent-bit) - echo "ldd result:" - echo "$ldd_result" | grep libmsgpack - echo "$ldd_result" | grep librdkafka - working-directory: build - - # Sanity check for compilation w/o CXX support - pr-compile-without-cxx: - runs-on: ubuntu-24.04 - timeout-minutes: 60 - strategy: - fail-fast: false - matrix: - cmake_version: - - "3.31.6" - steps: - - name: Setup environment - run: | - sudo apt-get update - sudo apt-get install -y bison cmake flex gcc libssl-dev libyaml-dev - sudo apt-get install -y libzstd-dev librdkafka-dev - - - name: Install cmake - uses: jwlawson/actions-setup-cmake@v2 - with: - cmake-version: "${{ matrix.cmake_version }}" - - - name: Checkout Fluent Bit code - uses: actions/checkout@v5 - - - name: Compile w/o CXX support - run: | - export CXX=/bin/false - export nparallel=$(( $(getconf
_NPROCESSORS_ONLN) > 8 ? 8 : $(getconf _NPROCESSORS_ONLN) )) - cmake -DFLB_PREFER_SYSTEM_LIB_ZSTD=ON -DFLB_PREFER_SYSTEM_LIB_KAFKA=ON ../ - make -j $nparallel - working-directory: build diff --git a/.github/workflows/pr-fuzz.yaml b/.github/workflows/pr-fuzz.yaml deleted file mode 100644 index 6194ff305a4..00000000000 --- a/.github/workflows/pr-fuzz.yaml +++ /dev/null @@ -1,41 +0,0 @@ -name: CIFuzz -on: - pull_request: - # Only fuzz when C source files change - paths: - - '**.c' - - '**.h' -jobs: - fuzzing: - name: PR - fuzzing test - runs-on: ubuntu-latest - steps: - - name: Build Fuzzers - id: build - uses: google/oss-fuzz/infra/cifuzz/actions/build_fuzzers@master - with: - oss-fuzz-project-name: 'fluent-bit' - dry-run: false - language: c - - name: Run Fuzzers - uses: google/oss-fuzz/infra/cifuzz/actions/run_fuzzers@master - with: - oss-fuzz-project-name: 'fluent-bit' - fuzz-seconds: 600 - dry-run: false - language: c - output-sarif: true - - name: Upload Crash - uses: actions/upload-artifact@v5 - if: failure() && steps.build.outcome == 'success' - with: - name: artifacts - path: ./out/artifacts - - name: Upload Sarif - if: always() && steps.build.outcome == 'success' - uses: github/codeql-action/upload-sarif@v4 - with: - # Path to SARIF file relative to the root of the repository - sarif_file: cifuzz-sarif/results.sarif - checkout_path: cifuzz-sarif - category: CIFuzz diff --git a/.github/workflows/pr-image-tests.yaml b/.github/workflows/pr-image-tests.yaml deleted file mode 100644 index 8a970c4ff97..00000000000 --- a/.github/workflows/pr-image-tests.yaml +++ /dev/null @@ -1,121 +0,0 @@ -name: Build images for all architectures -on: - pull_request: - types: - - opened - - reopened - - synchronize - paths: - - 'dockerfiles/Dockerfile' - - 'dockerfiles/Dockerfile.windows' - - 'conf/**' - - workflow_dispatch: - -jobs: - pr-image-tests-build-images: - name: PR - Buildkit docker build test - runs-on: ubuntu-latest - permissions: - contents: read - # We do not push 
and this allows simpler workflow running for forks too - steps: - - name: Checkout code - uses: actions/checkout@v5 - - - name: Set up Docker Buildx - uses: docker/setup-buildx-action@v3 - - - name: Extract metadata from Github - id: meta - uses: docker/metadata-action@v5 - with: - images: ${{ github.repository }}/pr-${{ github.event.pull_request.number }} - tags: | - type=sha - - - name: Build the multi-arch images - id: build - uses: docker/build-push-action@v6 - with: - file: ./dockerfiles/Dockerfile - context: . - platforms: linux/amd64 - target: production - provenance: false - push: false - load: true - tags: ${{ steps.meta.outputs.tags }} - labels: ${{ steps.meta.outputs.labels }} - - - name: Sanity check it runs - # We do this for a simple check of dependencies - run: | - docker run --rm -t ${{ steps.meta.outputs.tags }} --help - shell: bash - - - name: Build the debug multi-arch images - uses: docker/build-push-action@v6 - with: - file: ./dockerfiles/Dockerfile - context: . - platforms: linux/amd64 - target: debug - provenance: false - push: false - load: false - - pr-image-tests-classic-docker-build: - name: PR - Classic docker build test - runs-on: ubuntu-latest - permissions: - contents: read - steps: - - name: Checkout code - uses: actions/checkout@v5 - - - name: Build the classic test image - # We only want to confirm it builds with classic mode, nothing else - run: | - docker build -f ./dockerfiles/Dockerfile . 
- env: - # Ensure we disable buildkit - DOCKER_BUILDKIT: 0 - shell: bash - pr-image-tests-build-windows-images: - name: PR - Docker windows build test, windows 2022 and 2025 - runs-on: windows-${{ matrix.windows-base-version }} - strategy: - fail-fast: true - matrix: - windows-base-version: - # https://github.com/fluent/fluent-bit/blob/1d366594a889624ec3003819fe18588aac3f17cd/dockerfiles/Dockerfile.windows#L3 - - '2022' - - '2025' - permissions: - contents: read - steps: - - name: Checkout repository - uses: actions/checkout@v5 - - - name: Extract metadata from Github - id: meta - uses: docker/metadata-action@v5 - with: - images: ${{ github.repository }}/pr-${{ github.event.pull_request.number }} - tags: | - type=sha - flavor: | - suffix=-windows-${{ matrix.windows-base-version }} - - - name: Build the windows images - id: build - run: | - docker build -t ${{ steps.meta.outputs.tags }} --build-arg WINDOWS_VERSION=ltsc${{ matrix.windows-base-version }} -f ./dockerfiles/Dockerfile.windows . 
- - - name: Sanity check it runs - # We do this for a simple check of dependencies - run: | - docker run --rm -t ${{ steps.meta.outputs.tags }} --help - shell: bash - diff --git a/.github/workflows/pr-install-script.yaml b/.github/workflows/pr-install-script.yaml deleted file mode 100644 index f284ea7c0f6..00000000000 --- a/.github/workflows/pr-install-script.yaml +++ /dev/null @@ -1,29 +0,0 @@ -name: Test install script for all targets -on: - pull_request: - types: - - opened - - reopened - - synchronize - paths: - - 'packaging/test-release-packages.sh' - - 'install.sh' - - workflow_dispatch: -jobs: - test-install-script: - name: Run install tests - runs-on: ubuntu-latest - permissions: - contents: read - timeout-minutes: 30 - steps: - - name: Checkout Fluent Bit code - uses: actions/checkout@v5 - - - name: Run install tests - run: | - ./packaging/test-release-packages.sh - shell: bash - env: - INSTALL_SCRIPT: ./install.sh diff --git a/.github/workflows/pr-integration-test.yaml b/.github/workflows/pr-integration-test.yaml deleted file mode 100644 index 2b18b1c67b1..00000000000 --- a/.github/workflows/pr-integration-test.yaml +++ /dev/null @@ -1,69 +0,0 @@ -name: Build and run integration tests for PR -on: - # We need write token for upload to GHCR and we are protecting with labels too. - pull_request_target: - branches: - - master - types: - - labeled - - opened - - reopened - - synchronize - -jobs: - pr-integration-test-build: - name: PR - integration build - # We only need to test this once as the rest are chained from it. 
- if: contains(github.event.pull_request.labels.*.name, 'ok-to-test') - uses: ./.github/workflows/call-integration-image-build.yaml - with: - ref: ${{ github.event.pull_request.head.sha }} - registry: ghcr.io - username: ${{ github.actor }} - image: ${{ github.repository }}/pr-${{ github.event.number }} - image-tag: ${{ github.sha }} - environment: integration - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - - pr-integration-test-build-complete: - name: PR - integration build complete - runs-on: ubuntu-latest - needs: - - pr-integration-test-build - steps: - - uses: actions-ecosystem/action-add-labels@v1 - name: Label the PR - with: - labels: ci/integration-docker-ok - github_token: ${{ secrets.GITHUB_TOKEN }} - number: ${{ github.event.pull_request.number }} - - pr-integration-test-run-integration: - name: PR - K8S integration test - needs: - - pr-integration-test-build - uses: ./.github/workflows/call-run-integration-test.yaml - with: - image_name: ghcr.io/${{ github.repository }}/pr-${{ github.event.pull_request.number }} - image_tag: ${{ github.sha }} - secrets: - opensearch_aws_access_id: ${{ secrets.OPENSEARCH_AWS_ACCESS_ID }} - opensearch_aws_secret_key: ${{ secrets.OPENSEARCH_AWS_SECRET_KEY }} - opensearch_admin_password: ${{ secrets.OPENSEARCH_ADMIN_PASSWORD }} - terraform_api_token: ${{ secrets.TF_API_TOKEN }} - gcp-service-account-key: ${{ secrets.GCP_SA_KEY }} - - pr-integration-test-run-integration-post-label: - name: PR - integration test complete - runs-on: ubuntu-latest - needs: - - pr-integration-test-run-integration - steps: - - uses: actions-ecosystem/action-add-labels@v1 - name: Label the PR - with: - labels: ci/integration-test-ok - github_token: ${{ secrets.GITHUB_TOKEN }} - number: ${{ github.event.pull_request.number }} - repo: fluent/fluent-bit diff --git a/.github/workflows/pr-labels.yaml b/.github/workflows/pr-labels.yaml deleted file mode 100644 index f8139304799..00000000000 --- a/.github/workflows/pr-labels.yaml +++ /dev/null @@ -1,17 
+0,0 @@ -name: 'Pull requests labels' -on: - pull_request_target: - types: - - opened -jobs: - apply-default-labels: - name: PR - apply default labels - runs-on: ubuntu-latest - steps: - - uses: actions-ecosystem/action-add-labels@v1 - name: Label the PR with 'docs-required' by default. - with: - labels: docs-required - github_token: ${{ secrets.GITHUB_TOKEN }} - number: ${{ github.event.pull_request.number }} - repo: fluent/fluent-bit diff --git a/.github/workflows/pr-lint.yaml b/.github/workflows/pr-lint.yaml deleted file mode 100644 index 28c07a6ca98..00000000000 --- a/.github/workflows/pr-lint.yaml +++ /dev/null @@ -1,36 +0,0 @@ -name: Lint PRs -on: - pull_request: - workflow_dispatch: - -jobs: - hadolint-pr: - runs-on: ubuntu-latest - name: PR - Hadolint - steps: - - uses: actions/checkout@v5 - # Ignores do not work until https://github.com/reviewdog/action-hadolint/issues/35 is resolved - - uses: reviewdog/action-hadolint@v1 - with: - exclude: | - packaging/testing/smoke/packages/Dockerfile.* - - shellcheck-pr: - runs-on: ubuntu-latest - name: PR - Shellcheck - steps: - - uses: actions/checkout@v5 - - uses: ludeeus/action-shellcheck@master - with: - ignore_paths: cmake/sanitizers-cmake lib plugins tests - - actionlint-pr: - runs-on: ubuntu-latest - name: PR - Actionlint - steps: - - uses: actions/checkout@v5 - - run: | - echo "::add-matcher::.github/actionlint-matcher.json" - bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash) - ./actionlint -color -shellcheck= - shell: bash diff --git a/.github/workflows/pr-package-tests.yaml b/.github/workflows/pr-package-tests.yaml deleted file mode 100644 index 42d9065b918..00000000000 --- a/.github/workflows/pr-package-tests.yaml +++ /dev/null @@ -1,112 +0,0 @@ -name: PR - packaging tests run on-demand via label -on: - pull_request: - types: - - labeled - - opened - - reopened - - synchronize - branches: - - master - -# Cancel any running builds on push -concurrency: - group: ${{
github.ref }} - cancel-in-progress: true - -jobs: - # This job provides this metadata for the other jobs to use. - pr-package-test-build-get-meta: - # This is a long test to run so only on-demand for certain PRs - if: contains(github.event.pull_request.labels.*.name, 'ok-package-test') - name: Get metadata to add to build - runs-on: ubuntu-latest - outputs: - date: ${{ steps.date.outputs.date }} - permissions: - contents: none - steps: - # For cron builds, i.e. nightly, we provide date and time as extra parameter to distinguish them. - - name: Get current date - id: date - run: echo "date=$(date '+%Y-%m-%d-%H_%M_%S')" >> $GITHUB_OUTPUT - - - name: Debug event output - uses: hmarr/debug-action@v3 - - pr-container-builds: - name: PR - container builds - needs: - - pr-package-test-build-get-meta - - pr-package-test-build-generate-matrix - uses: ./.github/workflows/call-build-images.yaml - with: - version: pr-${{ github.event.number }} - ref: ${{ github.ref }} - registry: ghcr.io - username: ${{ github.actor }} - image: ${{ github.repository }}/pr - unstable: ${{ needs.pr-package-test-build-get-meta.outputs.date }} - # We do not push as forks cannot get a token with the right permissions - push: false - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - cosign_private_key: ${{ secrets.COSIGN_PRIVATE_KEY }} - cosign_private_key_password: ${{ secrets.COSIGN_PASSWORD }} - - pr-package-test-build-generate-matrix: - name: PR - packages build matrix - needs: - - pr-package-test-build-get-meta - runs-on: ubuntu-latest - outputs: - build-matrix: ${{ steps.set-matrix.outputs.build-matrix }} - permissions: - contents: read - steps: - - name: Checkout repository, always latest for action - uses: actions/checkout@v5 - - # Set up the list of target to build so we can pass the JSON to the reusable job - - uses: ./.github/actions/generate-package-build-matrix - id: set-matrix - with: - ref: master - - pr-package-test-build-packages: - name: PR - packages build Linux - needs: - - 
pr-package-test-build-get-meta - - pr-package-test-build-generate-matrix - uses: ./.github/workflows/call-build-linux-packages.yaml - with: - version: pr-${{ github.event.number }} - ref: ${{ github.ref }} - build_matrix: ${{ needs.pr-package-test-build-generate-matrix.outputs.build-matrix }} - unstable: ${{ needs.pr-package-test-build-get-meta.outputs.date }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - - pr-package-test-build-windows-package: - name: PR - packages build Windows - needs: - - pr-package-test-build-get-meta - uses: ./.github/workflows/call-build-windows.yaml - with: - version: pr-${{ github.event.number }} - ref: ${{ github.ref }} - unstable: ${{ needs.pr-package-test-build-get-meta.outputs.date }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - - pr-package-test-build-macos-package: - name: PR - packages build MacOS - needs: - - pr-package-test-build-get-meta - uses: ./.github/workflows/call-build-macos.yaml - with: - version: pr-${{ github.event.number }} - ref: ${{ github.ref }} - unstable: ${{ needs.pr-package-test-build-get-meta.outputs.date }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/pr-perf-test.yaml b/.github/workflows/pr-perf-test.yaml deleted file mode 100644 index f5867840edb..00000000000 --- a/.github/workflows/pr-perf-test.yaml +++ /dev/null @@ -1,97 +0,0 @@ -name: Build and run performance tests for PR -on: - # We want to run on forks as we protect with a label. - # Without this we have no secrets to pass. - pull_request_target: - branches: - - master - types: - - labeled - -jobs: - - pr-perf-test-run: - # We only need to test this once as the rest are chained from it. 
- if: contains(github.event.pull_request.labels.*.name, 'ok-to-performance-test') - uses: fluent/fluent-bit-ci/.github/workflows/call-run-performance-test.yaml@main - with: - vm-name: fb-perf-test-pr-${{ github.event.number }} - git-branch: ${{ github.head_ref }} - test-directory: examples/perf_test - duration: 30 - service: fb-delta - secrets: - service-account: ${{ secrets.GCP_SA_KEY }} - - pr-perf-test-upload: - name: PR - performance test upload - # Label check from previous - runs-on: ubuntu-latest - needs: - - pr-perf-test-run - permissions: - pull-requests: write - steps: - - uses: actions/download-artifact@v6 - - - name: Upload plots to CML - run: | - docker pull -q dvcorg/cml-py3:latest - for FILE in *.png; do - echo "Handling $FILE" - docker run --rm -t \ - -v $PWD:/output:rw \ - dvcorg/cml-py3:latest \ - /bin/sh -c "cml-publish /output/$FILE --driver=github --repo=${{ github.repository }} --md --title=$FILE >> /output/report.md" - done - shell: bash - working-directory: output - - - name: Dump markdown to variable - id: report - run: | - file_test_newlines="" - while read line - do - file_test_newlines+=" $line" - done < report.md - echo "report=$file_test_newlines" >> $GITHUB_OUTPUT - shell: bash - working-directory: output - - - uses: actions/github-script@v8 - if: github.event_name == 'pull_request' - env: - REPORT: "Plots\n${{ steps.report.outputs.report }}" - with: - github-token: ${{ secrets.GITHUB_TOKEN }} - script: | - const output = `#### Performance Test - Run URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} -
<details><summary>Show Performance Results</summary> - ${process.env.REPORT} - </details>
- *Pushed by: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`; - github.rest.issues.createComment({ - issue_number: context.issue.number, - owner: context.repo.owner, - repo: context.repo.repo, - body: output - }) - - pr-perf-test-complete: - # Only runs on success - name: PR - performance test complete - # Label check from previous - runs-on: ubuntu-latest - needs: - - pr-perf-test-run - permissions: - pull-requests: write - steps: - - uses: actions-ecosystem/action-add-labels@v1 - name: Label the PR - with: - labels: ci/performance-test-ok - github_token: ${{ secrets.GITHUB_TOKEN }} - number: ${{ github.event.pull_request.number }} diff --git a/.github/workflows/pr-windows-build.yaml b/.github/workflows/pr-windows-build.yaml deleted file mode 100644 index 110d85fda64..00000000000 --- a/.github/workflows/pr-windows-build.yaml +++ /dev/null @@ -1,51 +0,0 @@ -name: PR - Windows checks - -# -# Test PRs on Windows -# -# This won't run automatically on PRs from untrusted repos, it must be approved -# manually. If PR authors want to run it themselves, they should enable running -# actions on their fork, then invoke it on their branch via their forked repo's -# Actions tab. -# - -on: - # Enable invocation via Github repo Actions tab. Having this in the repo - # allows people with github forks to run this job on their own branches to - # build Windows branches conveniently. See DEVELOPER_GUIDE.md. 
- workflow_dispatch: - - pull_request: - # Limit to just those changes that 'might' affect Windows for automated builds - # We can always do a manual build for a branch - paths: - - '**.h' - - '**.c' - - '**.windows' - - './conf/**' - - './cmake/**' - types: - - opened - - reopened - - synchronize - -jobs: - pr-windows-build: - uses: ./.github/workflows/call-build-windows.yaml - with: - version: ${{ github.sha }} - ref: ${{ github.sha }} - environment: pr - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - - run-windows-unit-tests: - needs: - - pr-windows-build - uses: ./.github/workflows/call-windows-unit-tests.yaml - with: - version: ${{ github.sha }} - ref: ${{ github.sha }} - environment: pr - secrets: - token: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/resources/auto-build-test-workflow.png b/.github/workflows/resources/auto-build-test-workflow.png deleted file mode 100644 index 40d3dfce3a5..00000000000 Binary files a/.github/workflows/resources/auto-build-test-workflow.png and /dev/null differ diff --git a/.github/workflows/resources/release-approval-per-job.png b/.github/workflows/resources/release-approval-per-job.png deleted file mode 100644 index 0f30245c1ca..00000000000 Binary files a/.github/workflows/resources/release-approval-per-job.png and /dev/null differ diff --git a/.github/workflows/resources/release-approval.png b/.github/workflows/resources/release-approval.png deleted file mode 100644 index 16a5570cf3e..00000000000 Binary files a/.github/workflows/resources/release-approval.png and /dev/null differ diff --git a/.github/workflows/resources/release-from-staging-workflow-incorrect-version.png b/.github/workflows/resources/release-from-staging-workflow-incorrect-version.png deleted file mode 100644 index 3708885bd04..00000000000 Binary files a/.github/workflows/resources/release-from-staging-workflow-incorrect-version.png and /dev/null differ diff --git a/.github/workflows/resources/release-from-staging-workflow.png 
b/.github/workflows/resources/release-from-staging-workflow.png deleted file mode 100644 index c6ea45aebeb..00000000000 Binary files a/.github/workflows/resources/release-from-staging-workflow.png and /dev/null differ diff --git a/.github/workflows/resources/release-version-failure.png b/.github/workflows/resources/release-version-failure.png deleted file mode 100644 index a96698ccb7c..00000000000 Binary files a/.github/workflows/resources/release-version-failure.png and /dev/null differ diff --git a/.github/workflows/skipped-unit-tests.yaml b/.github/workflows/skipped-unit-tests.yaml deleted file mode 100644 index 2fe682091c8..00000000000 --- a/.github/workflows/skipped-unit-tests.yaml +++ /dev/null @@ -1,20 +0,0 @@ -# For skipped checks that are required to merge we trigger a fake job that succeeds. -# https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/troubleshooting-required-status-checks#handling-skipped-but-required-checks -name: Run unit tests -on: - pull_request: - paths: - - '.github/**' - - 'dockerfiles/**' - - 'docker_compose/**' - - 'packaging/**' - - '.gitignore' - - 'appveyor.yml' - - 'examples/**' - -jobs: - run-all-unit-tests: - runs-on: ubuntu-latest - name: Unit tests (matrix) - steps: - - run: echo "No unit tests required" \ No newline at end of file diff --git a/.github/workflows/staging-build.yaml b/.github/workflows/staging-build.yaml deleted file mode 100644 index f5d18013b51..00000000000 --- a/.github/workflows/staging-build.yaml +++ /dev/null @@ -1,179 +0,0 @@ ---- -name: Deploy to staging - -on: - push: - tags: - - '*' - - workflow_dispatch: - inputs: - version: - description: Version of Fluent Bit to build - required: true - default: master - target: - description: Only build a specific Linux target, intended for debug/test/quick builds only. 
- required: false - default: "" - ignore_failing_targets: - description: Optionally ignore any failing builds in the matrix and continue. - type: boolean - required: false - default: false - -# We do not want a new staging build to run whilst we are releasing the current staging build. -# We also do not want multiples to run for the same version. -concurrency: staging-build-release - -jobs: - - # This job strips off the `v` at the start of any tag provided. - # It then provides this metadata for the other jobs to use. - staging-build-get-meta: - name: Get metadata to build - runs-on: ubuntu-latest - outputs: - version: ${{ steps.formatted_version.outputs.replaced }} - steps: - - - run: | - echo "Version: ${{ inputs.version || github.ref_name }}" - shell: bash - - # This step is to consolidate the three different triggers into a single "version" - # 1. If manual dispatch - use the version provided. - # 2. If cron/regular build - use master. - # 3. If tag trigger, use that tag. - - name: Get the version - id: get_version - run: | - VERSION="${INPUT_VERSION}" - if [ -z "${VERSION}" ]; then - echo "Defaulting to master" - VERSION=master - fi - echo "VERSION=$VERSION" >> $GITHUB_OUTPUT - shell: bash - env: - # Use the dispatch variable in preference; if empty, use the context ref_name which should - # only ever be a tag or the master branch for cron builds. - INPUT_VERSION: ${{ inputs.version || github.ref_name }} - - # Strip the 'v' prefix for tags.
- - uses: frabert/replace-string-action@v2.5 - id: formatted_version - with: - pattern: '[v]*(.*)$' - string: "${{ steps.get_version.outputs.VERSION }}" - replace-with: '$1' - flags: 'g' - - staging-build-images: - needs: staging-build-get-meta - uses: ./.github/workflows/call-build-images.yaml - with: - version: ${{ needs.staging-build-get-meta.outputs.version }} - ref: ${{ inputs.version || github.ref_name }} - registry: ghcr.io - username: ${{ github.actor }} - image: ${{ github.repository }}/staging - environment: staging - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - cosign_private_key: ${{ secrets.COSIGN_PRIVATE_KEY }} - cosign_private_key_password: ${{ secrets.COSIGN_PASSWORD }} - - staging-build-upload-schema-s3: - needs: - - staging-build-get-meta - - staging-build-images - runs-on: ubuntu-latest - environment: staging - steps: - - name: Download the schema generated by call-build-images - # We may have no schema so ignore that failure - continue-on-error: true - uses: actions/download-artifact@v6 - with: - name: fluent-bit-schema-${{ needs.staging-build-get-meta.outputs.version }} - path: artifacts/ - - - name: Display structure of downloaded files - run: | - ls -R artifacts/ - shell: bash - - - name: Push schema to S3 bucket - # We may have no schema so ignore that failure - continue-on-error: true - run: | - aws s3 sync "artifacts/" "s3://${AWS_S3_BUCKET}/${DEST_DIR}" --no-progress - env: - DEST_DIR: "${{ needs.staging-build-get-meta.outputs.version }}/" - AWS_REGION: "us-east-1" - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET_STAGING }} - - staging-build-generate-matrix: - name: Staging build matrix - runs-on: ubuntu-latest - outputs: - build-matrix: ${{ steps.set-matrix.outputs.build-matrix }} - steps: - - name: Checkout repository - uses: actions/checkout@v5 - # Set up the list of target to build so we can pass the JSON to the reusable 
job - - uses: ./.github/actions/generate-package-build-matrix - id: set-matrix - with: - ref: ${{ inputs.version || github.ref_name }} - target: ${{ inputs.target || '' }} - - staging-build-packages: - needs: - - staging-build-get-meta - - staging-build-generate-matrix - uses: ./.github/workflows/call-build-linux-packages.yaml - with: - version: ${{ needs.staging-build-get-meta.outputs.version }} - ref: ${{ inputs.version || github.ref_name }} - build_matrix: ${{ needs.staging-build-generate-matrix.outputs.build-matrix }} - environment: staging - ignore_failing_targets: ${{ inputs.ignore_failing_targets || false }} - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - bucket: ${{ secrets.AWS_S3_BUCKET_STAGING }} - access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }} - secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }} - gpg_private_key_passphrase: ${{ secrets.GPG_PRIVATE_KEY_PASSPHRASE }} - - staging-build-windows-packages: - needs: - - staging-build-get-meta - uses: ./.github/workflows/call-build-windows.yaml - with: - version: ${{ needs.staging-build-get-meta.outputs.version }} - ref: ${{ inputs.version || github.ref_name }} - environment: staging - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - bucket: ${{ secrets.AWS_S3_BUCKET_STAGING }} - access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }} - secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - - staging-build-macos-packages: - needs: - - staging-build-get-meta - uses: ./.github/workflows/call-build-macos.yaml - with: - version: ${{ needs.staging-build-get-meta.outputs.version }} - ref: ${{ inputs.version || github.ref_name }} - environment: staging - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - bucket: ${{ secrets.AWS_S3_BUCKET_STAGING }} - access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }} - secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} diff --git a/.github/workflows/staging-release.yaml b/.github/workflows/staging-release.yaml deleted file mode 100644 
index 34d241a01e1..00000000000 --- a/.github/workflows/staging-release.yaml +++ /dev/null @@ -1,1141 +0,0 @@ ---- -name: Release from staging - -# This is only expected to be invoked on-demand by a specific user. -on: - workflow_dispatch: - inputs: - version: - type: string - description: The version we want to release from staging, ensure this is numeric without the v prefix for the tag. - required: true - docker-image: - type: string - description: Optionally override the image name to push to on Docker Hub. - default: fluent/fluent-bit - required: false - github-image: - type: string - description: Optionally override the image name to push to on Github Container Registry. - default: fluent/fluent-bit - required: false - -# We do not want a new staging build to run whilst we are releasing the current staging build. -# We also do not want multiples to run for the same version. -concurrency: staging-build-release - -env: - STAGING_IMAGE_NAME: ghcr.io/${{ github.repository }}/staging - -jobs: - - staging-release-version-check: - name: Check staging release matches - environment: release # required to get bucket name - runs-on: ubuntu-latest - outputs: - major-version: ${{ steps.get_major_version.outputs.value }} - permissions: - contents: read - steps: - - name: Get the version on staging - run: | - curl --fail -LO "$AWS_URL/latest-version.txt" - cat latest-version.txt - STAGING_VERSION=$(cat latest-version.txt) - [[ "$STAGING_VERSION" != "$RELEASE_VERSION" ]] && echo "Latest version mismatch: $STAGING_VERSION != $RELEASE_VERSION" && exit 1 - # Must end in something that exits 0 - echo "Successfully confirmed version is as expected: $STAGING_VERSION" - shell: bash - env: - AWS_URL: https://${{ secrets.AWS_S3_BUCKET_STAGING }}.s3.amazonaws.com - RELEASE_VERSION: ${{ github.event.inputs.version }} - - # Get the major version, i.e. 1.9.3 --> 1.9, or just return the passed in version. 
- - name: Convert to major version format - id: get_major_version - run: | - MAJOR_VERSION="$RELEASE_VERSION" - if [[ $RELEASE_VERSION =~ ^[0-9]+\.[0-9]+ ]]; then - MAJOR_VERSION="${BASH_REMATCH[0]}" - fi - echo "value=$MAJOR_VERSION" >> $GITHUB_OUTPUT - shell: bash - env: - RELEASE_VERSION: ${{ github.event.inputs.version }} - - - name: Checkout repository - uses: actions/checkout@v5 - - staging-release-generate-package-matrix: - name: Get package matrix - runs-on: ubuntu-latest - outputs: - deb-build-matrix: ${{ steps.get-matrix.outputs.deb-build-matrix }} - rpm-build-matrix: ${{ steps.get-matrix.outputs.rpm-build-matrix }} - steps: - - name: Checkout repository - uses: actions/checkout@v5 - - - name: Setup runner - run: | - sudo apt-get update - sudo apt-get install -y jq - shell: bash - - # Cope with 1.9 as well as 2.0 - - uses: ./.github/actions/generate-package-build-matrix - id: get-matrix - with: - ref: v${{ inputs.version }} - - # Now annotate with whether it is Yum or Apt based - - # 1. Take packages from the staging bucket - # 2. Sign them with the release GPG key - # 3. Also take existing release packages from the release bucket. - # 4. Create a full repo configuration using the existing releases as well. - # 5. Upload to release bucket. - # Note we could resign all packages as well potentially if we wanted to update the key. 
- staging-release-yum-packages: - name: S3 - update YUM packages bucket - runs-on: ubuntu-22.04 # no createrepo on Ubuntu 20.04 - environment: release - needs: - - staging-release-version-check - - staging-release-generate-package-matrix - permissions: - contents: read - strategy: - matrix: ${{ fromJSON(needs.staging-release-generate-package-matrix.outputs.rpm-build-matrix) }} - fail-fast: false - steps: - - name: Checkout code - uses: actions/checkout@v5 - - - name: Setup runner - run: | - sudo apt-get update - sudo apt-get install -y createrepo-c rpm - shell: bash - - - name: Import GPG key for signing - id: import_gpg - uses: crazy-max/ghaction-import-gpg@v6 - with: - gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }} - passphrase: ${{ secrets.GPG_PRIVATE_KEY_PASSPHRASE }} - - # Download the current release bucket - # Add everything from staging - # Sign and set up metadata for it all - # Upload to release bucket - - - name: Sync packages from buckets on S3 - run: | - mkdir -p "packaging/releases/$DISTRO" - aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET_RELEASE }}/$DISTRO" "packaging/releases/$DISTRO" --no-progress - aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET_STAGING }}/$DISTRO" "packaging/releases/$DISTRO" --no-progress - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - DISTRO: ${{ matrix.distro }} - shell: bash - - - name: GPG set up keys for signing - run: | - gpg --export -a "${{ steps.import_gpg.outputs.name }}" > /tmp/fluentbit.key - rpm --import /tmp/fluentbit.key - shell: bash - - - name: Update repo info and remove any staging details - run: | - packaging/update-yum-repo.sh - env: - GPG_KEY: ${{ steps.import_gpg.outputs.name }} - AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET_RELEASE }} - VERSION: ${{ github.event.inputs.version }} - BASE_PATH: "packaging/releases" - RPM_REPO: ${{ matrix.distro }} - shell: bash - - - name: Sync to release bucket on S3 - run: | - aws 
s3 sync "packaging/releases/$DISTRO" "s3://${{ secrets.AWS_S3_BUCKET_RELEASE }}/$DISTRO" --delete --follow-symlinks --no-progress - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - DISTRO: ${{ matrix.distro }} - shell: bash - - staging-release-apt-packages: - name: S3 - update APT packages bucket - runs-on: ubuntu-22.04 - environment: release - needs: - - staging-release-version-check - - staging-release-generate-package-matrix - permissions: - contents: read - strategy: - matrix: ${{ fromJSON(needs.staging-release-generate-package-matrix.outputs.deb-build-matrix) }} - fail-fast: false - steps: - - name: Checkout code - uses: actions/checkout@v5 - - - name: Setup runner - run: | - sudo apt-get update - sudo apt-get install -y aptly debsigs distro-info rsync - shell: bash - - - name: Convert version to codename - id: get_codename - run: | - CODENAME="$DISTRO" - if [[ "$DISTRO" == ubuntu* ]]; then - echo "Converting Ubuntu version to codename" - UBUNTU_NAME=$(grep "${DISTRO##*/} LTS" /usr/share/distro-info/ubuntu.csv|cut -d ',' -f3) - echo "Got Ubuntu codename: $UBUNTU_NAME" - CODENAME="ubuntu/$UBUNTU_NAME" - fi - echo "Using codename: $CODENAME" - echo "CODENAME=$CODENAME" >> $GITHUB_OUTPUT - shell: bash - env: - DISTRO: ${{ matrix.distro }} - - - name: Import GPG key for signing - id: import_gpg - uses: crazy-max/ghaction-import-gpg@v6 - with: - gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }} - passphrase: ${{ secrets.GPG_PRIVATE_KEY_PASSPHRASE }} - - - name: Sync packages from buckets on S3 - run: | - mkdir -p "packaging/releases/$CODENAME" - aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET_RELEASE }}/$CODENAME" "packaging/releases/$CODENAME" --no-progress - aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET_STAGING }}/$CODENAME" "packaging/releases/$CODENAME" --no-progress - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ 
secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - CODENAME: ${{ steps.get_codename.outputs.CODENAME }} - shell: bash - - - name: Update repo info and remove any staging details - run: | - packaging/update-apt-repo.sh - env: - GPG_KEY: ${{ steps.import_gpg.outputs.name }} - AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET_RELEASE }} - VERSION: ${{ github.event.inputs.version }} - BASE_PATH: "packaging/releases" - DEB_REPO: ${{ steps.get_codename.outputs.CODENAME }} - shell: bash - - - name: Sync to release bucket on S3 - run: | - aws s3 sync "packaging/releases/$CODENAME" "s3://${{ secrets.AWS_S3_BUCKET_RELEASE }}/$CODENAME" --delete --follow-symlinks --no-progress - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - CODENAME: ${{ steps.get_codename.outputs.CODENAME }} - shell: bash - - staging-release-update-non-linux-s3: - name: Update Windows and macOS packages - runs-on: ubuntu-22.04 - environment: release - needs: - - staging-release-version-check - permissions: - contents: none - strategy: - matrix: - distro: - - macos - - windows - fail-fast: false - steps: - - name: Sync packages from buckets on S3 - run: | - mkdir -p "packaging/releases/$DISTRO" - aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET_RELEASE }}/$DISTRO" "packaging/releases/$DISTRO" --no-progress - aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET_STAGING }}/${{ github.event.inputs.version }}/$DISTRO" "packaging/releases/$DISTRO" --no-progress - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - DISTRO: ${{ matrix.distro }} - shell: bash - - - name: Sync to release bucket on S3 - run: | - aws s3 sync "packaging/releases/$DISTRO" "s3://${{ secrets.AWS_S3_BUCKET_RELEASE }}/$DISTRO" --delete --follow-symlinks --no-progress - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ 
secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - DISTRO: ${{ matrix.distro }} - shell: bash - - staging-release-update-base-s3: - name: Update top-level bucket info - runs-on: ubuntu-22.04 - environment: release - needs: - - staging-release-apt-packages - - staging-release-yum-packages - permissions: - contents: none - steps: - - name: Import GPG key for signing - id: import_gpg - uses: crazy-max/ghaction-import-gpg@v6 - with: - gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }} - passphrase: ${{ secrets.GPG_PRIVATE_KEY_PASSPHRASE }} - - - name: GPG public key - run: | - gpg --export -a "${{ steps.import_gpg.outputs.name }}" > ./fluentbit.key - aws s3 cp ./fluentbit.key s3://${{ secrets.AWS_S3_BUCKET_RELEASE }}/fluentbit.key --no-progress - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - shell: bash - - - name: JSON schema - continue-on-error: true - run: | - aws s3 sync "s3://${AWS_STAGING_S3_BUCKET}/${VERSION}" "s3://${AWS_RELEASE_S3_BUCKET}/${VERSION}" --no-progress --exclude "*" --include "*.json" - env: - VERSION: ${{ github.event.inputs.version }} - AWS_REGION: "us-east-1" - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_STAGING_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET_STAGING }} - AWS_RELEASE_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET_RELEASE }} - shell: bash - - staging-release-source-s3: - name: S3 - update source bucket - runs-on: ubuntu-22.04 - environment: release - needs: - - staging-release-version-check - permissions: - contents: read - steps: - - name: Checkout code - uses: actions/checkout@v5 - - - name: Sync packages from buckets on S3 - run: | - mkdir -p release staging - aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET_RELEASE_SOURCES }}" release/ --no-progress - aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET_STAGING }}/source/" staging/ --no-progress - env: - AWS_ACCESS_KEY_ID: ${{ 
secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - shell: bash - - - name: Move components from staging and setup - run: | - ./packaging/update-source-packages.sh - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - SOURCE_DIR: staging - WINDOWS_SOURCE_DIR: appveyor - TARGET_DIR: release - VERSION: ${{ github.event.inputs.version }} - MAJOR_VERSION: ${{ needs.staging-release-version-check.outputs.major-version }} - shell: bash - - - name: Sync to bucket on S3 - run: | - aws s3 sync release/ "s3://${{ secrets.AWS_S3_BUCKET_RELEASE_SOURCES }}" --delete --follow-symlinks --no-progress - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - shell: bash - - # Simple skopeo copy jobs to transfer image from staging to release registry with optional GPG key signing. 
- # Unfortunately skopeo currently does not support Cosign: https://github.com/containers/skopeo/issues/1533 - staging-release-images: - name: Release ${{ matrix.tag }} Linux container images - runs-on: ubuntu-latest - needs: - - staging-release-version-check - environment: release - strategy: - fail-fast: false - matrix: - # All the explicit tags we want to release - tag: [ - "${{ github.event.inputs.version }}", - "${{ needs.staging-release-version-check.outputs.major-version }}", - "${{ github.event.inputs.version }}-debug", - "${{ needs.staging-release-version-check.outputs.major-version }}-debug", - ] - permissions: - packages: write - steps: - # Primarily because the skopeo errors are hard to parse and non-obvious - - name: Check the image exists - run: | - docker pull "$STAGING_IMAGE_NAME:$TAG" - env: - TAG: ${{ matrix.tag }} - shell: bash - - # Use the container to prevent any rootless issues and we do not need to use GPG signing as DockerHub does not support it. - - name: Promote container images from staging to Dockerhub - run: | - docker run --rm \ - quay.io/skopeo/stable:latest \ - copy \ - --all \ - --retry-times 10 \ - --src-no-creds \ - --dest-creds "$RELEASE_CREDS" \ - "docker://$STAGING_IMAGE_NAME:$TAG" \ - "docker://$RELEASE_IMAGE_NAME:$TAG" - env: - RELEASE_IMAGE_NAME: docker.io/${{ github.event.inputs.docker-image || secrets.DOCKERHUB_ORGANIZATION }} - RELEASE_CREDS: ${{ secrets.DOCKERHUB_USERNAME }}:${{ secrets.DOCKERHUB_TOKEN }} - TAG: ${{ matrix.tag }} - shell: bash - - - name: Promote container images from staging to GHCR.io - if: ${{ ! 
startsWith(matrix.tag, 'latest') }} - run: | - docker run --rm \ - quay.io/skopeo/stable:latest \ - copy \ - --all \ - --retry-times 10 \ - --src-no-creds \ - --dest-creds "$RELEASE_CREDS" \ - "docker://$STAGING_IMAGE_NAME:$TAG" \ - "docker://$RELEASE_IMAGE_NAME:$TAG" - env: - RELEASE_IMAGE_NAME: ghcr.io/${{ github.event.inputs.github-image || github.repository }} - RELEASE_CREDS: ${{ github.actor }}:${{ secrets.GITHUB_TOKEN }} - TAG: ${{ matrix.tag }} - shell: bash - - # Part of resolution for: https://github.com/fluent/fluent-bit/issues/7748 - # More recent build-push-actions may mean legacy format is not preserved so we provide arch-specific tags just in case - staging-release-images-arch-specific-legacy-tags: - name: Release ${{ matrix.arch }} legacy format Linux container images - runs-on: ubuntu-latest - needs: - - staging-release-images - environment: release - strategy: - fail-fast: false - matrix: - arch: - - amd64 - - arm64 - - arm/v7 - permissions: - packages: write - env: - RELEASE_IMAGE_NAME: ${{ github.event.inputs.docker-image || secrets.DOCKERHUB_ORGANIZATION }} - RELEASE_TAG: ${{ github.event.inputs.version }} - steps: - - - name: Login to Docker Hub - uses: docker/login-action@v3 - with: - username: ${{ secrets.DOCKERHUB_USERNAME }} - password: ${{ secrets.DOCKERHUB_TOKEN }} - - - name: Login to GitHub Container Registry - uses: docker/login-action@v3 - with: - registry: ghcr.io - username: ${{ github.actor }} - password: ${{ secrets.GITHUB_TOKEN }} - - - name: Convert arch to tag - id: get-tag - run: | - TAG="${RELEASE_TAG}-${{ matrix.arch }}" - echo "Input value: $TAG" - TAG=${TAG/\//-} - echo "Using tag: $TAG" - echo "tag=$TAG" >> $GITHUB_OUTPUT - shell: bash - - - name: Pull release image - run: docker pull --platform='linux/${{ matrix.arch }}' "$RELEASE_IMAGE_NAME:$RELEASE_TAG" - shell: bash - - - name: Tag and push legacy format image to DockerHub - run: | - docker tag "$RELEASE_IMAGE_NAME:$RELEASE_TAG" docker.io/"$RELEASE_IMAGE_NAME:$TAG" - 
docker push docker.io/"$RELEASE_IMAGE_NAME:$TAG" - shell: bash - env: - TAG: ${{ steps.get-tag.outputs.tag }} - - - name: Tag and push legacy format image to Github Container Registry - run: | - docker tag "$RELEASE_IMAGE_NAME:$RELEASE_TAG" ghcr.io/"$RELEASE_IMAGE_NAME:$TAG" - docker push ghcr.io/"$RELEASE_IMAGE_NAME:$TAG" - shell: bash - env: - TAG: ${{ steps.get-tag.outputs.tag }} - - staging-release-images-latest-tags: - # Only update latest tags for 4.x releases - if: startsWith(github.event.inputs.version, '4.') - name: Release latest Linux container images - runs-on: ubuntu-latest - needs: - - staging-release-images - environment: release - strategy: - fail-fast: false - matrix: - tag: [ - "latest", - "latest-debug" - ] - permissions: - packages: write - steps: - # Primarily because the skopeo errors are hard to parse and non-obvious - - name: Check the image exists - run: | - docker pull "$STAGING_IMAGE_NAME:$TAG" - env: - TAG: ${{ matrix.tag }} - shell: bash - - # Use the container to prevent any rootless issues and we do not need to use GPG signing as DockerHub does not support it.
- - name: Promote container images from staging to Dockerhub - run: | - docker run --rm \ - quay.io/skopeo/stable:latest \ - copy \ - --all \ - --retry-times 10 \ - --src-no-creds \ - --dest-creds "$RELEASE_CREDS" \ - "docker://$STAGING_IMAGE_NAME:$TAG" \ - "docker://$RELEASE_IMAGE_NAME:$TAG" - env: - RELEASE_IMAGE_NAME: docker.io/${{ github.event.inputs.docker-image || secrets.DOCKERHUB_ORGANIZATION }} - RELEASE_CREDS: ${{ secrets.DOCKERHUB_USERNAME }}:${{ secrets.DOCKERHUB_TOKEN }} - TAG: ${{ matrix.tag }} - shell: bash - - - name: Promote container images from staging to GHCR.io - run: | - docker run --rm \ - quay.io/skopeo/stable:latest \ - copy \ - --all \ - --retry-times 10 \ - --src-no-creds \ - --dest-creds "$RELEASE_CREDS" \ - "docker://$STAGING_IMAGE_NAME:$TAG" \ - "docker://$RELEASE_IMAGE_NAME:$TAG" - env: - RELEASE_IMAGE_NAME: ghcr.io/${{ github.event.inputs.github-image || github.repository }} - RELEASE_CREDS: ${{ github.actor }}:${{ secrets.GITHUB_TOKEN }} - TAG: ${{ matrix.tag }} - shell: bash - - staging-release-images-windows: - name: Release Windows images - # Cannot be done by Skopeo on a Linux runner unfortunately - runs-on: ${{ matrix.runs_on }} - needs: - - staging-release-version-check - environment: release - permissions: - packages: write - strategy: - fail-fast: false - matrix: - include: - - tag: "windows-2022-${{ github.event.inputs.version }}" - runs_on: windows-latest - - tag: "windows-2025-${{ github.event.inputs.version }}" - runs_on: windows-2025 - steps: - - name: Login to GitHub Container Registry - uses: docker/login-action@v3 - with: - registry: ghcr.io - username: ${{ github.actor }} - password: ${{ secrets.GITHUB_TOKEN }} - - - name: Check the image exists - # Use manifest inspect rather than pulling the image so the Windows host version does not need to match the container image - run: | - docker manifest inspect "$STAGING_IMAGE_NAME:$TAG" - env: - TAG: ${{ matrix.tag }} - shell: bash - - # Pulling the actual image with tag 
is needed here. - - name: Promote container images from staging to GHCR.io - run: | - docker pull "$STAGING_IMAGE_NAME:$TAG" - docker tag "$STAGING_IMAGE_NAME:$TAG" "$RELEASE_IMAGE_NAME:$TAG" - docker push "$RELEASE_IMAGE_NAME:$TAG" - env: - RELEASE_IMAGE_NAME: ghcr.io/${{ github.event.inputs.github-image || github.repository }} - RELEASE_CREDS: ${{ github.actor }}:${{ secrets.GITHUB_TOKEN }} - TAG: ${{ matrix.tag }} - shell: bash - - - name: Login to Docker Hub - uses: docker/login-action@v3 - with: - username: ${{ secrets.DOCKERHUB_USERNAME }} - password: ${{ secrets.DOCKERHUB_TOKEN }} - - - name: Promote container images from staging to Dockerhub - run: | - docker tag "$STAGING_IMAGE_NAME:$TAG" "$RELEASE_IMAGE_NAME:$TAG" - docker push "$RELEASE_IMAGE_NAME:$TAG" - env: - RELEASE_IMAGE_NAME: docker.io/${{ github.event.inputs.docker-image || secrets.DOCKERHUB_ORGANIZATION }} - RELEASE_CREDS: ${{ secrets.DOCKERHUB_USERNAME }}:${{ secrets.DOCKERHUB_TOKEN }} - TAG: ${{ matrix.tag }} - shell: bash - - staging-release-images-sign: - name: Sign container image manifests - permissions: write-all - runs-on: ubuntu-latest - environment: release - needs: - - staging-release-images - env: - DH_RELEASE_IMAGE_NAME: docker.io/${{ github.event.inputs.docker-image || secrets.DOCKERHUB_ORGANIZATION }} - GHCR_RELEASE_IMAGE_NAME: ghcr.io/${{ github.event.inputs.github-image || github.repository }} - steps: - - name: Install cosign - uses: sigstore/cosign-installer@v3 - - - name: Login to Docker Hub - uses: docker/login-action@v3 - with: - username: ${{ secrets.DOCKERHUB_USERNAME }} - password: ${{ secrets.DOCKERHUB_TOKEN }} - - - name: Login to GitHub Container Registry - uses: docker/login-action@v3 - with: - registry: ghcr.io - username: ${{ github.actor }} - password: ${{ secrets.GITHUB_TOKEN }} - - - name: Cosign with a key - # Only run if we have a key defined - if: ${{ env.COSIGN_PRIVATE_KEY }} - # The key needs to cope with newlines - run: | - echo -e "${COSIGN_PRIVATE_KEY}" > 
/tmp/my_cosign.key - cosign sign --key /tmp/my_cosign.key --recursive --yes \ - -a "repo=${{ github.repository }}" \ - -a "workflow=${{ github.workflow }}" \ - -a "release=${{ github.event.inputs.version }}" \ - "$GHCR_RELEASE_IMAGE_NAME:${{ github.event.inputs.version }}" \ - "$GHCR_RELEASE_IMAGE_NAME:${{ github.event.inputs.version }}-debug" \ - "$DH_RELEASE_IMAGE_NAME:${{ github.event.inputs.version }}" \ - "$DH_RELEASE_IMAGE_NAME:${{ github.event.inputs.version }}-debug" - rm -f /tmp/my_cosign.key - shell: bash - env: - COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }} - COSIGN_PASSWORD: ${{ secrets.COSIGN_PRIVATE_KEY_PASSWORD }} # optional - - - name: Cosign keyless signing using Rekor public transparency log - # This step uses the identity token to provision an ephemeral certificate - # against the sigstore community Fulcio instance, and records it to the - # sigstore community Rekor transparency log. - # - # We use recursive signing on the manifest to cover all the images. - run: | - cosign sign --yes --recursive \ - -a "repo=${{ github.repository }}" \ - -a "workflow=${{ github.workflow }}" \ - -a "release=${{ github.event.inputs.version }}" \ - "$GHCR_RELEASE_IMAGE_NAME:${{ github.event.inputs.version }}" \ - "$GHCR_RELEASE_IMAGE_NAME:${{ github.event.inputs.version }}-debug" \ - "$DH_RELEASE_IMAGE_NAME:${{ github.event.inputs.version }}" \ - "$DH_RELEASE_IMAGE_NAME:${{ github.event.inputs.version }}-debug" - shell: bash - env: - COSIGN_EXPERIMENTAL: true - - staging-release-upload-cosign-key: - name: Upload Cosign public key for verification - needs: - - staging-release-images-sign - permissions: - contents: none - runs-on: ubuntu-latest - steps: - - name: Install cosign - uses: sigstore/cosign-installer@v2 - - - name: Get public key and add to S3 bucket - # Only run if we have a key defined - if: ${{ env.COSIGN_PRIVATE_KEY }} - # The key needs to cope with newlines - run: | - echo -e "${COSIGN_PRIVATE_KEY}" > /tmp/my_cosign.key - cosign public-key
--key /tmp/my_cosign.key > ./cosign.pub - rm -f /tmp/my_cosign.key - cat ./cosign.pub - aws s3 cp ./cosign.pub "s3://${{ secrets.AWS_S3_BUCKET_RELEASE }}/cosign.pub" --no-progress - shell: bash - env: - COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }} - COSIGN_PASSWORD: ${{ secrets.COSIGN_PRIVATE_KEY_PASSWORD }} # optional - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - - staging-release-smoke-test-packages: - name: Run package smoke tests - permissions: - contents: read - runs-on: ubuntu-latest - environment: release - needs: - - staging-release-apt-packages - - staging-release-yum-packages - steps: - - name: Checkout code - uses: actions/checkout@v5 - - - name: Test release packages - run: | - ./packaging/test-release-packages.sh - shell: bash - env: - VERSION_TO_CHECK_FOR: ${{ github.event.inputs.version }} - FLUENT_BIT_PACKAGES_URL: http://${{ secrets.AWS_S3_BUCKET_RELEASE }}.s3.amazonaws.com - FLUENT_BIT_PACKAGES_KEY: http://${{ secrets.AWS_S3_BUCKET_RELEASE }}.s3.amazonaws.com/fluentbit.key - - staging-release-smoke-test-containers: - name: Run container smoke tests - permissions: - contents: read - packages: read - runs-on: ubuntu-latest - environment: release - needs: - - staging-release-images - steps: - - name: Checkout code - uses: actions/checkout@v5 - - - name: Test containers - run: | - ./packaging/testing/smoke/container/container-smoke-test.sh - shell: bash - env: - IMAGE_TAG: ${{ github.event.inputs.version }} - - staging-release-create-release: - name: Create the Github Release once packages and containers are up - needs: - - staging-release-images - - staging-release-apt-packages - - staging-release-yum-packages - permissions: - contents: write - environment: release - runs-on: ubuntu-latest - steps: - - name: Release 2.0 - not latest - uses: softprops/action-gh-release@v2 - if: startsWith(inputs.version, '2.0') - with: - body: 
"https://fluentbit.io/announcements/v${{ inputs.version }}/" - draft: false - generate_release_notes: true - name: "Fluent Bit ${{ inputs.version }}" - tag_name: v${{ inputs.version }} - target_commitish: '2.0' - make_latest: false - - - name: Release 2.1 - not latest - uses: softprops/action-gh-release@v2 - if: startsWith(inputs.version, '2.1') - with: - body: "https://fluentbit.io/announcements/v${{ inputs.version }}/" - draft: false - generate_release_notes: true - name: "Fluent Bit ${{ inputs.version }}" - tag_name: v${{ inputs.version }} - target_commitish: '2.1' - make_latest: false - - - name: Release 3.0 - not latest - uses: softprops/action-gh-release@v2 - if: startsWith(inputs.version, '3.0') - with: - body: "https://fluentbit.io/announcements/v${{ inputs.version }}/" - draft: false - generate_release_notes: true - name: "Fluent Bit ${{ inputs.version }}" - tag_name: v${{ inputs.version }} - target_commitish: '3.0' - make_latest: false - - - name: Release 3.1 - not latest - uses: softprops/action-gh-release@v2 - if: startsWith(inputs.version, '3.1') - with: - body: "https://fluentbit.io/announcements/v${{ inputs.version }}/" - draft: false - generate_release_notes: true - name: "Fluent Bit ${{ inputs.version }}" - tag_name: v${{ inputs.version }} - target_commitish: '3.1' - make_latest: false - - - name: Release 3.2 - not latest - uses: softprops/action-gh-release@v2 - if: startsWith(inputs.version, '3.2') - with: - body: "https://fluentbit.io/announcements/v${{ inputs.version }}/" - draft: false - generate_release_notes: true - name: "Fluent Bit ${{ inputs.version }}" - tag_name: v${{ inputs.version }} - target_commitish: '3.2' - make_latest: false - - - name: Release 4.0 - not latest - uses: softprops/action-gh-release@v2 - if: startsWith(inputs.version, '4.0') - with: - body: "https://fluentbit.io/announcements/v${{ inputs.version }}/" - draft: false - generate_release_notes: true - name: "Fluent Bit ${{ inputs.version }}" - tag_name: v${{ 
inputs.version }} - make_latest: false - - - name: Release 4.1 - not latest - uses: softprops/action-gh-release@v2 - if: startsWith(inputs.version, '4.1') - with: - body: "https://fluentbit.io/announcements/v${{ inputs.version }}/" - draft: false - generate_release_notes: true - name: "Fluent Bit ${{ inputs.version }}" - tag_name: v${{ inputs.version }} - make_latest: false - - - name: Release 4.2 and latest - uses: softprops/action-gh-release@v2 - if: startsWith(inputs.version, '4.2') - with: - body: "https://fluentbit.io/announcements/v${{ inputs.version }}/" - draft: false - generate_release_notes: true - name: "Fluent Bit ${{ inputs.version }}" - tag_name: v${{ inputs.version }} - make_latest: true - - staging-release-windows-checksums: - name: Get Windows checksums for new release - runs-on: ubuntu-22.04 - environment: release - needs: - - staging-release-update-non-linux-s3 - permissions: - contents: none - outputs: - windows-exe32-hash: ${{ steps.hashes.outputs.WIN_32_EXE_HASH }} - windows-zip32-hash: ${{ steps.hashes.outputs.WIN_32_ZIP_HASH }} - windows-exe64-hash: ${{ steps.hashes.outputs.WIN_64_EXE_HASH }} - windows-zip64-hash: ${{ steps.hashes.outputs.WIN_64_ZIP_HASH }} - windows-arm-exe64-hash: ${{ steps.hashes.outputs.WIN_64_ARM_EXE_HASH }} - windows-arm-zip64-hash: ${{ steps.hashes.outputs.WIN_64_ARM_ZIP_HASH }} - steps: - - name: Sync release Windows directory to get checksums - run: - aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET_RELEASE }}/windows" ./ --exclude "*" --include "*.sha256" - shell: bash - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_REGION: "us-east-1" - - - name: Provide output for documentation PR - id: hashes - # do not fail the build for this - continue-on-error: true - run: | - ls -l - export WIN_32_EXE_HASH=$(cat "./fluent-bit-${{ inputs.version }}-win32.exe.sha256"|awk '{print $1}') - export WIN_32_ZIP_HASH=$(cat "./fluent-bit-${{ inputs.version
}}-win32.zip.sha256"|awk '{print $1}') - export WIN_64_EXE_HASH=$(cat "./fluent-bit-${{ inputs.version }}-win64.exe.sha256"|awk '{print $1}') - export WIN_64_ZIP_HASH=$(cat "./fluent-bit-${{ inputs.version }}-win64.zip.sha256"|awk '{print $1}') - if [[ -f "./fluent-bit-${{ inputs.version }}-winarm64.exe.sha256" ]]; then - export WIN_64_ARM_EXE_HASH=$(cat "./fluent-bit-${{ inputs.version }}-winarm64.exe.sha256"|awk '{print $1}') - fi - if [[ -f "./fluent-bit-${{ inputs.version }}-winarm64.zip.sha256" ]]; then - export WIN_64_ARM_ZIP_HASH=$(cat "./fluent-bit-${{ inputs.version }}-winarm64.zip.sha256"|awk '{print $1}') - fi - set | grep WIN_ - echo WIN_32_EXE_HASH="$WIN_32_EXE_HASH" >> $GITHUB_OUTPUT - echo WIN_32_ZIP_HASH="$WIN_32_ZIP_HASH" >> $GITHUB_OUTPUT - echo WIN_64_EXE_HASH="$WIN_64_EXE_HASH" >> $GITHUB_OUTPUT - echo WIN_64_ZIP_HASH="$WIN_64_ZIP_HASH" >> $GITHUB_OUTPUT - echo WIN_64_ARM_EXE_HASH="$WIN_64_ARM_EXE_HASH" >> $GITHUB_OUTPUT - echo WIN_64_ARM_ZIP_HASH="$WIN_64_ARM_ZIP_HASH" >> $GITHUB_OUTPUT - shell: bash - - staging-release-create-docs-pr: - name: Create docs updates for new release - needs: - - staging-release-images - - staging-release-windows-checksums - permissions: - contents: none - environment: release - runs-on: ubuntu-latest - steps: - - name: Release 2.0 - not latest - if: startsWith(inputs.version, '2.0') - uses: actions/checkout@v5 - with: - repository: fluent/fluent-bit-docs - ref: 2.0 - token: ${{ secrets.GH_PA_TOKEN }} - - - name: Release 2.1 - not latest - if: startsWith(inputs.version, '2.1') - uses: actions/checkout@v5 - with: - repository: fluent/fluent-bit-docs - ref: 2.1 - token: ${{ secrets.GH_PA_TOKEN }} - - - name: Release 2.2 - not latest - if: startsWith(inputs.version, '2.2') - uses: actions/checkout@v5 - with: - repository: fluent/fluent-bit-docs - ref: 2.2 - token: ${{ secrets.GH_PA_TOKEN }} - - - name: Release 3.0 - not latest - if: startsWith(inputs.version, '3.0') - uses: actions/checkout@v5 - with: - repository: 
fluent/fluent-bit-docs - ref: 3.0 - token: ${{ secrets.GH_PA_TOKEN }} - - - name: Release 3.1 - not latest - if: startsWith(inputs.version, '3.1') - uses: actions/checkout@v5 - with: - repository: fluent/fluent-bit-docs - ref: 3.1 - token: ${{ secrets.GH_PA_TOKEN }} - - - name: Release 3.2 - not latest - if: startsWith(inputs.version, '3.2') - uses: actions/checkout@v5 - with: - repository: fluent/fluent-bit-docs - token: ${{ secrets.GH_PA_TOKEN }} - ref: 3.2 - - - name: Release 4.0 - not latest - if: startsWith(inputs.version, '4.0') - uses: actions/checkout@v5 - with: - repository: fluent/fluent-bit-docs - token: ${{ secrets.GH_PA_TOKEN }} - ref: '4.0' - - - name: Release 4.1 - not latest - if: startsWith(inputs.version, '4.1') - uses: actions/checkout@v5 - with: - repository: fluent/fluent-bit-docs - token: ${{ secrets.GH_PA_TOKEN }} - - - name: Release 4.2 and latest - if: startsWith(inputs.version, '4.2') - uses: actions/checkout@v5 - with: - repository: fluent/fluent-bit-docs - token: ${{ secrets.GH_PA_TOKEN }} - - - name: Ensure we have the script we need - run: | - if [[ ! 
-f update-release-version-docs.sh ]] ; then - git checkout master -- update-release-version-docs.sh - fi - shell: bash - - - name: Update versions - # Uses https://github.com/fluent/fluent-bit-docs/blob/master/update-release-version-docs.sh - run: | - ./update-release-version-docs.sh - shell: bash - env: - NEW_VERSION: ${{ inputs.version }} - WIN_32_EXE_HASH: ${{ needs.staging-release-windows-checksums.outputs.windows-exe32-hash }} - WIN_32_ZIP_HASH: ${{ needs.staging-release-windows-checksums.outputs.windows-zip32-hash }} - WIN_64_EXE_HASH: ${{ needs.staging-release-windows-checksums.outputs.windows-exe64-hash }} - WIN_64_ZIP_HASH: ${{ needs.staging-release-windows-checksums.outputs.windows-zip64-hash }} - WIN_64_ARM_EXE_HASH: ${{ needs.staging-release-windows-checksums.outputs.windows-arm-exe64-hash }} - WIN_64_ARM_ZIP_HASH: ${{ needs.staging-release-windows-checksums.outputs.windows-arm-zip64-hash }} - - - name: Raise docs PR - id: cpr - uses: peter-evans/create-pull-request@v7 - with: - commit-message: 'release: update to v${{ inputs.version }}' - signoff: true - delete-branch: true - title: 'release: update to v${{ inputs.version }}' - # We need workflows permission so have to use the GH_PA_TOKEN - token: ${{ secrets.GH_PA_TOKEN }} - labels: ci,automerge - body: | - Update release ${{ inputs.version }} version.
- - Created by ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} - - Auto-generated by create-pull-request: https://github.com/peter-evans/create-pull-request - draft: false - - - name: Check outputs - if: ${{ steps.cpr.outputs.pull-request-number }} - run: | - echo "Pull Request Number - ${{ steps.cpr.outputs.pull-request-number }}" - echo "Pull Request URL - ${{ steps.cpr.outputs.pull-request-url }}" - shell: bash - - staging-release-create-version-update-pr: - name: Create version update PR for new release - needs: - - staging-release-create-release - permissions: - contents: write - pull-requests: write - environment: release - runs-on: ubuntu-latest - steps: - - name: Release 2.0 - if: startsWith(inputs.version, '2.0') - uses: actions/checkout@v5 - with: - ref: 2.0 - - - name: Release 2.1 - if: startsWith(inputs.version, '2.1') - uses: actions/checkout@v5 - with: - ref: 2.1 - - - name: Release 2.2 - if: startsWith(inputs.version, '2.2') - uses: actions/checkout@v5 - with: - ref: 2.2 - - - name: Release 3.0 - if: startsWith(inputs.version, '3.0') - uses: actions/checkout@v5 - with: - ref: 3.0 - - - name: Release 3.1 - if: startsWith(inputs.version, '3.1') - uses: actions/checkout@v5 - with: - ref: 3.1 - - - name: Release 3.2 - if: startsWith(inputs.version, '3.2') - uses: actions/checkout@v5 - with: - ref: 3.2 - - - name: Release 4.0 - if: startsWith(inputs.version, '4.0') - uses: actions/checkout@v5 - with: - ref: '4.0' - - - name: Release 4.1 - if: startsWith(inputs.version, '4.1') - uses: actions/checkout@v5 - with: - ref: 4.1 - - - name: Release 4.2 - if: startsWith(inputs.version, '4.2') - uses: actions/checkout@v5 - with: - ref: master - - # Get the new version to use - - name: 'Get next minor version' - id: semvers - uses: "WyriHaximus/github-action-next-semvers@v1" - with: - version: ${{ inputs.version }} - strict: true - - - run: ./update_version.sh - shell: bash - env: - NEW_VERSION: ${{ steps.semvers.outputs.patch }} 
- # Ensure we use the PR action to do the work - DISABLE_COMMIT: 'yes' - - - name: Raise FB PR to update version - id: cpr - uses: peter-evans/create-pull-request@v7 - with: - commit-message: 'release: update to ${{ steps.semvers.outputs.patch }}' - signoff: true - delete-branch: true - title: 'release: update to ${{ steps.semvers.outputs.patch }}' - labels: ci,automerge - body: | - Update next release to ${{ steps.semvers.outputs.patch }} version. - - Created by ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} - - Auto-generated by create-pull-request: https://github.com/peter-evans/create-pull-request - draft: false - - - name: Check outputs - if: ${{ steps.cpr.outputs.pull-request-number }} - run: | - echo "Pull Request Number - ${{ steps.cpr.outputs.pull-request-number }}" - echo "Pull Request URL - ${{ steps.cpr.outputs.pull-request-url }}" - shell: bash diff --git a/.github/workflows/staging-test.yaml b/.github/workflows/staging-test.yaml deleted file mode 100644 index e7a2d3caa1c..00000000000 --- a/.github/workflows/staging-test.yaml +++ /dev/null @@ -1,57 +0,0 @@ ---- -name: Test staging -# The intention is that this workflow is triggered either manually or -# after the build has completed. -on: - workflow_run: - workflows: ['Deploy to staging'] - types: - - completed - workflow_dispatch: - -concurrency: integration-test - -jobs: - staging-test-images: - name: Container images staging tests - # Workflow run always triggers on completion regardless of status. - # This condition prevents us from running if the build fails.
- if: github.event_name == 'workflow_dispatch' || github.event.workflow_run.conclusion == 'success' - uses: ./.github/workflows/call-test-images.yaml - with: - registry: ghcr.io - username: ${{ github.actor }} - image: ${{ github.repository }}/staging - image-tag: latest - environment: staging - secrets: - token: ${{ secrets.GITHUB_TOKEN }} - cosign_key: ${{ secrets.COSIGN_PUBLIC_KEY }} - - # Called workflows cannot be nested - staging-test-images-integration: - name: Run integration tests on GCP - # Wait for other tests to succeed - needs: staging-test-images - uses: ./.github/workflows/call-run-integration-test.yaml - with: - image_name: ghcr.io/${{ github.repository }}/staging - image_tag: latest - secrets: - opensearch_aws_access_id: ${{ secrets.OPENSEARCH_AWS_ACCESS_ID }} - opensearch_aws_secret_key: ${{ secrets.OPENSEARCH_AWS_SECRET_KEY }} - opensearch_admin_password: ${{ secrets.OPENSEARCH_ADMIN_PASSWORD }} - terraform_api_token: ${{ secrets.TF_API_TOKEN }} - gcp-service-account-key: ${{ secrets.GCP_SA_KEY }} - - staging-test-packages: - name: Binary packages staging test - # Workflow run always triggers on completion regardless of status. - # This condition prevents us from running if the build fails.
- if: github.event_name == 'workflow_dispatch' || github.event.workflow_run.conclusion == 'success' - uses: ./.github/workflows/call-test-packages.yaml - with: - environment: staging - secrets: - bucket: ${{ secrets.AWS_S3_BUCKET_STAGING }} - token: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/unit-tests.yaml b/.github/workflows/unit-tests.yaml deleted file mode 100644 index 1300b08c586..00000000000 --- a/.github/workflows/unit-tests.yaml +++ /dev/null @@ -1,320 +0,0 @@ -name: Run unit tests -on: - push: - branches: - - master - - 3.2 - - 3.1 - - 3.0 - - 2.2 - - 2.1 - - 2.0 - - 1.9 - - 1.8 - pull_request: - paths-ignore: - - '.github/**' - - 'dockerfiles/**' - - 'docker_compose/**' - - 'packaging/**' - - '.gitignore' - - 'appveyor.yml' - - 'examples/**' - branches: - - master - - 4.1 - - 4.0 - - 3.2 - - 3.1 - - 3.0 - - 2.2 - - 2.1 - - 2.0 - - 1.9 - - 1.8 - types: [opened, reopened, synchronize] - workflow_dispatch: - -jobs: - run-ubuntu-unit-tests: - runs-on: ubuntu-22.04 - timeout-minutes: 60 - strategy: - fail-fast: false - matrix: - flb_option: - - "-DFLB_JEMALLOC=On" - - "-DFLB_JEMALLOC=Off" - - "-DFLB_SMALL=On" - - "-DSANITIZE_ADDRESS=On" - - "-DSANITIZE_UNDEFINED=On" - - "-DFLB_COVERAGE=On" - - "-DFLB_SANITIZE_MEMORY=On" - - "-DFLB_SANITIZE_THREAD=On" - - "-DFLB_SIMD=On" - - "-DFLB_SIMD=Off" - - "-DFLB_ARROW=On" - - "-DFLB_COMPILER_STRICT_POINTER_TYPES=On" - cmake_version: - - "3.31.6" - compiler: - - gcc: - cc: gcc - cxx: g++ - - clang: - cc: clang - cxx: clang++ - exclude: - - flb_option: "-DFLB_COVERAGE=On" - compiler: - cc: clang - cxx: clang++ - - flb_option: "-DFLB_ARROW=On" - compiler: - cc: clang - cxx: clang++ - - flb_option: "-DFLB_COMPILER_STRICT_POINTER_TYPES=On" - compiler: - cc: clang - cxx: clang++ - permissions: - contents: read - steps: - - name: Setup environment - run: | - sudo apt-get update - sudo apt-get install -y gcc-9 g++-9 clang-12 libsystemd-dev gcovr libyaml-dev libbpf-dev linux-tools-common - sudo ln -s 
/usr/bin/llvm-symbolizer-12 /usr/bin/llvm-symbolizer || true - - - name: Install cmake - uses: jwlawson/actions-setup-cmake@v2 - with: - cmake-version: "${{ matrix.cmake_version }}" - - - uses: actions/checkout@v5 - - - uses: actions/checkout@v5 - with: - repository: calyptia/fluent-bit-ci - path: ci - - name: Setup Apache Arrow libraries for parquet (-DFLB_ARROW=On only) - if: matrix.flb_option == '-DFLB_ARROW=On' - run: | - sudo apt-get update - sudo apt-get install -y -V ca-certificates lsb-release wget - wget https://packages.apache.org/artifactory/arrow/$(lsb_release --id --short | tr 'A-Z' 'a-z')/apache-arrow-apt-source-latest-$(lsb_release --codename --short).deb - sudo apt-get install -y -V ./apache-arrow-apt-source-latest-$(lsb_release --codename --short).deb - sudo apt-get update - sudo apt-get install -y -V libarrow-glib-dev libparquet-glib-dev - - - name: ${{ matrix.compiler.cc }} & ${{ matrix.compiler.cxx }} - ${{ matrix.flb_option }} - run: | - echo "CC = $CC, CXX = $CXX, FLB_OPT = $FLB_OPT" - sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 - sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 90 - sudo update-alternatives --install /usr/bin/clang clang /usr/bin/clang-12 90 - sudo usermod -a -G systemd-journal $(id -un) - sudo -E su -p $(id -un) -c "PATH=$PATH ci/scripts/run-unit-tests.sh" - env: - CC: ${{ matrix.compiler.cc }} - CXX: ${{ matrix.compiler.cxx }} - FLB_OPT: ${{ matrix.flb_option }} - CALYPTIA_FLEET_TOKEN: ${{ secrets.CALYPTIA_FLEET_TOKEN }} - - run-macos-unit-tests: - # We chain this after the Linux job as there are associated costs and restrictions - needs: - - run-ubuntu-unit-tests - runs-on: macos-latest - timeout-minutes: 60 - strategy: - fail-fast: false - matrix: - flb_option: - - "-DFLB_JEMALLOC=Off" - - "-DFLB_SANITIZE_MEMORY=On" - - "-DFLB_SANITIZE_THREAD=On" - cmake_version: - - "3.31.6" - permissions: - contents: read - steps: - - name: Install cmake - uses: jwlawson/actions-setup-cmake@v2 - 
with: - cmake-version: "${{ matrix.cmake_version }}" - - - uses: actions/checkout@v5 - - uses: actions/checkout@v5 - with: - repository: calyptia/fluent-bit-ci - path: ci - - - name: ${{ matrix.flb_option }} - run: | - echo "CC = $CC, CXX = $CXX, FLB_OPT = $FLB_OPT" - brew update - brew install bison flex openssl || true - ci/scripts/run-unit-tests.sh - env: - CC: gcc - CXX: g++ - FLB_OPT: ${{ matrix.flb_option }} - - run-aarch64-unit-tests: - runs-on: ${{(github.repository == 'fluent/fluent-bit') && 'ubuntu-24.04-arm' || 'ubuntu-latest' }} - permissions: - contents: read - needs: - - run-ubuntu-unit-tests - timeout-minutes: 10 - strategy: - fail-fast: false - matrix: - config: - - name: "Aarch64 testing" - flb_option: "-DFLB_WITHOUT_flb-it-network=1 -DFLB_WITHOUT_flb-it-fstore=1" - omit_option: "" - global_option: "-DFLB_BACKTRACE=Off -DFLB_SHARED_LIB=Off -DFLB_DEBUG=On -DFLB_ALL=On -DFLB_EXAMPLES=Off" - unit_test_option: "-DFLB_TESTS_INTERNAL=On" - compiler_cc: gcc - compiler_cxx: g++ - cmake_version: "3.31.6" - cmake_home: "/opt/cmake" - - steps: - - name: Checkout Fluent Bit code - uses: actions/checkout@v5 - - - name: Setup environment - run: | - sudo apt-get update - sudo apt-get install -y gcc-14 g++-14 clang-14 flex bison libsystemd-dev gcovr libyaml-dev libbpf-dev linux-tools-common curl tar gzip - sudo ln -s /usr/bin/llvm-symbolizer-14 /usr/bin/llvm-symbolizer || true - sudo mkdir -p "${CMAKE_HOME}" - cmake_url="https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}-linux-$(uname -m).tar.gz" - cmake_dist="$(mktemp --suffix ".tar.gz")" - echo "Downloading CMake ${CMAKE_VERSION}: ${cmake_url} -> ${cmake_dist}" - curl -jksSL -o "${cmake_dist}" "${cmake_url}" - echo "Extracting CMake ${CMAKE_VERSION}: ${cmake_dist} -> ${CMAKE_HOME}" - sudo tar -xzf "${cmake_dist}" -C "${CMAKE_HOME}" --strip-components 1 - rm "${cmake_dist}" - env: - CMAKE_HOME: ${{ matrix.config.cmake_home }} - CMAKE_VERSION: ${{ 
matrix.config.cmake_version }} - - - name: Build and test with arm runners - run: | - sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-14 90 - sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-14 90 - sudo update-alternatives --install /usr/bin/clang clang /usr/bin/clang-14 90 - - export nparallel=$(( $(getconf _NPROCESSORS_ONLN) > 8 ? 8 : $(getconf _NPROCESSORS_ONLN) )) - export FLB_OPTION="${{ matrix.config.flb_option }}" - export FLB_OMIT_OPTION="${{ matrix.config.omit_option }}" - export GLOBAL_OPTION="${{ matrix.config.global_option }}" - export FLB_UNIT_TEST_OPTION="${{ matrix.config.unit_test_option }}" - export FLB_OPT="${FLB_OPTION} ${GLOBAL_OPTION} ${FLB_UNIT_TEST_OPTION} ${FLB_OMIT_OPTION}" - - echo "CC = ${{ matrix.config.compiler_cc }}, CXX = ${{ matrix.config.compiler_cxx }}, FLB_OPT = $FLB_OPT" - - if [[ -n "${CMAKE_HOME}" ]]; then - export PATH="${CMAKE_HOME}/bin:${PATH}" - fi - - cmake ${FLB_OPT} ../ - make -j $nparallel - ctest -j $nparallel --build-run-dir . 
--output-on-failure - working-directory: build - env: - CC: ${{ matrix.config.compiler_cc }} - CXX: ${{ matrix.config.compiler_cxx }} - CALYPTIA_FLEET_TOKEN: ${{ secrets.CALYPTIA_FLEET_TOKEN }} - CMAKE_HOME: ${{ matrix.config.cmake_home }} - - run-qemu-ubuntu-unit-tests: - # We chain this after Linux one as there are CPU time costs for QEMU emulation - needs: - - run-ubuntu-unit-tests - runs-on: ubuntu-22.04 - timeout-minutes: 60 - strategy: - fail-fast: false - matrix: - arch: - - s390x - - riscv64 - steps: - - name: Checkout Fluent Bit code - uses: actions/checkout@v5 - - - name: Prepare and build with QEMU ${{ matrix.arch }} - uses: uraimo/run-on-arch-action@v3 - id: build-and-test-on-qemu - with: - arch: ${{ matrix.arch }} - distro: ubuntu22.04 - shell: /bin/bash - dockerRunArgs: | - --volume "/var/lib/dbus/machine-id:/var/lib/dbus/machine-id" - --volume "/etc/machine-id:/etc/machine-id" - install: | - apt-get update - apt-get install -y gcc-12 g++-12 libyaml-dev flex bison libssl-dev libbpf-dev linux-tools-common - arch="$(dpkg --print-architecture)" - case "$arch" in - riscv64) - apt-get install -y gcc-12 g++-12 libyaml-dev flex bison libssl-dev \ - libbpf-dev linux-tools-common lld-15 - update-alternatives --install /usr/bin/ld.lld ld.lld /usr/bin/ld.lld-15 50 - ;; - *) - ;; - esac - apt-get satisfy -y cmake "cmake (<< 4.0)" - - update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 90 - update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-12 90 - run: | - cd build - export nparallel=$(( $(getconf _NPROCESSORS_ONLN) > 8 ? 
8 : $(getconf _NPROCESSORS_ONLN) )) - export CMAKE_LINKER_OPTION="" - arch="$(dpkg --print-architecture)" - case "$arch" in - riscv64) - export CMAKE_LINKER_OPTION='-DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=lld" -DCMAKE_SHARED_LINKER_FLAGS="-fuse-ld=lld"' - ;; - *) - ;; - esac - export FLB_OPTION="-DFLB_WITHOUT_flb-it-network=1 -DFLB_WITHOUT_flb-it-fstore=1" - export FLB_OMIT_OPTION="" - export GLOBAL_OPTION="-DFLB_BACKTRACE=Off -DFLB_SHARED_LIB=Off -DFLB_DEBUG=On -DFLB_ALL=On -DFLB_EXAMPLES=Off" - export FLB_UNIT_TEST_OPTION="-DFLB_TESTS_INTERNAL=On" - export FLB_OPT="${FLB_OPTION} ${GLOBAL_OPTION} ${FLB_UNIT_TEST_OPTION} ${FLB_OMIT_OPTION}" - export CC=gcc - export CXX=g++ - - echo "CC = $CC, CXX = $CXX, FLB_OPT = $FLB_OPT" - - cmake ${CMAKE_LINKER_OPTION} ${FLB_OPT} ../ - make -j $nparallel - ctest -j $nparallel --build-run-dir . --output-on-failure - - # The required status check looks at this job, so do not remove it - run-all-unit-tests: - if: always() - runs-on: ubuntu-latest - name: Unit tests (matrix) - permissions: - contents: none - needs: - - run-macos-unit-tests - - run-ubuntu-unit-tests - - run-aarch64-unit-tests - - run-qemu-ubuntu-unit-tests - steps: - - name: Check build matrix status - # Ignore macOS failures - if: ${{ needs.run-ubuntu-unit-tests.result != 'success' }} - run: exit 1 diff --git a/.github/workflows/update-dockerhub.yaml b/.github/workflows/update-dockerhub.yaml deleted file mode 100644 index cd027bba44b..00000000000 --- a/.github/workflows/update-dockerhub.yaml +++ /dev/null @@ -1,23 +0,0 @@ ---- -name: Update Dockerhub description - -on: - workflow_dispatch: - -jobs: - update-dockerhub: - name: Update Dockerhub description - permissions: - contents: read - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v5 - - - name: Docker Hub Description - uses: peter-evans/dockerhub-description@v5 - with: - username: ${{ secrets.DOCKERHUB_USERNAME }} - password: ${{ secrets.DOCKERHUB_TOKEN }} - repository: ${{ github.repository }} - readme-filepath: 
./dockerfiles/dockerhub-description.md - short-description: 'Fluent Bit, lightweight logs and metrics collector and forwarder' diff --git a/THIRD_PARTY_LICENSES.txt b/THIRD_PARTY_LICENSES.txt new file mode 100644 index 00000000000..b2d3992dc9e --- /dev/null +++ b/THIRD_PARTY_LICENSES.txt @@ -0,0 +1,1623 @@ +=== Public License Template === + +------------------------------ Top-Level License ------------------------------- +SPDX:Apache-2.0 + +---------------------------------- Copyright ----------------------------------- +Copyright (C) 1989, 1991 Free Software Foundation, Inc., +Copyright (C) 1993-2013 Yukihiro Matsumoto. All rights reserved. +Copyright (C) 1994-2017 Free Software Foundation, Inc. +Copyright (C) 1994-2021 Free Software Foundation, Inc. +Copyright (C) 2002-present Jason Evans . +Copyright (C) 2005-2023 Mike Pall. +Copyright (C) 2005-2023 Mike Pall. See Copyright Notice in luajit.h +Copyright (C) 2006 Toni Ronkko +Copyright (C) 2007 Free Software Foundation, Inc. +Copyright (C) 2007-2012 Mozilla Foundation. All rights reserved. +Copyright (C) 2009-2013 by Daniel Stenberg +Copyright (C) 2009-present Facebook, Inc. All rights reserved. +Copyright (C) 2012-2016 Free Software Foundation, Inc. +Copyright (C) 2012-2021 Free Software Foundation, Inc. +Copyright (C) 2013 Mark Adler +Copyright (C) 2015-2025 The Fluent Bit Authors +Copyright (C) 2019 Intel Corporation. All rights reserved. +Copyright (C) 2019 Intel Corporation. All rights reserved. +Copyright (C) 2019 Yaoyuan . +Copyright (C) 2019-21 Intel Corporation and others. All rights reserved. +Copyright (C) 2020 TU Bergakademie Freiberg Karl Fessel +Copyright (C) 2021 Intel Corporation and others. All rights reserved. +Copyright (C) 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved. +Copyright (C) 2022 Intel Corporation. All rights reserved. +Copyright (C) 2023 Dylibso. All rights reserved. +Copyright (C) 2023 Midokura Japan KK. All rights reserved. +Copyright (C) 2024 Amazon.com, Inc. 
or its affiliates. All Rights Reserved. +Copyright (C) 2025 Midokura Japan KK. All rights reserved. +Copyright (C) +Copyright (C) Daniel Stenberg +Copyright (C) Gisle Vanem +Copyright (C) Guenter Knauf +Copyright (C) The c-ares project and its contributors +Copyright (C) Treasure Data +Copyright (C) the Massachusetts Institute of Technology. +Copyright (C) year name of author +Copyright (c) (Year), (Name of copyright holder) +Copyright (c) 1991, 1993 +Copyright (c) 1993 The Regents of the University of California. +Copyright (c) 1996, David Mazieres +Copyright (c) 1998 Massachusetts Institute of Technology +Copyright (c) 1998 Todd C. Miller +Copyright (c) 1999-2011 Unicode, Inc. All Rights reserved. +Copyright (c) 2000 Dug Song +Copyright (c) 2000 The NetBSD Foundation, Inc. +Copyright (c) 2000-2007 Niels Provos +Copyright (c) 2002 Christopher Clark +Copyright (c) 2002 Todd C. Miller +Copyright (c) 2002-2018 K.Kosako +Copyright (c) 2003 Michael A. Davis +Copyright (c) 2006 Maxim Yegorushkin +Copyright (c) 2006-2008 Alexander Chemeris +Copyright (c) 2006-2012, Thomas Pircher +Copyright (c) 2007 - 2023 Daniel Stenberg with many contributors, see AUTHORS +Copyright (c) 2007 Sun Microsystems +Copyright (c) 2007-2012 Niels Provos and Nick Mathewson +Copyright (c) 2008, Damien Miller +Copyright (c) 2008-2016, Dave Benson and the protobuf-c authors. +Copyright (c) 2008-2020 The AsmJit Authors +Copyright (c) 2009-2017 Dave Gamble and cJSON contributors +Copyright (c) 2009-2020 Petri Lehtinen +Copyright (c) 2010 BitTorrent, Inc. +Copyright (c) 2010 Serge A. Zaitsev +Copyright (c) 2010 by the contributors (see AUTHORS file). +Copyright (c) 2010- mruby developers +Copyright (c) 2011 Petteri Aimonen +Copyright (c) 2011-2019 K.Takata +Copyright (c) 2011-2020, Yann Collet +Copyright (c) 2012 Internet Initiative Japan Inc. 
+Copyright (c) 2012 Marcus Geelnard +Copyright (c) 2012 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2013 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2013, 2014, 2015 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2014, 2015, 2016 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2014, 2015, 2016 nghttp2 contributors +Copyright (c) 2012-2021 Yann Collet +Copyright (c) 2012-2022, Magnus Edenhill +Copyright (c) 2012-2022, [Magnus Edenhill](http://www.edenhill.se/). +Copyright (c) 2013 - 2024 by MaxMind, Inc. +Copyright (c) 2013 Internet Initiative Japan Inc. +Copyright (c) 2014 Coda Hale +Copyright (c) 2014 Tatsuhiro Tsujikawa +Copyright (c) 2014-2021 Florian Bernd +Copyright (c) 2014-2021 Joel Höner +Copyright (c) 2015 Nuxi, https://nuxi.nl/ +Copyright (c) 2015 Tatsuhiro Tsujikawa +Copyright (c) 2015-2021 Nicholas Fraser and the MPack authors +Copyright (c) 2015-present libuv project contributors. +Copyright (c) 2016 Peter Wu +Copyright (c) 2017 mruby developers +Copyright (c) 2018-present Dima Krasner +Copyright (c) 2019 Colin Ihrig and Contributors +Copyright (c) 2019 Nigel Stewart +Copyright (c) 2020 YaoYuan +Copyright (c) 2020-2021 Alibaba Cloud +Copyright (c) 2021 Tatsuhiro Tsujikawa +Copyright (c) 2022 Intel Corporation +Copyright (c) 2022 Tilen MAJERLE +Copyright (c) Meta Platforms, Inc. and affiliates. +Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved. +Copyright (c) Yann Collet, Meta Platforms, Inc. +Copyright (c) yui-knk 2016 +Copyright 2000-2007 Niels Provos +Copyright 2000-2011 Insight Software Consortium +Copyright 2000-2013 Kitware, Inc. +Copyright 2001-2014 Monkey Software LLC +Copyright 2007-2012 Niels Provos and Nick Mathewson +Copyright 2010 The Apache Software Foundation +Copyright 2010-2014 Rich Geldreich and Tenacious Software LLC +Copyright 2011 Intel Corporation All Rights Reserved. +Copyright 2013-2014 RAD Game Tools and Valve Software +Copyright 2013-2025 MaxMind, Inc. 
+Copyright 2020 The Apache Software Foundation +Copyright 2020 The simdutf authors +Copyright 2021 The simdutf authors +Copyright © 2021 + +-------------------------- Fourth Party Dependencies --------------------------- + +----------------------------------- Licenses ----------------------------------- +- Apache-2.0 +- BSD-3-Clause--modified-by-Google +- ISC +- MIT +- Unicode-DFS-2016 +- Unlicense + +--------------------------------- (separator) ---------------------------------- + +== Dependency +github.com/bytecodealliance/wasm-micro-runtime/language-bindings/go + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (C) 1989, 1991 Free Software Foundation, Inc., +Copyright (C) 1993-2013 Yukihiro Matsumoto. All rights reserved. +Copyright (C) 1994-2017 Free Software Foundation, Inc. +Copyright (C) 1994-2021 Free Software Foundation, Inc. +Copyright (C) 2002-present Jason Evans . +Copyright (C) 2005-2023 Mike Pall. +Copyright (C) 2005-2023 Mike Pall. See Copyright Notice in luajit.h +Copyright (C) 2006 Toni Ronkko +Copyright (C) 2007 Free Software Foundation, Inc. +Copyright (C) 2007-2012 Mozilla Foundation. All rights reserved. +Copyright (C) 2009-2013 by Daniel Stenberg +Copyright (C) 2009-present Facebook, Inc. All rights reserved. +Copyright (C) 2012-2016 Free Software Foundation, Inc. +Copyright (C) 2012-2021 Free Software Foundation, Inc. +Copyright (C) 2013 Mark Adler +Copyright (C) 2015-2025 The Fluent Bit Authors +Copyright (C) 2019 Intel Corporation. All rights reserved. +Copyright (C) 2019 Intel Corporation. All rights reserved. +Copyright (C) 2019 Yaoyuan . +Copyright (C) 2019-21 Intel Corporation and others. All rights reserved. +Copyright (C) 2020 TU Bergakademie Freiberg Karl Fessel +Copyright (C) 2021 Intel Corporation and others. All rights reserved. +Copyright (C) 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved. +Copyright (C) 2022 Intel Corporation. All rights reserved. +Copyright (C) 2023 Dylibso. All rights reserved. 
+Copyright (C) 2023 Midokura Japan KK. All rights reserved. +Copyright (C) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved. +Copyright (C) 2025 Midokura Japan KK. All rights reserved. +Copyright (C) +Copyright (C) Daniel Stenberg +Copyright (C) Gisle Vanem +Copyright (C) Guenter Knauf +Copyright (C) The c-ares project and its contributors +Copyright (C) Treasure Data +Copyright (C) the Massachusetts Institute of Technology. +Copyright (C) year name of author +Copyright (c) (Year), (Name of copyright holder) +Copyright (c) 1991, 1993 +Copyright (c) 1993 The Regents of the University of California. +Copyright (c) 1996, David Mazieres +Copyright (c) 1998 Massachusetts Institute of Technology +Copyright (c) 1998 Todd C. Miller +Copyright (c) 1999-2011 Unicode, Inc. All Rights reserved. +Copyright (c) 2000 Dug Song +Copyright (c) 2000 The NetBSD Foundation, Inc. +Copyright (c) 2000-2007 Niels Provos +Copyright (c) 2002 Christopher Clark +Copyright (c) 2002 Todd C. Miller +Copyright (c) 2002-2018 K.Kosako +Copyright (c) 2003 Michael A. Davis +Copyright (c) 2006 Maxim Yegorushkin +Copyright (c) 2006-2008 Alexander Chemeris +Copyright (c) 2006-2012, Thomas Pircher +Copyright (c) 2007 - 2023 Daniel Stenberg with many contributors, see AUTHORS +Copyright (c) 2007 Sun Microsystems +Copyright (c) 2007-2012 Niels Provos and Nick Mathewson +Copyright (c) 2008, Damien Miller +Copyright (c) 2008-2016, Dave Benson and the protobuf-c authors. +Copyright (c) 2008-2020 The AsmJit Authors +Copyright (c) 2009-2017 Dave Gamble and cJSON contributors +Copyright (c) 2009-2020 Petri Lehtinen +Copyright (c) 2010 BitTorrent, Inc. +Copyright (c) 2010 Serge A. Zaitsev +Copyright (c) 2010 by the contributors (see AUTHORS file). +Copyright (c) 2010- mruby developers +Copyright (c) 2011 Petteri Aimonen +Copyright (c) 2011-2019 K.Takata +Copyright (c) 2011-2020, Yann Collet +Copyright (c) 2012 Internet Initiative Japan Inc. 
+Copyright (c) 2012 Marcus Geelnard +Copyright (c) 2012 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2013 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2013, 2014, 2015 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2014, 2015, 2016 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2014, 2015, 2016 nghttp2 contributors +Copyright (c) 2012-2021 Yann Collet +Copyright (c) 2012-2022, Magnus Edenhill +Copyright (c) 2012-2022, [Magnus Edenhill](http://www.edenhill.se/). +Copyright (c) 2013 - 2024 by MaxMind, Inc. +Copyright (c) 2013 Internet Initiative Japan Inc. +Copyright (c) 2014 Coda Hale +Copyright (c) 2014 Tatsuhiro Tsujikawa +Copyright (c) 2014-2021 Florian Bernd +Copyright (c) 2014-2021 Joel Höner +Copyright (c) 2015 Nuxi, https://nuxi.nl/ +Copyright (c) 2015 Tatsuhiro Tsujikawa +Copyright (c) 2015-2021 Nicholas Fraser and the MPack authors +Copyright (c) 2015-present libuv project contributors. +Copyright (c) 2016 Peter Wu +Copyright (c) 2017 mruby developers +Copyright (c) 2018-present Dima Krasner +Copyright (c) 2019 Colin Ihrig and Contributors +Copyright (c) 2019 Nigel Stewart +Copyright (c) 2020 YaoYuan +Copyright (c) 2020-2021 Alibaba Cloud +Copyright (c) 2021 Tatsuhiro Tsujikawa +Copyright (c) 2022 Intel Corporation +Copyright (c) 2022 Tilen MAJERLE +Copyright (c) Meta Platforms, Inc. and affiliates. +Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved. +Copyright (c) Yann Collet, Meta Platforms, Inc. +Copyright (c) yui-knk 2016 +Copyright 2000-2007 Niels Provos +Copyright 2000-2011 Insight Software Consortium +Copyright 2000-2013 Kitware, Inc. +Copyright 2001-2014 Monkey Software LLC +Copyright 2007-2012 Niels Provos and Nick Mathewson +Copyright 2010 The Apache Software Foundation +Copyright 2010-2014 Rich Geldreich and Tenacious Software LLC +Copyright 2011 Intel Corporation All Rights Reserved. +Copyright 2013-2014 RAD Game Tools and Valve Software +Copyright 2013-2025 MaxMind, Inc. 
+Copyright 2020 The Apache Software Foundation +Copyright 2020 The simdutf authors +Copyright 2021 The simdutf authors +Copyright © 2021 + +--------------------------------- (separator) ---------------------------------- + +== Dependency +github.com/maxmind/MaxMind-DB + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (C) 1989, 1991 Free Software Foundation, Inc., +Copyright (C) 1993-2013 Yukihiro Matsumoto. All rights reserved. +Copyright (C) 1994-2017 Free Software Foundation, Inc. +Copyright (C) 1994-2021 Free Software Foundation, Inc. +Copyright (C) 2002-present Jason Evans . +Copyright (C) 2005-2023 Mike Pall. +Copyright (C) 2005-2023 Mike Pall. See Copyright Notice in luajit.h +Copyright (C) 2006 Toni Ronkko +Copyright (C) 2007 Free Software Foundation, Inc. +Copyright (C) 2007-2012 Mozilla Foundation. All rights reserved. +Copyright (C) 2009-2013 by Daniel Stenberg +Copyright (C) 2009-present Facebook, Inc. All rights reserved. +Copyright (C) 2012-2016 Free Software Foundation, Inc. +Copyright (C) 2012-2021 Free Software Foundation, Inc. +Copyright (C) 2013 Mark Adler +Copyright (C) 2015-2025 The Fluent Bit Authors +Copyright (C) 2019 Intel Corporation. All rights reserved. +Copyright (C) 2019 Intel Corporation. All rights reserved. +Copyright (C) 2019 Yaoyuan . +Copyright (C) 2019-21 Intel Corporation and others. All rights reserved. +Copyright (C) 2020 TU Bergakademie Freiberg Karl Fessel +Copyright (C) 2021 Intel Corporation and others. All rights reserved. +Copyright (C) 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved. +Copyright (C) 2022 Intel Corporation. All rights reserved. +Copyright (C) 2023 Dylibso. All rights reserved. +Copyright (C) 2023 Midokura Japan KK. All rights reserved. +Copyright (C) 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved. +Copyright (C) 2025 Midokura Japan KK. All rights reserved. 
+Copyright (C) +Copyright (C) Daniel Stenberg +Copyright (C) Gisle Vanem +Copyright (C) Guenter Knauf +Copyright (C) The c-ares project and its contributors +Copyright (C) Treasure Data +Copyright (C) the Massachusetts Institute of Technology. +Copyright (C) year name of author +Copyright (c) (Year), (Name of copyright holder) +Copyright (c) 1991, 1993 +Copyright (c) 1993 The Regents of the University of California. +Copyright (c) 1996, David Mazieres +Copyright (c) 1998 Massachusetts Institute of Technology +Copyright (c) 1998 Todd C. Miller +Copyright (c) 1999-2011 Unicode, Inc. All Rights reserved. +Copyright (c) 2000 Dug Song +Copyright (c) 2000 The NetBSD Foundation, Inc. +Copyright (c) 2000-2007 Niels Provos +Copyright (c) 2002 Christopher Clark +Copyright (c) 2002 Todd C. Miller +Copyright (c) 2002-2018 K.Kosako +Copyright (c) 2003 Michael A. Davis +Copyright (c) 2006 Maxim Yegorushkin +Copyright (c) 2006-2008 Alexander Chemeris +Copyright (c) 2006-2012, Thomas Pircher +Copyright (c) 2007 - 2023 Daniel Stenberg with many contributors, see AUTHORS +Copyright (c) 2007 Sun Microsystems +Copyright (c) 2007-2012 Niels Provos and Nick Mathewson +Copyright (c) 2008, Damien Miller +Copyright (c) 2008-2016, Dave Benson and the protobuf-c authors. +Copyright (c) 2008-2020 The AsmJit Authors +Copyright (c) 2009-2017 Dave Gamble and cJSON contributors +Copyright (c) 2009-2020 Petri Lehtinen +Copyright (c) 2010 BitTorrent, Inc. +Copyright (c) 2010 Serge A. Zaitsev +Copyright (c) 2010 by the contributors (see AUTHORS file). +Copyright (c) 2010- mruby developers +Copyright (c) 2011 Petteri Aimonen +Copyright (c) 2011-2019 K.Takata +Copyright (c) 2011-2020, Yann Collet +Copyright (c) 2012 Internet Initiative Japan Inc. 
+Copyright (c) 2012 Marcus Geelnard +Copyright (c) 2012 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2013 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2013, 2014, 2015 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2014, 2015, 2016 Tatsuhiro Tsujikawa +Copyright (c) 2012, 2014, 2015, 2016 nghttp2 contributors +Copyright (c) 2012-2021 Yann Collet +Copyright (c) 2012-2022, Magnus Edenhill +Copyright (c) 2012-2022, [Magnus Edenhill](http://www.edenhill.se/). +Copyright (c) 2013 - 2024 by MaxMind, Inc. +Copyright (c) 2013 Internet Initiative Japan Inc. +Copyright (c) 2014 Coda Hale +Copyright (c) 2014 Tatsuhiro Tsujikawa +Copyright (c) 2014-2021 Florian Bernd +Copyright (c) 2014-2021 Joel Höner +Copyright (c) 2015 Nuxi, https://nuxi.nl/ +Copyright (c) 2015 Tatsuhiro Tsujikawa +Copyright (c) 2015-2021 Nicholas Fraser and the MPack authors +Copyright (c) 2015-present libuv project contributors. +Copyright (c) 2016 Peter Wu +Copyright (c) 2017 mruby developers +Copyright (c) 2018-present Dima Krasner +Copyright (c) 2019 Colin Ihrig and Contributors +Copyright (c) 2019 Nigel Stewart +Copyright (c) 2020 YaoYuan +Copyright (c) 2020-2021 Alibaba Cloud +Copyright (c) 2021 Tatsuhiro Tsujikawa +Copyright (c) 2022 Intel Corporation +Copyright (c) 2022 Tilen MAJERLE +Copyright (c) Meta Platforms, Inc. and affiliates. +Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved. +Copyright (c) Yann Collet, Meta Platforms, Inc. +Copyright (c) yui-knk 2016 +Copyright 2000-2007 Niels Provos +Copyright 2000-2011 Insight Software Consortium +Copyright 2000-2013 Kitware, Inc. +Copyright 2001-2014 Monkey Software LLC +Copyright 2007-2012 Niels Provos and Nick Mathewson +Copyright 2010 The Apache Software Foundation +Copyright 2010-2014 Rich Geldreich and Tenacious Software LLC +Copyright 2011 Intel Corporation All Rights Reserved. +Copyright 2013-2014 RAD Game Tools and Valve Software +Copyright 2013-2025 MaxMind, Inc. 
+Copyright 2020 The Apache Software Foundation +Copyright 2020 The simdutf authors +Copyright 2021 The simdutf authors +Copyright © 2021 + +--------------------------------- (separator) ---------------------------------- + +== Dependency +github.com/maxmind/mmdbwriter + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2020 by MaxMind, Inc. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +github.com/oschwald/maxminddb-golang + +== License Type +=== ISC-7ff10cf9 +ISC License + +Copyright (c) 2015, Gregory J. Oschwald + +Permission to use, copy, modify, and/or distribute this software for any +purpose with or without fee is hereby granted, provided that the above +copyright notice and this permission notice appear in all copies. + +THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH +REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY +AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, +INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM +LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR +OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR +PERFORMANCE OF THIS SOFTWARE. + + + +== Copyright +Copyright (c) 2015, Gregory J. Oschwald +Copyright 2011 Evan Shaw. All rights reserved. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +github.com/valyala/fastjson + +== License Type +SPDX:MIT + +== Copyright +Copyright (c) 2018 Aliaksandr Valialkin + +--------------------------------- (separator) ---------------------------------- + +== Dependency +go4.org/netipx + +== License Type +=== BSD-3-Clause--modified-by-Google-628df198 +Copyright (c) 2020 The Inet.af AUTHORS. All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Tailscale Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + + +== Copyright +Copyright (c) 2020 The Inet.af AUTHORS. All rights reserved. +Copyright 2020 The Inet.Af AUTHORS. All rights reserved. +Copyright 2021 The Inet.Af AUTHORS. All rights reserved. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +golang.org/x/sys + +== License Type +SPDX:BSD-3-Clause--modified-by-Google + +== Copyright +Copyright (c) 2009 The Go Authors. All rights reserved. +Copyright 2009 The Go Authors. All rights reserved. +Copyright 2009,2010 The Go Authors. 
All rights reserved. +Copyright 2010 The Go Authors. All rights reserved. +Copyright 2011 The Go Authors. All rights reserved. +Copyright 2012 The Go Authors. All rights reserved. +Copyright 2013 The Go Authors. All rights reserved. +Copyright 2014 The Go Authors. All rights reserved. +Copyright 2015 The Go Authors. All rights reserved. +Copyright 2016 The Go Authors. All rights reserved. +Copyright 2017 The Go Authors. All right reserved. +Copyright 2017 The Go Authors. All rights reserved. +Copyright 2018 The Go Authors. All rights reserved. +Copyright 2019 The Go Authors. All rights reserved. +Copyright 2020 The Go Authors. All rights reserved. +Copyright 2021 The Go Authors. All rights reserved. +Copyright 2022 The Go Authors. All rights reserved. +Copyright 2023 The Go Authors. All rights reserved. + +== Patents +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. + +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. 
If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. + + +--------------------------------- (separator) ---------------------------------- + +== Dependency +android-tzdata@0.1.1 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) [year] [fullname] + +--------------------------------- (separator) ---------------------------------- + +== Dependency +android_system_properties@0.1.5 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2013 Nicolas Silva +Copyright 2016 Nicolas Silva + +--------------------------------- (separator) ---------------------------------- + +== Dependency +autocfg@1.1.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2018 Josh Stone + +--------------------------------- (separator) ---------------------------------- + +== Dependency +bumpalo@3.14.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2019 Nick Fitzgerald +Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT +Copyright 2012-2017 The Rust Project Developers. See the COPYRIGHT +Copyright 2014 The Rust Project Developers. See the COPYRIGHT +Copyright 2015 The Rust Project Developers. See the COPYRIGHT +Copyright 2018 The Rust Project Developers. 
See the COPYRIGHT + +--------------------------------- (separator) ---------------------------------- + +== Dependency +byteorder@1.4.3 + +== License Type +SPDX:Unlicense + +== Copyright +Copyright (c) 2015 Andrew Gallant + +--------------------------------- (separator) ---------------------------------- + +== Dependency +byteorder@1.5.0 + +== License Type +SPDX:Unlicense + +== Copyright +Copyright (c) 2015 Andrew Gallant + +--------------------------------- (separator) ---------------------------------- + +== Dependency +cc@1.0.83 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 Alex Crichton +Copyright 2015 The Rust Project Developers. See the COPYRIGHT +Copyright © 2015-2017 winapi-rs developers +Copyright © 2017 winapi-rs developers + +--------------------------------- (separator) ---------------------------------- + +== Dependency +cfg-if@1.0.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 Alex Crichton + +--------------------------------- (separator) ---------------------------------- + +== Dependency +chrono@0.4.19 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014, Kang Seonghoon. +Copyright (c) 2014--2017, Kang Seonghoon and +Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT +Copyright 2013-2014 The Rust Project Developers. +copyright (c) 2015, John Nagle. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +chrono@0.4.33 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014, Kang Seonghoon. +Copyright (c) 2014--2017, Kang Seonghoon and +Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT +copyright (c) 2015, John Nagle. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +core-foundation-sys@0.8.6 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2012-2013 Mozilla Foundation +Copyright 2013 The Servo Project Developers. 
See the COPYRIGHT +Copyright 2013-2015 The Servo Project Developers. See the COPYRIGHT +Copyright 2016 The Servo Project Developers. See the COPYRIGHT +Copyright 2019 The Servo Project Developers. See the COPYRIGHT +Copyright 2023 The Servo Project Developers. See the COPYRIGHT + +--------------------------------- (separator) ---------------------------------- + +== Dependency +filter_rust@0.1.0 + +== License Type +SPDX:MIT + +== Copyright +(no copyright notices found) + +--------------------------------- (separator) ---------------------------------- + +== Dependency +filter_rust_clib@0.1.0 + +== License Type +SPDX:MIT + +== Copyright +(no copyright notices found) + +--------------------------------- (separator) ---------------------------------- + +== Dependency +filter_rust_msgpack@0.1.0 + +== License Type +SPDX:MIT + +== Copyright +(no copyright notices found) + +--------------------------------- (separator) ---------------------------------- + +== Dependency +iana-time-zone-haiku@0.1.2 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2020 Andrew D. Straw +Copyright 2020 Andrew Straw + +--------------------------------- (separator) ---------------------------------- + +== Dependency +iana-time-zone@0.1.59 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2020 Andrew D. 
Straw +Copyright 2020 Andrew Straw + +--------------------------------- (separator) ---------------------------------- + +== Dependency +itoa@1.0.10 + +== License Type +SPDX:Apache-2.0 + +== Copyright +David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +itoa@1.0.2 + +== License Type +SPDX:Apache-2.0 + +== Copyright +David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +js-sys@0.3.67 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 Alex Crichton + +--------------------------------- (separator) ---------------------------------- + +== Dependency +libc@0.2.126 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014-2020 The Rust Project Developers + +--------------------------------- (separator) ---------------------------------- + +== Dependency +libc@0.2.152 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014-2020 The Rust Project Developers + +--------------------------------- (separator) ---------------------------------- + +== Dependency +log@0.4.20 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 The Rust Project Developers +Copyright 2014-2015 The Rust Project Developers. See the COPYRIGHT +Copyright 2015 The Rust Project Developers. See the COPYRIGHT + +--------------------------------- (separator) ---------------------------------- + +== Dependency +num-integer@0.1.45 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 The Rust Project Developers +Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT + +--------------------------------- (separator) ---------------------------------- + +== Dependency +num-traits@0.2.15 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 The Rust Project Developers +Copyright 2013-2014 The Rust Project Developers. 
See the COPYRIGHT + +--------------------------------- (separator) ---------------------------------- + +== Dependency +num-traits@0.2.17 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 The Rust Project Developers +Copyright 2013-2014 The Rust Project Developers. See the COPYRIGHT + +--------------------------------- (separator) ---------------------------------- + +== Dependency +once_cell@1.19.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +(no copyright notices found) + +--------------------------------- (separator) ---------------------------------- + +== Dependency +paste@1.0.14 + +== License Type +SPDX:Apache-2.0 + +== Copyright +(no copyright notices found) + +--------------------------------- (separator) ---------------------------------- + +== Dependency +paste@1.0.7 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2018 + +--------------------------------- (separator) ---------------------------------- + +== Dependency +proc-macro2@1.0.42 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 Alex Crichton + +--------------------------------- (separator) ---------------------------------- + +== Dependency +proc-macro2@1.0.78 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright to David Tolnay ,Alex Crichton + +--------------------------------- (separator) ---------------------------------- + +== Dependency +quote@1.0.20 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2016 The Rust Project Developers + +--------------------------------- (separator) ---------------------------------- + +== Dependency +quote@1.0.35 + +== License Type +SPDX:Apache-2.0 + +== Copyright +David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +rmp-serde@1.1.0 + +== License Type +SPDX:MIT + +== Copyright +Copyright (c) 2017 Evgeny Safronov + +--------------------------------- (separator) ---------------------------------- + +== 
Dependency +rmp-serde@1.1.2 + +== License Type +SPDX:MIT + +== Copyright +Copyright (c) 2017 Evgeny Safronov + +--------------------------------- (separator) ---------------------------------- + +== Dependency +rmp@0.8.11 + +== License Type +SPDX:MIT + +== Copyright +Copyright (c) 2017 Evgeny Safronov + +--------------------------------- (separator) ---------------------------------- + +== Dependency +rmp@0.8.12 + +== License Type +SPDX:MIT + +== Copyright +Copyright (c) 2017 Evgeny Safronov + +--------------------------------- (separator) ---------------------------------- + +== Dependency +rmpv@1.0.1 + +== License Type +SPDX:MIT + +== Copyright +Copyright (c) 2017 Evgeny Safronov + +--------------------------------- (separator) ---------------------------------- + +== Dependency +ryu@1.0.10 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright 2018 Ulf Adams + +--------------------------------- (separator) ---------------------------------- + +== Dependency +ryu@1.0.16 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright 2018 Ulf Adams + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde@1.0.137 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Erick Tryzelaar ,David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde@1.0.140 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Erick Tryzelaar ,David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde@1.0.196 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Erick Tryzelaar ,David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde_bytes@0.11.14 + +== License Type +SPDX:Apache-2.0 + +== Copyright +(no copyright notices found) + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde_bytes@0.11.6 + 
+== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 The Rust Project Developers + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde_derive@1.0.140 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Erick Tryzelaar ,David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde_derive@1.0.196 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Erick Tryzelaar ,David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde_json@1.0.113 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Erick Tryzelaar ,David Tolnay ,Alexander Huszagh + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde_json@1.0.81 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Erick Tryzelaar ,David Tolnay ,Alexander Huszagh + +--------------------------------- (separator) ---------------------------------- + +== Dependency +serde_json@1.0.82 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Erick Tryzelaar ,David Tolnay ,Alexander Huszagh + +--------------------------------- (separator) ---------------------------------- + +== Dependency +syn@1.0.98 + +== License Type +SPDX:Apache-2.0 + +== Copyright +David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +syn@2.0.48 + +== License Type +SPDX:Apache-2.0 + +== Copyright +David Tolnay + +--------------------------------- (separator) ---------------------------------- + +== Dependency +time@0.1.44 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 The Rust Project Developers +Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT +Copyright 2012-2014 The Rust Project Developers. 
See the COPYRIGHT + +--------------------------------- (separator) ---------------------------------- + +== Dependency +unicode-ident@1.0.12 + +== License Type +SPDX:Unicode-DFS-2016 + +== Copyright +Copyright © 1991-2022 Unicode, Inc. All rights reserved. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +unicode-ident@1.0.2 + +== License Type +SPDX:Unicode-DFS-2016 + +== Copyright +Copyright © 1991-2022 Unicode, Inc. All rights reserved. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +wasi@0.10.0+wasi-snapshot-preview1 + +== License Type +SPDX:Apache-2.0 + +== Copyright +(no copyright notices found) + +--------------------------------- (separator) ---------------------------------- + +== Dependency +wasi_serde_json@0.1.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +(no copyright notices found) + +--------------------------------- (separator) ---------------------------------- + +== Dependency +wasm-bindgen-backend@0.2.90 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 Alex Crichton + +--------------------------------- (separator) ---------------------------------- + +== Dependency +wasm-bindgen-macro-support@0.2.90 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 Alex Crichton + +--------------------------------- (separator) ---------------------------------- + +== Dependency +wasm-bindgen-macro@0.2.90 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 Alex Crichton + +--------------------------------- (separator) ---------------------------------- + +== Dependency +wasm-bindgen-shared@0.2.90 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 Alex Crichton + +--------------------------------- (separator) ---------------------------------- + +== Dependency +wasm-bindgen@0.2.90 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2014 Alex Crichton + 
+--------------------------------- (separator) ---------------------------------- + +== Dependency +winapi-i686-pc-windows-gnu@0.4.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright © 2016 winapi-rs developers +Copyright © 2016-2018 winapi-rs developers + +--------------------------------- (separator) ---------------------------------- + +== Dependency +winapi-x86_64-pc-windows-gnu@0.4.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright © 2016 winapi-rs developers +Copyright © 2016-2018 winapi-rs developers + +--------------------------------- (separator) ---------------------------------- + +== Dependency +winapi@0.3.9 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) 2015-2018 The winapi-rs Developers + +--------------------------------- (separator) ---------------------------------- + +== Dependency +windows-core@0.52.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) Microsoft Corporation. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +windows-targets@0.52.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) Microsoft Corporation. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +windows_aarch64_gnullvm@0.52.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) Microsoft Corporation. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +windows_aarch64_msvc@0.52.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) Microsoft Corporation. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +windows_i686_gnu@0.52.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) Microsoft Corporation. 
+ +--------------------------------- (separator) ---------------------------------- + +== Dependency +windows_i686_msvc@0.52.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) Microsoft Corporation. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +windows_x86_64_gnu@0.52.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) Microsoft Corporation. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +windows_x86_64_gnullvm@0.52.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) Microsoft Corporation. + +--------------------------------- (separator) ---------------------------------- + +== Dependency +windows_x86_64_msvc@0.52.0 + +== License Type +SPDX:Apache-2.0 + +== Copyright +Copyright (c) Microsoft Corporation. + +----------------------------------- Licenses ----------------------------------- + +--------------------------------- (separator) ---------------------------------- +== SPDX:Apache-2.0 + +Apache License + +Version 2.0, January 2004 + +http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + +"License" shall mean the terms and conditions for use, reproduction, and +distribution as defined by Sections 1 through 9 of this document. + +"Licensor" shall mean the copyright owner or entity authorized by the +copyright owner that is granting the License. + +"Legal Entity" shall mean the union of the acting entity and all other +entities that control, are controlled by, or are under common control with +that entity. For the purposes of this definition, "control" means (i) the +power, direct or indirect, to cause the direction or management of such +entity, whether by contract or otherwise, or (ii) ownership of fifty percent +(50%) or more of the outstanding shares, or (iii) beneficial ownership of such +entity. 
+ +"You" (or "Your") shall mean an individual or Legal Entity exercising +permissions granted by this License. + +"Source" form shall mean the preferred form for making modifications, +including but not limited to software source code, documentation source, and +configuration files. + +"Object" form shall mean any form resulting from mechanical transformation or +translation of a Source form, including but not limited to compiled object +code, generated documentation, and conversions to other media types. + +"Work" shall mean the work of authorship, whether in Source or Object form, +made available under the License, as indicated by a copyright notice that is +included in or attached to the work (an example is provided in the Appendix +below). + +"Derivative Works" shall mean any work, whether in Source or Object form, that +is based on (or derived from) the Work and for which the editorial revisions, +annotations, elaborations, or other modifications represent, as a whole, an +original work of authorship. For the purposes of this License, Derivative +Works shall not include works that remain separable from, or merely link (or +bind by name) to the interfaces of, the Work and Derivative Works thereof. + +"Contribution" shall mean any work of authorship, including the original +version of the Work and any modifications or additions to that Work or +Derivative Works thereof, that is intentionally submitted to Licensor for +inclusion in the Work by the copyright owner or by an individual or Legal +Entity authorized to submit on behalf of the copyright owner. 
For the purposes +of this definition, "submitted" means any form of electronic, verbal, or +written communication sent to the Licensor or its representatives, including +but not limited to communication on electronic mailing lists, source code +control systems, and issue tracking systems that are managed by, or on behalf +of, the Licensor for the purpose of discussing and improving the Work, but +excluding communication that is conspicuously marked or otherwise designated +in writing by the copyright owner as "Not a Contribution." + +"Contributor" shall mean Licensor and any individual or Legal Entity on behalf +of whom a Contribution has been received by Licensor and subsequently +incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: + +(a) You must give any other recipients of the Work or Derivative Works a copy +of this License; and + +(b) You must cause any modified files to carry prominent notices stating that +You changed the files; and + +(c) You must retain, in the Source form of any Derivative Works that You +distribute, all copyright, patent, trademark, and attribution notices from the +Source form of the Work, excluding those notices that do not pertain to any +part of the Derivative Works; and + +(d) If the Work includes a "NOTICE" text file as part of its distribution, +then any Derivative Works that You distribute must include a readable copy of +the attribution notices contained within such NOTICE file, excluding those +notices that do not pertain to any part of the Derivative Works, in at least +one of the following places: within a NOTICE text file distributed as part of +the Derivative Works; within the Source form or documentation, if provided +along with the Derivative Works; or, within a display generated by the +Derivative Works, if and wherever such third-party notices normally appear. +The contents of the NOTICE file are for informational purposes only and do not +modify the License. 
You may add Your own attribution notices within Derivative +Works that You distribute, alongside or as an addendum to the NOTICE text from +the Work, provided that such additional attribution notices cannot be +construed as modifying the License. + +You may add Your own copyright statement to Your modifications and may provide +additional or different license terms and conditions for use, reproduction, or +distribution of Your modifications, or for any such Derivative Works as a +whole, provided Your use, reproduction, and distribution of the Work otherwise +complies with the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + +To apply the Apache License to your work, attach the following boilerplate +notice, with the fields enclosed by brackets "[]" replaced with your own +identifying information. (Don't include the brackets!) The text should be +enclosed in the appropriate comment syntax for the file format. We also +recommend that a file or class name and description of purpose be included on +the same "printed page" as the copyright notice for easier identification +within third-party archives. 
+ +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); + +you may not use this file except in compliance with the License. + +You may obtain a copy of the License at + +http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software + +distributed under the License is distributed on an "AS IS" BASIS, + +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + +See the License for the specific language governing permissions and + +limitations under the License. + + + +--------------------------------- (separator) ---------------------------------- +== SPDX:BSD-3-Clause--modified-by-Google + +Redistribution and use in source and binary forms, with +or without modification, are permitted provided that the following conditions +are met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + +--------------------------------- (separator) ---------------------------------- +== SPDX:MIT + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + + + +--------------------------------- (separator) ---------------------------------- +== SPDX:Unicode-DFS-2016 + +UNICODE, INC. 
LICENSE AGREEMENT - DATA FILES AND SOFTWARE + +Unicode Data Files include all data files under the directories http://www.unicode.org/Public/, http://www.unicode.org/reports/, http://www.unicode.org/cldr/data/, http://source.icu-project.org/repos/icu/, http://www.unicode.org/ivd/data/, and http://www.unicode.org/utility/trac/browser/. + +Unicode Data Files do not include PDF online code charts under the directory http://www.unicode.org/Public/. + +Software includes any source code published in the Unicode Standard or under the directories http://www.unicode.org/Public/, http://www.unicode.org/reports/, http://www.unicode.org/cldr/data/, http://source.icu-project.org/repos/icu/, and http://www.unicode.org/utility/trac/browser/. + +NOTICE TO USER: Carefully read the following legal agreement. BY DOWNLOADING, INSTALLING, COPYING OR OTHERWISE USING UNICODE INC.'S DATA FILES ("DATA FILES"), AND/OR SOFTWARE ("SOFTWARE"), YOU UNEQUIVOCALLY ACCEPT, AND AGREE TO BE BOUND BY, ALL OF THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE, DO NOT DOWNLOAD, INSTALL, COPY, DISTRIBUTE OR USE THE DATA FILES OR SOFTWARE. + +COPYRIGHT AND PERMISSION NOTICE + +Copyright © 1991-2016 Unicode, Inc. All rights reserved. Distributed under the Terms of Use in http://www.unicode.org/copyright.html. 
+ +Permission is hereby granted, free of charge, to any person obtaining a copy of the Unicode data files and any associated documentation (the "Data Files") or Unicode software and any associated documentation (the "Software") to deal in the Data Files or Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or sell copies of the Data Files or Software, and to permit persons to whom the Data Files or Software are furnished to do so, provided that either + +(a) this copyright and permission notice appear with all copies of the Data Files or Software, or +(b) this copyright and permission notice appear in associated Documentation. +THE DATA FILES AND SOFTWARE ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THE DATA FILES OR SOFTWARE. + +Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to promote the sale, use or other dealings in these Data Files or Software without prior written authorization of the copyright holder. + + +--------------------------------- (separator) ---------------------------------- +== SPDX:Unlicense + +This is free and unencumbered software released into the public domain. + +Anyone is free to copy, modify, publish, use, compile, sell, or distribute +this software, either in source code form or as a compiled binary, for any +purpose, commercial or non-commercial, and by any means. 
+ +In jurisdictions that recognize copyright laws, the author or authors of this +software dedicate any and all copyright interest in the software to the public +domain. We make this dedication for the benefit of the public at large and to +the detriment of our heirs and + +successors. We intend this dedication to be an overt act of relinquishment in +perpetuity of all present and future rights to this software under copyright +law. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +For more information, please refer to <https://unlicense.org> + + diff --git a/buildrpm/fluent-bit-container-image.spec b/buildrpm/fluent-bit-container-image.spec new file mode 100644 index 00000000000..d4bfbc67a94 --- /dev/null +++ b/buildrpm/fluent-bit-container-image.spec @@ -0,0 +1,51 @@ + + +%if 0%{?with_debug} +%global _dwz_low_mem_die_limit 0 +%else +%global debug_package %{nil} +%endif + +%{!?registry: %global registry container-registry.oracle.com/olcne} +%global app_name fluent-bit +%global app_version 4.2.0 +%global oracle_release_version 1 +%global _buildhost build-ol%{?oraclelinux}-%{?_arch}.oracle.com + +Name: %{app_name}-container-image +Version: %{app_version} +Release: %{oracle_release_version}%{?dist} +Summary: Telemetry agent for logs, metrics, and traces. +License: Apache-2.0 +Group: System/Management +Url: https://github.com/fluent/fluent-bit.git +Source: %{name}-%{version}.tar.bz2 + +%description +Telemetry agent for logs, metrics, and traces.
+ +%prep +%setup -q -n %{name}-%{version} + +%build +%global rpm_name %{app_name}-%{version}-%{release}.%{_build_arch} +%global docker_tag %{registry}/%{app_name}-base:v%{version} + +yum clean all +yumdownloader --destdir=${PWD}/rpms %{rpm_name} + +docker build --pull \ + --build-arg https_proxy=${https_proxy} \ + -t %{docker_tag} -f ./olm/builds/Dockerfile . +docker save -o %{app_name}.tar %{docker_tag} + +%install +%__install -D -m 644 %{app_name}.tar %{buildroot}/usr/local/share/olcne/%{app_name}.tar + +%files +%license LICENSE THIRD_PARTY_LICENSES.txt olm/SECURITY.md +/usr/local/share/olcne/%{app_name}.tar + +%changelog +* Thu Nov 13 2025 Oracle Cloud Native Environment Authors - 4.2.0-1 +- Added Oracle specific build files for fluent-bit. diff --git a/buildrpm/fluent-bit.spec b/buildrpm/fluent-bit.spec new file mode 100644 index 00000000000..a2a305e7368 --- /dev/null +++ b/buildrpm/fluent-bit.spec @@ -0,0 +1,75 @@ + + +%if 0%{?with_debug} +%global _dwz_low_mem_die_limit 0 +%else +%global debug_package %{nil} +%endif + +%global app_name fluent-bit +%global app_version 4.2.0 +%global oracle_release_version 1 +%global _buildhost build-ol%{?oraclelinux}-%{?_arch}.oracle.com + +Name: %{app_name} +Version: %{app_version} +Release: %{oracle_release_version}%{?dist} +Summary: Telemetry agent for logs, metrics, and traces. 
+License: Apache-2.0 +Group: System/Management +Url: https://github.com/fluent/fluent-bit.git +Source: %{name}-%{version}.tar.bz2 +BuildRequires: git +BuildRequires: cmake +BuildRequires: gcc +BuildRequires: gcc-c++ +BuildRequires: flex +BuildRequires: bison +BuildRequires: libyaml-devel +BuildRequires: make +BuildRequires: openssl-devel +BuildRequires: libicu +BuildRequires: libicu-devel +BuildRequires: libpq +BuildRequires: libpq-devel +BuildRequires: cyrus-sasl-devel +BuildRequires: systemd-devel +BuildRequires: zlib-devel +BuildRequires: postgresql +BuildRequires: postgresql-server +BuildRequires: cpio + +%description +Telemetry agent for logs, metrics, and traces. + +%prep +%setup -q -n %{name}-%{version} + +%build +cd build +cmake -DFLB_RELEASE=On \ + -DFLB_JEMALLOC=On \ + -DFLB_TLS=On \ + -DFLB_SHARED_LIB=Off \ + -DFLB_EXAMPLES=Off \ + -DFLB_HTTP_SERVER=On \ + -DFLB_IN_EXEC=Off \ + -DFLB_IN_SYSTEMD=On \ + -DFLB_OUT_KAFKA=On \ + -DFLB_OUT_PGSQL=On \ + -DFLB_JEMALLOC_OPTIONS="--with-lg-vaddr=48" \ + -DFLB_LOG_NO_CONTROL_CHARS=On \ + .. +make -j "$(getconf _NPROCESSORS_ONLN)" + +%install +install -m 755 -d %{buildroot}/%{app_name}/bin +install -m 755 build/bin/%{app_name} %{buildroot}/%{app_name}/bin/%{app_name} + +%files +%license LICENSE THIRD_PARTY_LICENSES.txt olm/SECURITY.md +/%{app_name}/ + +%changelog +* Thu Nov 13 2025 Oracle Cloud Native Environment Authors - 4.2.0-1 +- Added Oracle specific build files for fluent-bit. diff --git a/olm/SECURITY.md b/olm/SECURITY.md new file mode 100644 index 00000000000..2ca81027ff2 --- /dev/null +++ b/olm/SECURITY.md @@ -0,0 +1,38 @@ +# Reporting security vulnerabilities + +Oracle values the independent security research community and believes that +responsible disclosure of security vulnerabilities helps us ensure the security +and privacy of all our users. + +Please do NOT raise a GitHub Issue to report a security vulnerability.
If you +believe you have found a security vulnerability, please submit a report to +[secalert_us@oracle.com][1], preferably with a proof of concept. Please review +some additional information on [how to report security vulnerabilities to Oracle][2]. +We encourage people who contact Oracle Security to use email encryption using +[our encryption key][3]. + +We ask that you do not use other channels or contact the project maintainers +directly. + +Non-vulnerability-related security issues, including ideas for new or improved +security features, are welcome on GitHub Issues. + +## Security updates, alerts and bulletins + +Security updates will be released on a regular cadence. Many of our projects +will typically release security fixes in conjunction with the +Oracle Critical Patch Update program. Additional +information, including past advisories, is available on our [security alerts][4] +page. + +## Security-related information + +We will provide security-related information such as a threat model, considerations +for secure use, or any known security issues in our documentation. Please note +that labs and sample code are intended to demonstrate a concept and may not be +sufficiently hardened for production use.
+ +[1]: mailto:secalert_us@oracle.com +[2]: https://www.oracle.com/corporate/security-practices/assurance/vulnerability/reporting.html +[3]: https://www.oracle.com/security-alerts/encryptionkey.html +[4]: https://www.oracle.com/security-alerts/ diff --git a/olm/builds/Dockerfile b/olm/builds/Dockerfile new file mode 100644 index 00000000000..fc540ae6deb --- /dev/null +++ b/olm/builds/Dockerfile @@ -0,0 +1,31 @@ +FROM container-registry.oracle.com/os/oraclelinux:8-slim + +COPY rpms /tmp/ + +WORKDIR /fluent-bit + +RUN microdnf update -y && \ + microdnf install -y libpq && \ + microdnf clean all && \ + rpm -i /tmp/*.rpm && \ + rm /tmp/*.rpm + +COPY conf/fluent-bit.conf \ + conf/parsers.conf \ + conf/parsers_ambassador.conf \ + conf/parsers_java.conf \ + conf/parsers_extra.conf \ + conf/parsers_openstack.conf \ + conf/parsers_cinder.conf \ + conf/plugins.conf \ + /fluent-bit/etc/ + +# Generate schema and include as part of the container image +RUN /fluent-bit/bin/fluent-bit -J > /fluent-bit/etc/schema.json + +EXPOSE 2020 + +# Entry point +ENTRYPOINT [ "/fluent-bit/bin/fluent-bit" ] +CMD ["/fluent-bit/bin/fluent-bit", "-c", "/fluent-bit/etc/fluent-bit.conf"] + diff --git a/olm/jenkins/ci/Jenkinsfile b/olm/jenkins/ci/Jenkinsfile new file mode 100644 index 00000000000..9cc5bf4f56e --- /dev/null +++ b/olm/jenkins/ci/Jenkinsfile @@ -0,0 +1,13 @@ +@Library('olcne-pipeline') _ +import com.oracle.olcne.pipeline.BranchPattern + +String version = '4.2.0' +String imgTag = "v" + version + +olcnePipeline( + branchPattern: new BranchPattern(master: "oracle/release/" + version, feature: '(?!^release/.*$)(^.*$)'), + containers: [('container-registry.oracle.com/olcne/fluent-bit-base:' + imgTag): 'olcne/fluent-bit-base:' + imgTag], + architectures: ['x86_64', 'aarch64'], + platforms: ['ol8'], + yumOL8Repos: ['ol8_codeready_builder'] +)