chore: add Trivy security scanning and fix non-root container users (#3082)

tac0turtle wants to merge 6 commits into `main`
Conversation
Add `make trivy-scan` (filesystem + image) via dockerized Trivy. Fix DS-0002 (HIGH): run evm, testapp, and local-da containers as non-root `ev-node` user, consistent with the grpc Dockerfile. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
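For reviewers trying this locally, invocation would look roughly like the following. This is a sketch, not the final interface: the `TRIVY_SEVERITY` override is an assumption based on the severity variable discussed in the review comments below.

```bash
# Run both the filesystem and image scans via dockerized Trivy
make trivy-scan

# Hypothetical override: restrict output to critical findings only
make trivy-scan TRIVY_SEVERITY=CRITICAL
```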
The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).
Summary of Changes

Hello @tac0turtle, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly improves the project's security by introducing automated vulnerability scanning with Trivy and hardening several Docker containers to run with reduced privileges. These changes aim to proactively identify and mitigate security risks, ensuring a more robust and secure application environment.

Highlights
Activity
Claude finished @auricom's task in 1m 32s

Code Review
Good security work overall. The non-root container changes are well-structured and consistent with the existing grpc Dockerfile.

Critical: Trivy scans don't fail on findings

By default, Trivy exits with code 0 even when vulnerabilities are found, so these scans never gate CI. A configurable exit code fixes that:

```diff
+trivy_exit_code := env("TRIVY_EXIT_CODE", "1")

 # trivy-scan-fs
     fs --scanners vuln,misconfig,secret \
+      --exit-code {{trivy_exit_code}} \
       --severity {{trivy_severity}} \
       /workspace

 # trivy-scan-image
     image --severity {{trivy_severity}} \
+      --exit-code {{trivy_exit_code}} \
       $img; \
```

Using a variable allows callers to opt out when needed.
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main    #3082      +/-   ##
==========================================
- Coverage   61.00%   60.94%   -0.06%
==========================================
  Files         113      113
  Lines       11700    11700
==========================================
- Hits         7137     7131       -6
- Misses       3753     3759       +6
  Partials      810      810
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Code Review
This pull request introduces Trivy security scanning capabilities and enhances container security by configuring evm, testapp, and local-da containers to run as a non-root ev-node user. The addition of scripts/security.mk provides a convenient way to perform filesystem and image scans. The changes to the Dockerfiles for evm, testapp, and local-da correctly implement the non-root user principle, which is a significant security improvement. There are minor opportunities for consistency and clarity in the Dockerfile user creation and the default image scanning configuration.
```dockerfile
WORKDIR /root
RUN addgroup -g 1000 ev-node && \
    adduser -u 1000 -G ev-node -s /bin/sh -D ev-node
```
The adduser command uses the -D flag, which prevents the creation of a home directory. However, the subsequent WORKDIR /home/ev-node implies that /home/ev-node is intended to be the user's home directory. For consistency with apps/testapp/Dockerfile and clearer intent, it's better to allow adduser to create the home directory by removing the -D flag, or explicitly create it if -D is strictly necessary for other reasons. Removing -D is the most straightforward approach to align with the WORKDIR and chown commands.
```dockerfile
adduser -u 1000 -G ev-node -s /bin/sh ev-node
```
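Putting the reviewers' suggestions together, a sketch of the full non-root stanza might look like the following. This is illustrative only: the base image tag and entrypoint filename are assumptions, `ENV HOME` comes from a later review note, and `-D` is kept per the BusyBox discussion elsewhere in this thread.

```dockerfile
# Sketch only -- not the PR's exact Dockerfile contents
FROM alpine:3.19

# -D: no password prompt (BusyBox adduser), keeps the build non-interactive
RUN addgroup -g 1000 ev-node && \
    adduser -D -u 1000 -G ev-node -s /bin/sh ev-node

# Ensure $HOME resolves correctly after dropping privileges
ENV HOME=/home/ev-node
WORKDIR /home/ev-node

# entrypoint.sh is a hypothetical name; chown so the non-root user can read it
COPY --chown=ev-node:ev-node entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh

USER ev-node
ENTRYPOINT ["./entrypoint.sh"]
```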
scripts/security.mk (outdated)

```makefile
TRIVY_CACHE_VOLUME := trivy-cache

# Docker images to scan (space-separated, override or extend as needed)
SCAN_IMAGES ?= evstack:local-dev
```
The SCAN_IMAGES variable defaults to evstack:local-dev. While the comment indicates it can be overridden, having a single specific image as the default might lead to other relevant images being missed during scans if the user doesn't explicitly configure this variable. Consider making this variable empty by default or providing a more generic placeholder, encouraging users to define the images they intend to scan, or adding a clear example of how to extend it for multiple images.
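One low-friction middle ground is to keep a single default but document multi-image overrides at the call site. The extra image tags below are hypothetical examples, not images this repo necessarily builds:

```bash
# Scan several locally built images in one run (tags are examples only)
make trivy-scan SCAN_IMAGES="evstack:local-dev testapp:local-dev local-da:local-dev"
```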
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Covers bind mounts, named volumes, Kubernetes init containers, fsGroup, and docker-compose. Links from the changelog entry. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Good call to improve the Dockerfiles. The tests fail due to read-only volumes now.
# Conflicts:
#	CHANGELOG.md
#	Makefile
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)

📝 Walkthrough

Adds a Justfile-based Trivy security workflow and converts runtime container users to non-root `ev-node`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant Just as Just runner
    participant Trivy as Trivy (Docker)
    participant Docker as Docker daemon
    participant FS as Repo FS
    participant Registry as Image registry
    Dev->>Just: run `just trivy-scan`
    Just->>Just: run `trivy-scan-fs` and `trivy-scan-image`
    Just->>Trivy: start Trivy for FS scan (via Docker image)
    Trivy->>FS: scan filesystem (vuln, misconfig, secret)
    Trivy-->>Just: FS scan results
    Just->>Docker: enumerate images from `scan_images`
    loop per image
        Just->>Trivy: start Trivy for image (with cache volume)
        Trivy->>Docker: analyze image layers
        Trivy->>Registry: fetch layers if needed
        Trivy-->>Just: image scan results
    end
    Just-->>Dev: print progress and completion messages
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.just/security.just:
- Around line 11-34: The Trivy targets trivy-scan-fs and trivy-scan-image
currently do not fail CI when findings exist; update both command invocations
(the fs command inside target trivy-scan-fs and the image command inside target
trivy-scan-image) to include an exit code flag (e.g., add --exit-code 1) so
Trivy returns non-zero on findings; optionally expose a variable (e.g.,
trivy_exit_code with default 1) and use {{trivy_exit_code}} in both invocations
so the behavior is configurable while ensuring CI fails on detected issues.
- Line 3: The Trivy image is pinned to "latest" via the trivy_image variable
which allows non-reproducible scans; change trivy_image to a specific release
tag or digest (e.g., aquasec/trivy:0.48.0 or the image@sha256:...) to lock the
tool version, and update any scan invocations (where Trivy is run) to optionally
include --skip-db-update for PR/gate scans while reserving DB updates for
scheduled jobs; update the trivy_image variable and adjust the scan invocation
logic that references trivy_image to implement these changes.
In `@apps/evm/Dockerfile`:
- Around line 23-30: The Dockerfile switches to the non-root user (USER ev-node)
but never sets HOME, so entrypoint.sh (which defaults to $HOME/.evm) may still
resolve to /root; add an ENV HOME=/home/ev-node before the USER ev-node
instruction so $HOME points to /home/ev-node at runtime, and keep the existing
WORKDIR /home/ev-node and chown/chmod steps to ensure ownership/permissions for
entrypoint.sh and the home directory.
In `@CHANGELOG.md`:
- Line 27: Update the changelog entry that starts with "**BREAKING:** Docker
images for `evm`, `testapp`, and `local-da` now run as non-root user `ev-node`
(uid 1000)..." to append the PR reference (e.g., "(`#1234`)") consistent with
neighboring entries; edit the line to include the PR link or number so the entry
matches the format used elsewhere for traceability.
In `@docs/guides/migrate-docker-nonroot.md`:
- Around line 39-44: The step currently pulls the new image but incorrectly uses
docker start on the existing container (so the old image stays in use); replace
the docker start <container-name> instruction with commands that remove and
recreate the container so it runs the new image (e.g., docker rm
<container-name> followed by docker run ... <image> ...), or instruct users to
use docker-compose recreate / docker-compose up -d --force-recreate when using
compose; ensure the docs explicitly state that the container must be recreated
to pick up the new image.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 153f9ac0-dce3-4e35-8776-18168486bb43
📒 Files selected for processing (8)
- .just/security.just
- CHANGELOG.md
- apps/evm/Dockerfile
- apps/testapp/Dockerfile
- docs/.vitepress/config.ts
- docs/guides/migrate-docker-nonroot.md
- justfile
- tools/local-da/Dockerfile
```diff
@@ -0,0 +1,35 @@
+# Security scanning with Trivy (https://trivy.dev)
+
+trivy_image := "aquasec/trivy:latest"
```
🧩 Analysis chain
🌐 Web query:
Trivy container image latest tag best practices CI/CD reproducibility
💡 Result:
Recommendation: don't use `:latest` for Trivy in CI/CD if you care about reproducibility.

`latest` (and other floating tags) can be retagged to different image contents over time, which breaks determinism. Docker explicitly recommends pinning by digest when you need fully repeatable builds/scans. [1]

Best practices for reproducible Trivy container-image scanning:

- Pin the Trivy tool version (and ideally the image digest). Prefer an explicit Trivy release version over `latest`. If you use GitHub Actions, pin the action and the Trivy version it installs (the action supports `version: latest` or `version: vX.Y.Z`; don't use `latest` for reproducibility). [2] Avoid "canary" images/builds for production pipelines. [3]
- Control the vulnerability database lifecycle (this is a major source of "same image, different results"). Trivy's vulnerability DB is updated frequently (published every ~6 hours), so scan results can change even when the target image is unchanged. [4] For reproducible runs, cache the DB between CI jobs and/or runs (Trivy explicitly calls out CI caching benefits). [4] For "frozen" results (e.g., audits), run with `--skip-db-update` to prevent pulling a newer DB during the scan. [4]
- If you need strict repeatability across networks/regions, self-host or mirror the Trivy DB. Trivy distributes DBs as OCI artifacts (notably via GHCR), and its "air-gapped/offline" guidance is designed around controlling that dependency. [5] In locked-down enterprise CI, mirror/self-host the DB and point Trivy at your controlled source (so the exact DB used is explicit and reviewable). [5]
- Separate the "reproducible security gate" from the "continuously updated security signal". A common pattern: the PR gate uses a pinned Trivy version plus a cached/frozen DB (stable pass/fail), while a scheduled daily job uses the same Trivy version (or one intentionally updated) with DB updates enabled to detect newly disclosed CVEs.

Sources: Docker pinning/digest guidance for determinism [1]; Trivy Action supports explicit Trivy version pinning [2]; Trivy warns about canary builds [3]; Trivy DB update frequency + skip-update + caching [4]; Trivy DB distribution via GHCR / offline control concepts [5].
🏁 Script executed:

```bash
# Examine the .just/security.just file to understand context
cat -n .just/security.just
```

Repository: evstack/ev-node
Length of output: 1566

🏁 Script executed:

```bash
# Check if there are other references to trivy_image or container images in the project
rg "trivy_image|docker|container|image" --type-list | head -20
fd -t f "*.just" -exec rg "trivy_image" {} +
```

Repository: evstack/ev-node
Length of output: 693
Pin Trivy version and consider database update strategy.

Line 3 uses `:latest`, which allows the Trivy vulnerability database to update automatically (~every 6 hours), causing identical image scans to produce different results. Pin an explicit Trivy version (and optionally the image digest) for reproducible security scans.

The project already caches the Trivy database via `trivy_cache_volume`, which helps. For further reproducibility, consider:

- Pinning to a specific Trivy release (e.g., `aquasec/trivy:0.48.0`) or digest
- Using `--skip-db-update` in PR/gate scans to freeze results; enable DB updates only in scheduled/continuous jobs

🔧 Suggested fix

```diff
-trivy_image := "aquasec/trivy:latest"
+trivy_image := env("TRIVY_IMAGE", "aquasec/trivy:0.48.0")
```
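The gate/scheduled split suggested above could be sketched in just syntax as follows. The recipe names are hypothetical; the variables mirror the conventions of `.just/security.just` as quoted in this review:

```just
# Pin the tool for reproducible gate scans (tag shown is an example release)
trivy_image := env("TRIVY_IMAGE", "aquasec/trivy:0.48.0")

# PR gate: frozen DB so pass/fail is stable for a given commit
trivy-gate img:
    @{{trivy_run}} {{trivy_image}} \
        image --skip-db-update --exit-code 1 \
        --severity {{trivy_severity}} {{img}}

# Scheduled job: DB updates on, to surface newly disclosed CVEs
trivy-nightly img:
    @{{trivy_run}} {{trivy_image}} \
        image --exit-code 1 \
        --severity {{trivy_severity}} {{img}}
```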
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.just/security.just at line 3, The Trivy image is pinned to "latest" via the
trivy_image variable which allows non-reproducible scans; change trivy_image to
a specific release tag or digest (e.g., aquasec/trivy:0.48.0 or the
image@sha256:...) to lock the tool version, and update any scan invocations
(where Trivy is run) to optionally include --skip-db-update for PR/gate scans
while reserving DB updates for scheduled jobs; update the trivy_image variable
and adjust the scan invocation logic that references trivy_image to implement
these changes.
```just
trivy-scan: trivy-scan-fs trivy-scan-image

# Scan repo for dependency vulnerabilities, misconfigs, and secrets
trivy-scan-fs:
    @echo "--> Scanning repository filesystem with Trivy"
    @{{trivy_run}} \
        -v {{justfile_directory()}}:/workspace \
        {{trivy_image}} \
        fs --scanners vuln,misconfig,secret \
        --severity {{trivy_severity}} \
        /workspace
    @echo "--> Filesystem scan complete"

# Scan built Docker images for vulnerabilities
trivy-scan-image:
    @echo "--> Scanning Docker images with Trivy"
    @for img in {{scan_images}}; do \
        echo "--> Scanning image: $img"; \
        {{trivy_run}} \
            -v /var/run/docker.sock:/var/run/docker.sock \
            {{trivy_image}} \
            image --severity {{trivy_severity}} \
            $img; \
    done
```
🧩 Analysis chain
🌐 Web query:
Trivy exit code vulnerabilities found documentation
💡 Result:
Trivy's default behavior is to exit with code 0 even if vulnerabilities (or other security issues) are found. To make a CI job fail when findings exist, set a non-zero exit code via `--exit-code` (or the config/action equivalent). (trivy.dev)

CLI (Trivy):

- `--exit-code <n>`: "Exit code when vulnerabilities were found (default: 0)." (trivy.dev)
- Common pattern (fail build if matching findings are present): `trivy image --severity HIGH,CRITICAL --exit-code 1 <image>` or `trivy fs --severity HIGH,CRITICAL --exit-code 1 <path>` (trivy.dev)

GitHub Action (aquasecurity/trivy-action):

- `exit-code` input: "Exit code when specified vulnerabilities are found" (default 0). (github.com)
Citations:
- 1: https://trivy.dev/v0.63/docs/configuration/others/?utm_source=openai
- 2: https://trivy.dev/v0.27.1/docs/references/cli/fs/?utm_source=openai
- 3: https://trivy.dev/v0.27.1/docs/references/cli/fs/?utm_source=openai
- 4: https://github.com/aquasecurity/trivy-action?utm_source=openai
Make scans fail on findings.
The current implementation lacks explicit failure behavior. By default, Trivy exits with code 0 even when vulnerabilities are found. Add --exit-code 1 (or a configurable equivalent) to both trivy-scan-fs and trivy-scan-image so that CI jobs fail when security issues are detected.
🔧 Suggested fix

```diff
 trivy_severity := env("TRIVY_SEVERITY", "CRITICAL,HIGH")
+trivy_exit_code := env("TRIVY_EXIT_CODE", "1")
@@
     fs --scanners vuln,misconfig,secret \
+        --exit-code {{trivy_exit_code}} \
         --severity {{trivy_severity}} \
         /workspace
@@
     {{trivy_image}} \
     image --severity {{trivy_severity}} \
+        --exit-code {{trivy_exit_code}} \
         $img; \
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.just/security.just around lines 11 - 34, The Trivy targets trivy-scan-fs
and trivy-scan-image currently do not fail CI when findings exist; update both
command invocations (the fs command inside target trivy-scan-fs and the image
command inside target trivy-scan-image) to include an exit code flag (e.g., add
--exit-code 1) so Trivy returns non-zero on findings; optionally expose a
variable (e.g., trivy_exit_code with default 1) and use {{trivy_exit_code}} in
both invocations so the behavior is configurable while ensuring CI fails on
detected issues.
### 3. Pull the new image and restart

```bash
docker pull <image>
docker start <container-name>
```
🧩 Analysis chain
🌐 Web query:
docker pull image docker start container does it use new image automatically
💡 Result:
No.

- `docker pull repo:tag` only updates your local image (it may move the `repo:tag` label to a new image ID). Existing containers keep pointing to the image ID they were created from, and that never changes. (forums.docker.com)
- `docker start <container>` starts an already-created container, so it cannot switch to a newer image. (stackoverflow.com)

To use the newly pulled image, you must recreate the container (e.g., `docker rm <container>` then `docker run ... repo:tag`), or force pulling when creating a new container with `docker run --pull=always ...`. (docs.docker.com)
Citations:
- 1: https://forums.docker.com/t/is-it-safe-to-perform-a-docker-pull-while-the-image-is-being-run/10946?utm_source=openai
- 2: https://stackoverflow.com/questions/68585895/does-docker-automatically-update-the-image-when-i-start-my-docker-container?utm_source=openai
- 3: https://docs.docker.com/reference/cli/docker/container/run?utm_source=openai
Step 3 must recreate the container to use the newly pulled image.
The current step pulls a new image but then runs docker start on the existing container, which continues to use the old image. Containers retain references to the specific image ID they were created with; pulling only updates the local tag label. To apply the new image, the container must be recreated with docker rm <container-name> followed by docker run (or equivalent docker-compose recreate), otherwise the migration appears successful but the old image remains active.
🔧 Suggested fix

```diff
 docker pull <image>
-docker start <container-name>
+# Recreate container to use the newly pulled image
+docker rm <container-name>
+docker run ... <image> ...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @docs/guides/migrate-docker-nonroot.md around lines 39 - 44, The step
currently pulls the new image but incorrectly uses docker start on the existing
container (so the old image stays in use); replace the docker start
`<container-name>` instruction with commands that remove and recreate the
container so it runs the new image (e.g., docker rm `<container-name>` followed by
docker run ... `<image>` ...), or instruct users to use docker-compose recreate /
docker-compose up -d --force-recreate when using compose; ensure the docs
explicitly state that the container must be recreated to pick up the new image.
Actionable comments posted: 1
🧹 Nitpick comments (1)
apps/evm/docker-compose.yml (1)

81-83: Align volume names across docker-compose files for consistency. The volume naming convention changed in `apps/evm/docker-compose.yml` from `reth` to `ev-reth-data` and `ev-node-evm-data`, but `execution/evm/docker/docker-compose.yml` and `execution/evm/docker/docker-compose-full-node.yml` still use the older `reth` naming. This inconsistency may cause confusion when switching between configurations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/evm/docker-compose.yml` around lines 81 - 83, The volumes in this compose file use non-canonical names (ev-node-evm-data, ev-reth-data); change them to the canonical volume names used in the other EVM compose files (replace ev-reth-data and ev-node-evm-data with the exact volume identifiers used in execution/evm/docker/docker-compose.yml and execution/evm/docker/docker-compose-full-node.yml), and update any service volume mounts in this file that reference ev-reth-data or ev-node-evm-data to use those canonical names so all compose files are consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/evm/Dockerfile`:
- Around line 22-23: The Dockerfile ADDUSER invocation is missing the Alpine
non-interactive flag; update the adduser command (the line calling adduser -u
1000 -G ev-node -s /bin/sh ev-node) to include the -D flag so the user is
created without an interactive password prompt (matching other Dockerfiles like
apps/grpc/Dockerfile and tools/local-da/Dockerfile) to prevent the Docker build
from hanging.
---
Nitpick comments:
In `@apps/evm/docker-compose.yml`:
- Around line 81-83: The volumes in this compose file use non-canonical names
(ev-node-evm-data, ev-reth-data); change them to the canonical volume names used
in the other EVM compose files (replace ev-reth-data and ev-node-evm-data with
the exact volume identifiers used in execution/evm/docker/docker-compose.yml and
execution/evm/docker/docker-compose-full-node.yml), and update any service
volume mounts in this file that reference ev-reth-data or ev-node-evm-data to
use those canonical names so all compose files are consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e52ddfe0-630d-41da-bfb7-e2f1fb4e0c0a
📒 Files selected for processing (3)
- CHANGELOG.md
- apps/evm/Dockerfile
- apps/evm/docker-compose.yml
apps/evm/Dockerfile (outdated)

```dockerfile
RUN addgroup -g 1000 ev-node && \
    adduser -u 1000 -G ev-node -s /bin/sh ev-node
```
🧩 Analysis chain
🏁 Script executed:

```bash
# Search for adduser usage in Dockerfiles across the repo
rg 'adduser' --type dockerfile -n -B2 -A2
```

Repository: evstack/ev-node
Length of output: 93

🏁 Script executed:

```bash
# Also check for addgroup and adduser patterns
fd -name 'Dockerfile*' -type f | head -20 | xargs -I {} sh -c 'echo "=== {} ===" && grep -n "add\(user\|group\)" {} 2>/dev/null || true'
```

Repository: evstack/ev-node
Length of output: 289
🌐 Web query:
Alpine Linux adduser -D flag documentation required non-interactive
💡 Result:
In Alpine, `adduser` is the BusyBox applet. Its `-D` flag is documented as:

- `-D` — "Do not assign a password" (so it won't prompt, i.e., suitable for non-interactive user creation). [1]

However, in practice BusyBox `adduser -D` has been reported to create the account with a locked password field in /etc/shadow (e.g., `!`), not a usable passwordless login. [1] Alpine's own docs therefore commonly suggest setting a password afterward with `passwd <user>` if one wasn't set during creation. [2]

Where to get the local documentation/manpages on Alpine:

- Install BusyBox's documentation package: `apk add busybox-doc` (package description: BusyBox documentation). [3]
- Or just view the built-in help: `adduser --help` / `busybox adduser --help` (the same option text is quoted in BusyBox's bug report). [1]

Sources:

- [1] BusyBox bug report quoting the `adduser` option doc and showing the `-D` result in /etc/shadow. https://lists.busybox.net/pipermail/busybox-cvs/2018-April/038077.html
- [2] Alpine User Handbook (post-install): set a password with `passwd` if the tool didn't ask. https://docs.alpinelinux.org/user-handbook/0.1a/Working/post-install.html
- [3] Alpine package index: `busybox-doc` package details. https://pkgs.alpinelinux.org/package/edge/main/x86/busybox-doc
🏁 Script executed:

```bash
# Find all Dockerfile files
find . -name 'Dockerfile*' -type f
```

Repository: evstack/ev-node
Length of output: 157

🏁 Script executed:

```bash
# Check the specific file mentioned in the review
cat -n apps/evm/Dockerfile | sed -n '15,35p'
```

Repository: evstack/ev-node
Length of output: 660

🏁 Script executed:

```bash
# Search for adduser in any Dockerfiles
find . -name 'Dockerfile*' -type f -exec grep -l 'adduser' {} \;
```

Repository: evstack/ev-node
Length of output: 131

🏁 Script executed:

```bash
# Check how other Dockerfiles in the repo create users
grep -n 'adduser' ./apps/grpc/Dockerfile ./tools/local-da/Dockerfile -A 1 -B 1
```

Repository: evstack/ev-node
Length of output: 408

🏁 Script executed:

```bash
# Also check apps/testapp since it was in the list
grep -n 'adduser' ./apps/testapp/Dockerfile -A 1 -B 1
```

Repository: evstack/ev-node
Length of output: 41
Missing -D flag in adduser command will cause Docker build to hang.
Alpine's adduser requires the -D flag to skip the interactive password prompt. Without it, the Docker build will hang indefinitely during the password prompt phase. This is inconsistent with other Dockerfiles in the repository (e.g., apps/grpc/Dockerfile, tools/local-da/Dockerfile), which correctly use the -D flag.
🔧 Proposed fix

```diff
 RUN addgroup -g 1000 ev-node && \
-    adduser -u 1000 -G ev-node -s /bin/sh ev-node
+    adduser -D -u 1000 -G ev-node -s /bin/sh ev-node
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/evm/Dockerfile` around lines 22 - 23, The Dockerfile ADDUSER invocation
is missing the Alpine non-interactive flag; update the adduser command (the line
calling adduser -u 1000 -G ev-node -s /bin/sh ev-node) to include the -D flag so
the user is created without an interactive password prompt (matching other
Dockerfiles like apps/grpc/Dockerfile and tools/local-da/Dockerfile) to prevent
the Docker build from hanging.
Force-pushed from 86a4655 to 8cb45b0
It seems tastora is hardcoding /var/evstack as the home folder for tests. @chatton, can we make it optional or rely on the default $HOME value now that we have a non-root image?
Add `make trivy-scan` (filesystem + image) via dockerized Trivy. Fix DS-0002 (HIGH): run evm, testapp, and local-da containers as non-root `ev-node` user, consistent with the grpc Dockerfile.

Overview
add container users
Summary by CodeRabbit
Breaking Changes
New Features
Documentation