
feat: workspace filesystem explorer tab#294

Open
jamiepine wants to merge 1 commit into main from feat/workspace-files-tab

Conversation

@jamiepine jamiepine (Member) commented Mar 3, 2026

Summary

  • Replace the Ingest tab with a unified Files tab that provides a full filesystem explorer for the agent's workspace directory
  • Add 7 new backend API endpoints (/agents/files/*) for list, read, download, write, delete, rename, and upload operations with workspace boundary enforcement
  • Build a React frontend with breadcrumb navigation, directory listing, text file preview/editing, file CRUD operations, and inline ingest progress tracking

Backend

New module src/api/files.rs with 7 endpoints:

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/agents/files/list` | GET | List directory entries (name, type, size, modified_at) |
| `/agents/files/read` | GET | Read text file content (capped at 1 MiB) |
| `/agents/files/download` | GET | Download file as binary attachment |
| `/agents/files/write` | POST | Create/overwrite file or create directory |
| `/agents/files/delete` | DELETE | Delete file or directory |
| `/agents/files/rename` | PUT | Rename/move within workspace |
| `/agents/files/upload` | POST | Multipart upload with ingest tracking for `ingest/` |
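To make the list endpoint's data concrete, here is a hedged sketch of a directory entry and the dirs-first sort the frontend applies. The names `FilesystemEntry` and `EntryType` are assumptions for illustration; the real definitions live in `src/api/files.rs` and `interface/src/api/client.ts`.

```rust
/// Assumed shape of one list-endpoint entry (name, type, size, modified_at),
/// per the endpoint table above. Illustrative only.
#[derive(Debug, Clone, PartialEq)]
enum EntryType {
    File,
    Directory,
}

#[derive(Debug, Clone)]
struct FilesystemEntry {
    name: String,
    entry_type: EntryType,
    size: u64,
    modified_at: String, // e.g. an RFC 3339 timestamp
}

fn main() {
    let mut entries = vec![
        FilesystemEntry {
            name: "notes.md".into(),
            entry_type: EntryType::File,
            size: 120,
            modified_at: "2026-03-03T00:00:00Z".into(),
        },
        FilesystemEntry {
            name: "ingest".into(),
            entry_type: EntryType::Directory,
            size: 0,
            modified_at: "2026-03-03T00:00:00Z".into(),
        },
    ];
    // Directories sort first, then alphabetically — as in the Files tab listing.
    entries.sort_by_key(|e| (e.entry_type != EntryType::Directory, e.name.clone()));
    assert_eq!(entries[0].name, "ingest");
}
```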

Security:

  • Path resolution reuses best_effort_canonicalize from tools/file.rs (made pub(crate))
  • All paths validated to stay within agent workspace boundary
  • Symlink traversal rejected
  • Protected root directories (ingest/, skills/) cannot be renamed or deleted
  • Protected identity files (SOUL.md, IDENTITY.md, USER.md, ROLE.md) read-only via this API
  • Uploads to ingest/ auto-create ingestion tracking records for the pipeline
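The boundary and protection rules above can be sketched in a few lines. This is an illustrative std-only sketch under stated assumptions: `resolve_in_workspace` and `is_protected_identity_file` here are hypothetical stand-ins for the real `best_effort_canonicalize`-based logic in `src/api/files.rs`, not the actual implementation.

```rust
use std::path::{Component, Path, PathBuf};

// Stand-in for the protected identity-file list described above.
const PROTECTED_IDENTITY_FILES: &[&str] = &["SOUL.md", "IDENTITY.md", "USER.md", "ROLE.md"];

/// Resolve a user-supplied path against the workspace root, rejecting `..`
/// and `.` components up front so the result cannot escape the boundary.
fn resolve_in_workspace(workspace: &Path, raw: &str) -> Option<PathBuf> {
    let cleaned = raw.trim_start_matches('/');
    if Path::new(cleaned)
        .components()
        .any(|c| matches!(c, Component::ParentDir | Component::CurDir))
    {
        return None; // path traversal attempt
    }
    Some(workspace.join(cleaned))
}

/// Identity files are protected only at the workspace root, not in subdirectories.
fn is_protected_identity_file(relative_path: &str) -> bool {
    !relative_path.contains('/') && PROTECTED_IDENTITY_FILES.contains(&relative_path)
}

fn main() {
    let ws = Path::new("/srv/agent/workspace");
    assert!(resolve_in_workspace(ws, "notes/todo.md").is_some());
    assert!(resolve_in_workspace(ws, "../outside.txt").is_none());
    assert!(is_protected_identity_file("SOUL.md"));
    assert!(!is_protected_identity_file("archive/SOUL.md"));
}
```

Rejecting `..`/`.` before joining, rather than relying only on canonicalization, also covers paths whose targets do not exist yet (a point one reviewer raises below for write/rename).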

Frontend

New component AgentFiles.tsx replacing AgentIngest.tsx in the tab bar:

  • Breadcrumb navigation — clickable path segments, URL state via ?path= search param
  • Directory listing — sorted dirs-first, with file type, size, and modified time columns
  • Hover actions — download, rename (inline), delete per entry
  • Text file preview — right split panel with edit mode and save
  • File creation — New File / New Folder dialogs
  • Upload — to current directory; ingest tracking when in ingest/
  • Ingest status bar — shown at bottom when browsing ingest/, reuses existing ingest progress query
  • Protected entries — lock icon on protected dirs/files, write/delete actions hidden

Files Changed

| File | Change |
| --- | --- |
| `src/api/files.rs` | New — 7 API endpoints (586 lines) |
| `src/api.rs` | Add `mod files` |
| `src/api/server.rs` | Import files module, wire 7 routes |
| `src/tools/file.rs` | `best_effort_canonicalize` → `pub(crate)` |
| `interface/src/routes/AgentFiles.tsx` | New — filesystem explorer component (948 lines) |
| `interface/src/api/client.ts` | Add 7 types + 7 API methods |
| `interface/src/components/AgentTabs.tsx` | Replace "Ingest" → "Files" tab |
| `interface/src/router.tsx` | Replace ingest route with files route |

Note

This change introduces a comprehensive workspace filesystem explorer that replaces the ingest-only tab with a full-featured file browser. Backend adds 7 secure API endpoints with strict path validation and workspace boundary enforcement, while the frontend delivers a polished explorer with breadcrumb navigation, inline text editing, and file operations. Protected system files and directories are made read-only at the API layer, and ingest uploads automatically integrate with the existing ingestion pipeline. Total additions: 1,648 lines across backend (586) and frontend (948), with minimal deletions.

Written by Tembo for commit 451674d. This will update automatically on new commits.

Replace the Ingest tab with a unified Files tab that provides full
filesystem browsing, preview, editing, and CRUD operations within
the agent's workspace directory.

Backend (src/api/files.rs):
- 7 new API endpoints: list, read, download, write, delete, rename, upload
- Path resolution reuses best_effort_canonicalize from tools/file.rs
- Workspace boundary enforcement with symlink rejection
- Protected root dirs (ingest/, skills/) and identity files (SOUL.md, etc.)
- Upload to ingest/ auto-creates ingestion tracking records

Frontend (interface/src/routes/AgentFiles.tsx):
- Breadcrumb navigation with clickable path segments
- Directory listing sorted dirs-first with hover actions
- Text file preview panel with inline editing and save
- New file/folder creation dialog, inline rename, delete
- File upload to current directory via button or file picker
- Ingest progress status bar when browsing ingest/ directory
- URL state via ?path= search param for bookmarkable paths
@coderabbitai coderabbitai bot (Contributor) commented Mar 3, 2026

Walkthrough

This pull request adds a comprehensive filesystem API for agent workspace exploration. The backend introduces file operations (list, read, download, write, delete, rename, upload) with path validation and security protections. The frontend adds corresponding client methods, types, and a file manager UI component for navigating and managing workspace files.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Backend Filesystem API** — `src/api/files.rs`, `src/api/server.rs` | New files module implementing seven filesystem endpoints with directory listing, file reading/downloading, writing, deleting, renaming, and uploading. Includes path resolution, symlink rejection, and protection for workspace roots and identity files. Server router wired to expose new endpoints under `/agents/files`. |
| **Frontend API Client** — `interface/src/api/client.ts` | Added seven new types (FilesystemEntry, FileListResponse, FileReadResponse, FileWriteResponse, FileDeleteResponse, FileRenameResponse, FileUploadResponse) and seven corresponding API methods (filesystemList, filesystemRead, filesystemDownloadUrl, filesystemWrite, filesystemDelete, filesystemRename, filesystemUpload) with appropriate HTTP verbs and response typing. |
| **Frontend Router & Navigation** — `interface/src/router.tsx`, `interface/src/components/AgentTabs.tsx` | Updated route definitions, replaced AgentIngest with AgentFiles component, changed path from `/agents/$agentId/ingest` to `/agents/$agentId/files` with search parameter validation for `path`. Updated tab label from "Ingest" to "Files". |
| **File Manager Component** — `interface/src/routes/AgentFiles.tsx` | New React component providing directory browsing with breadcrumb navigation, file preview/edit workflows for text files, upload and folder creation dialogs, inline renaming, and ingest progress tracking with real-time status updates. |
| **Utility Export** — `src/tools/file.rs` | Changed visibility of `best_effort_canonicalize` from private to `pub(crate)` for crate-wide access. |

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks — ✅ 3 passed
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title directly describes the main feature being added: a workspace filesystem explorer tab that replaces the ingest tab. |
| Description check | ✅ Passed | The description comprehensively covers the changeset, explaining the backend endpoints, security measures, frontend components, and file changes. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which meets the required threshold of 80.00%. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/workspace-files-tab

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ast-grep (0.41.0)
src/api/server.rs


@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 6

🧹 Nitpick comments (1)
interface/src/routes/AgentFiles.tsx (1)

216-285: Consider surfacing mutation failures in the UI.

Write/delete/rename/upload/save failures currently have no user-visible feedback, so permission or server errors appear as silent no-ops.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@interface/src/routes/AgentFiles.tsx` around lines 216 - 285, Add onError
handlers to the file-related mutations (writeMutation, deleteMutation,
renameMutation, uploadMutation, saveFileMutation) so failures are surfaced to
the user: for each mutation, implement an onError callback that receives the
error and displays a user-visible notification (e.g., call a shared
notify/showToast function or set an error state) with a clear message that
includes the operation name and error details, and optionally keep existing
onSuccess behavior (invalidate queries, reset state) unchanged; ensure the
notification helper is imported/used consistently so permission/server errors no
longer fail silently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@interface/src/routes/AgentFiles.tsx`:
- Around line 697-700: The preview lock uses isProtectedIdentityFile("",
selectedFile.split("/").pop() ?? "") which passes an empty directory so any
nested file with a protected basename (e.g., SOUL.md / IDENTITY.md) is blocked;
change the call to provide the actual directory (or full path) so protection
only applies to root files. Locate where editMode and selectedFile are used (the
JSX block calling isProtectedIdentityFile) and replace the empty-string first
arg with the file's directory or full path derived from selectedFile (e.g.,
compute dir = selectedFile.includes("/") ? selectedFile.substring(0,
selectedFile.lastIndexOf("/")) : "" or pass selectedFile itself) so
isProtectedIdentityFile can enforce root-only protection.

In `@src/api/files.rs`:
- Around line 310-313: The handler currently uses tokio::fs::read to load the
entire file into memory (see tokio::fs::read, path, data) which can OOM on large
files; change to open the file with tokio::fs::File::open(&path) and stream it
to the response using a ReaderStream (tokio_util::io::ReaderStream) wrapped in
your HTTP body type (e.g., axum::body::StreamBody or hyper::Body::wrap_stream)
so bytes are sent incrementally; handle/open errors similarly (log via
tracing::warn!(%error, path = %path.display(), ...)) and return
StatusCode::INTERNAL_SERVER_ERROR on failure, but do not allocate the whole file
in memory.
- Around line 525-544: The deduplication branch builds a new filesystem name
(unique) and sets target but the DB/metadata still uses safe_name and SQL errors
are being ignored; update the upload flow in src/api/files.rs so the filename
saved to the DB/metadata is the actual on-disk name (derive it from
target.file_name()/unique after target = target_dir.join(...)) instead of
safe_name, and replace any discarded SQL results (e.g., where `let _ =` or
`ignore` is used around DB calls) with proper error handling: check the Result,
log failures via the existing logger, and return/propagate an appropriate error
to the client on DB failure. Locate the deduplication variables (safe_name,
unique, target) and the subsequent DB insert/update calls to change the
persisted name and remove silent error drops (also apply the same fixes around
the other occurrences mentioned near the second block — lines handling the saved
filename and SQL calls).
- Around line 278-289: The current preview still reads whole file into memory
because tokio::fs::read is used; instead open the file and read only up to
MAX_READ_BYTES into a preallocated buffer. Replace the tokio::fs::read(&path)
call that produces raw with opening tokio::fs::File::open(&path).await and using
tokio::io::AsyncReadExt methods (e.g., read into a Vec<u8> with capacity
read_size or read_exact up to read_size) to fill only read_size bytes, then use
that buffer for slice/content conversion; keep usage of file_size, truncated,
read_size, slice, content and the existing error logging around path and
StatusCode.
- Around line 435-449: The rename handler currently only checks the source path
(old_path) with is_protected_root_dir and is_protected_identity_file; add the
same protection checks for the destination path (request.new_path) so callers
cannot rename another file into a protected root or identity file. After the
existing old_path checks, call is_protected_root_dir(&workspace, &new_path) and
is_protected_identity_file(&workspace, &new_path) (using the same
types/normalization as old_path) and return Err(StatusCode::FORBIDDEN) with a
tracing::warn that includes path = %request.new_path; keep the same log
messages/format and error code to preserve behavior.
- Line 505: The loop using "while let Ok(Some(field)) =
multipart.next_field().await" swallows parse errors and can treat a partial
upload as success; change it to explicitly match on multipart.next_field().await
(e.g., match multipart.next_field().await { Ok(Some(field)) => ..., Ok(None) =>
break, Err(e) => return Err(...) }) so that Err from
multipart.next_field().await is propagated or converted into an appropriate
error response/early return instead of silently breaking, and ensure any
cleanup/rollback logic runs when an Err is encountered.


ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4d4e044 and 451674d.

📒 Files selected for processing (8)
  • interface/src/api/client.ts
  • interface/src/components/AgentTabs.tsx
  • interface/src/router.tsx
  • interface/src/routes/AgentFiles.tsx
  • src/api.rs
  • src/api/files.rs
  • src/api/server.rs
  • src/tools/file.rs

Comment on lines +697 to +700

```tsx
{!editMode && !isProtectedIdentityFile(
	"",
	selectedFile.split("/").pop() ?? "",
) && (
```
⚠️ Potential issue | 🟡 Minor

Identity edit lock check is too broad in preview.

This blocks editing any nested file named SOUL.md/IDENTITY.md etc., while backend protection is root-only.

✏️ Proposed fix

```diff
-								{!editMode && !isProtectedIdentityFile(
-									"",
-									selectedFile.split("/").pop() ?? "",
-								) && (
+								{!editMode && !isProtectedIdentityFile(
+									selectedFile.includes("/")
+										? selectedFile.substring(0, selectedFile.lastIndexOf("/"))
+										: "",
+									selectedFile.split("/").pop() ?? "",
+								) && (
```

Comment on lines +278 to +289

```rust
let file_size = metadata.len();
let truncated = file_size > MAX_READ_BYTES;
let read_size = file_size.min(MAX_READ_BYTES) as usize;

let raw = tokio::fs::read(&path).await.map_err(|error| {
    tracing::warn!(%error, path = %path.display(), "files API: failed to read file");
    StatusCode::INTERNAL_SERVER_ERROR
})?;

// Take only up to the read limit and convert to string (lossy for binary files).
let slice = &raw[..read_size.min(raw.len())];
let content = String::from_utf8_lossy(slice).to_string();
```
⚠️ Potential issue | 🟠 Major

Preview reads still load the entire file into memory.

tokio::fs::read reads full contents before truncation, so large files can still cause memory spikes despite MAX_READ_BYTES.
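A bounded read along the lines the reviewer suggests might look like the following. This is a synchronous std sketch of the idea only, not the handler's actual code: the async version would use `tokio::fs::File` with `AsyncReadExt::take` in the same shape, and `MAX_READ_BYTES` is assumed from the surrounding code.

```rust
use std::fs::File;
use std::io::Read;
use std::path::Path;

const MAX_READ_BYTES: u64 = 1024 * 1024; // 1 MiB cap, as in the read endpoint

/// Read at most MAX_READ_BYTES from the file. The `take` adapter stops the
/// read at the cap, so a multi-gigabyte file never lands fully in memory.
fn read_preview(path: &Path) -> std::io::Result<(Vec<u8>, bool)> {
    let file_size = std::fs::metadata(path)?.len();
    let truncated = file_size > MAX_READ_BYTES;
    let mut buf = Vec::with_capacity(file_size.min(MAX_READ_BYTES) as usize);
    File::open(path)?.take(MAX_READ_BYTES).read_to_end(&mut buf)?;
    Ok((buf, truncated))
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("preview_demo.txt");
    std::fs::write(&path, b"hello, workspace")?;
    let (bytes, truncated) = read_preview(&path)?;
    assert_eq!(bytes, b"hello, workspace");
    assert!(!truncated);
    std::fs::remove_file(&path)?;
    Ok(())
}
```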


Comment on lines +310 to +313

```rust
let data = tokio::fs::read(&path).await.map_err(|error| {
    tracing::warn!(%error, path = %path.display(), "files API: failed to read file for download");
    StatusCode::INTERNAL_SERVER_ERROR
})?;
```
⚠️ Potential issue | 🟠 Major

Download handler buffers full file in RAM.

Using tokio::fs::read here scales poorly for large downloads and can exhaust memory under concurrent requests.
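A chunked copy illustrates the streaming alternative. This is a synchronous sketch of the principle only; in the actual axum handler the equivalent is wrapping a `tokio::fs::File` in `tokio_util::io::ReaderStream` and handing that stream to the response body.

```rust
use std::fs::File;
use std::io::{Read, Write};
use std::path::Path;

/// Copy a file to a writer in fixed-size chunks, so peak memory stays at one
/// chunk regardless of file size.
fn stream_copy<W: Write>(path: &Path, out: &mut W) -> std::io::Result<u64> {
    let mut file = File::open(path)?;
    let mut chunk = [0u8; 64 * 1024];
    let mut total = 0u64;
    loop {
        let n = file.read(&mut chunk)?;
        if n == 0 {
            break; // EOF
        }
        out.write_all(&chunk[..n])?;
        total += n as u64;
    }
    Ok(total)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("download_demo.bin");
    std::fs::write(&path, vec![0u8; 200_000])?;
    let mut sink = Vec::new();
    let sent = stream_copy(&path, &mut sink)?;
    assert_eq!(sent, 200_000);
    std::fs::remove_file(&path)?;
    Ok(())
}
```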


Comment on lines +435 to +449

```rust
// Prevent renaming protected root directories.
if is_protected_root_dir(&workspace, &old_path) {
    tracing::warn!(
        path = %request.old_path,
        "files API: rename rejected for protected root directory"
    );
    return Err(StatusCode::FORBIDDEN);
}
if is_protected_identity_file(&workspace, &old_path) {
    tracing::warn!(
        path = %request.old_path,
        "files API: rename rejected for protected identity file"
    );
    return Err(StatusCode::FORBIDDEN);
}
```
⚠️ Potential issue | 🔴 Critical

Protect rename targets, not just rename sources.

Current checks only guard old_path. A caller can rename another file to a protected identity/root path and bypass the write/delete restrictions.

🔒 Proposed fix

```diff
     if is_protected_root_dir(&workspace, &old_path) {
         tracing::warn!(
             path = %request.old_path,
             "files API: rename rejected for protected root directory"
         );
         return Err(StatusCode::FORBIDDEN);
     }
     if is_protected_identity_file(&workspace, &old_path) {
         tracing::warn!(
             path = %request.old_path,
             "files API: rename rejected for protected identity file"
         );
         return Err(StatusCode::FORBIDDEN);
     }
+    if is_protected_root_dir(&workspace, &new_path) {
+        tracing::warn!(
+            path = %request.new_path,
+            "files API: rename target is a protected root directory"
+        );
+        return Err(StatusCode::FORBIDDEN);
+    }
+    if is_protected_identity_file(&workspace, &new_path) {
+        tracing::warn!(
+            path = %request.new_path,
+            "files API: rename target is a protected identity file"
+        );
+        return Err(StatusCode::FORBIDDEN);
+    }
```


Comment on line +505

```rust
    let mut uploaded = Vec::new();

    while let Ok(Some(field)) = multipart.next_field().await {
```

⚠️ Potential issue | 🟠 Major

Multipart parse errors are currently swallowed.

while let Ok(Some(field)) = ... exits silently on the first parse error and can return success for partial uploads.

🧩 Proposed fix

```diff
-    while let Ok(Some(field)) = multipart.next_field().await {
+    while let Some(field) = multipart.next_field().await.map_err(|error| {
+        tracing::warn!(%error, "files API: failed to parse multipart upload");
+        StatusCode::BAD_REQUEST
+    })? {
```

Comment on lines +525 to +544

```rust
let mut target = target_dir.join(safe_name);

// Deduplicate filenames if the target already exists.
if target.exists() {
    let stem = Path::new(safe_name)
        .file_stem()
        .and_then(|s| s.to_str())
        .unwrap_or("upload");
    let ext = Path::new(safe_name)
        .extension()
        .and_then(|e| e.to_str())
        .unwrap_or("txt");
    let unique = format!(
        "{}-{}.{}",
        stem,
        &uuid::Uuid::new_v4().to_string()[..8],
        ext
    );
    target = target_dir.join(unique);
}
```
⚠️ Potential issue | 🟠 Major

Ingest upload metadata can be incorrect and DB failures are ignored.

When deduplication changes the stored filename, the API/DB still record safe_name, not the actual saved name. Also, SQL errors are dropped, so clients can get success while tracking failed.

🛠️ Proposed fix

```diff
         let mut target = target_dir.join(safe_name);
 
         // Deduplicate filenames if the target already exists.
         if target.exists() {
@@
             target = target_dir.join(unique);
         }
+        let stored_name = target
+            .file_name()
+            .and_then(|n| n.to_str())
+            .unwrap_or(safe_name)
+            .to_string();
 
         tokio::fs::write(&target, &data).await.map_err(|error| {
             tracing::warn!(%error, path = %target.display(), "files API: failed to write uploaded file");
             StatusCode::INTERNAL_SERVER_ERROR
         })?;
@@
-                    let _ = sqlx::query(
+                    sqlx::query(
                         r#"
                         INSERT OR IGNORE INTO ingestion_files (content_hash, filename, file_size, total_chunks, status)
                         VALUES (?, ?, ?, 0, 'queued')
                         "#,
                     )
                     .bind(&hash)
-                    .bind(safe_name)
+                    .bind(&stored_name)
                     .bind(file_size)
                     .execute(pool)
-                    .await;
+                    .await
+                    .map_err(|error| {
+                        tracing::warn!(%error, filename = %stored_name, "files API: failed to create ingestion tracking record");
+                        StatusCode::INTERNAL_SERVER_ERROR
+                    })?;
                 }
             }
         }
@@
-        uploaded.push(safe_name.to_string());
+        uploaded.push(stored_name);
```
As per coding guidelines: "Don't silently discard errors; use let _ = only on channel sends where the receiver may be dropped; handle, log, or propagate all other errors."

Also applies to: 558-568, 581-581


```rust
/// Resolve a user-supplied path relative to the workspace, ensuring it stays
/// within the workspace boundary. Rejects symlinks and `..` traversal.
fn resolve_workspace_path(workspace: &Path, raw: &str) -> Result<PathBuf, StatusCode> {
    let cleaned = raw.trim_start_matches('/');
```
best_effort_canonicalize preserves .. components for non-existent targets, so write_file/rename_file can escape the workspace boundary (e.g. path=../outside.txt). I'd reject ../. components up-front before joining/canonicalizing.

Suggested change

```diff
 let cleaned = raw.trim_start_matches('/');
+if Path::new(cleaned)
+    .components()
+    .any(|c| matches!(c, std::path::Component::ParentDir | std::path::Component::CurDir))
+{
+    tracing::warn!(path = %raw, "files API: path traversal rejected");
+    return Err(StatusCode::FORBIDDEN);
+}
```

```rust
) -> Result<Json<FileRenameResponse>, StatusCode> {
    let workspace = get_workspace(&state, &request.agent_id)?;
    let old_path = resolve_workspace_path(&workspace, &request.old_path)?;
    let new_path = resolve_workspace_path(&workspace, &request.new_path)?;
```
Small but important: this blocks renaming from protected paths, but still allows renaming into a protected root dir / identity filename (and potentially overwriting). I'd also guard the destination.

Suggested change

```diff
 let new_path = resolve_workspace_path(&workspace, &request.new_path)?;
+// Prevent renaming into protected root directories / identity files.
+if is_protected_root_dir(&workspace, &new_path) || is_protected_identity_file(&workspace, &new_path)
+{
+    tracing::warn!(
+        path = %request.new_path,
+        "files API: rename rejected for protected destination"
+    );
+    return Err(StatusCode::FORBIDDEN);
+}
```


```rust
    let mut uploaded = Vec::new();

    while let Ok(Some(field)) = multipart.next_field().await {
```
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

while let Ok(..) will silently stop consuming fields on the first multipart error and still return 200. I'd propagate the parse error as 400.

Suggested change
while let Ok(Some(field)) = multipart.next_field().await {
while let Some(field) = multipart.next_field().await.map_err(|error| {
tracing::warn!(%error, "files API: failed to parse multipart field");
StatusCode::BAD_REQUEST
})? {
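A standalone sketch of the failure mode (no axum involved; `drain_ok` is a hypothetical stand-in for the upload loop, with `Multipart::next_field`'s `Result<Option<_>, _>` shape simulated by an iterator):

```rust
/// Drain a fallible field stream the way the reviewed loop does: the
/// `while let Ok(Some(..))` pattern exits on the first Err and drops it.
fn drain_ok<T>(results: Vec<Result<Option<T>, &'static str>>) -> Vec<T> {
    let mut iter = results.into_iter();
    let mut next_field = move || iter.next().unwrap_or(Ok(None));
    let mut uploaded = Vec::new();
    while let Ok(Some(field)) = next_field() {
        uploaded.push(field);
    }
    uploaded
}

fn main() {
    let fields = vec![
        Ok(Some("a.txt")),
        Ok(Some("b.txt")),
        Err("bad multipart boundary"),
        Ok(Some("c.txt")),
    ];
    // The error and everything after it vanish; the caller still sees "success".
    assert_eq!(drain_ok(fields), vec!["a.txt", "b.txt"]);
}
```

Mapping the error to `StatusCode::BAD_REQUEST` and using `?`, as suggested, surfaces the parse failure instead of swallowing it.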

```rust
    Ok(Json(FileWriteResponse { success: true }))
}

/// Delete a file or empty directory within the workspace.
```

**Contributor:** Minor doc mismatch: this deletes directories recursively (`remove_dir_all`), not just empty directories.

Suggested change:

```rust
/// Delete a file or directory within the workspace (recursive for directories).
```

```tsx
    </span>
)}
<div className="flex items-center gap-1">
    {!editMode && !isProtectedIdentityFile(
```

**Contributor:** This treats any file named `user.md`/etc. as protected even in subdirectories, because it always passes `""` as the current path. Using the selected file's directory keeps protection scoped to the workspace root.

Suggested change:

```tsx
{!editMode &&
    !isProtectedIdentityFile(
        selectedFile.includes("/")
            ? selectedFile.slice(0, selectedFile.lastIndexOf("/"))
            : "",
        selectedFile.split("/").pop() ?? "",
    ) && (
```

```rust
    let truncated = file_size > MAX_READ_BYTES;
    let read_size = file_size.min(MAX_READ_BYTES) as usize;

    let raw = tokio::fs::read(&path).await.map_err(|error| {
```

**Contributor:** Even though the response truncates to 1 MiB, `tokio::fs::read` still reads the entire file into memory first. That makes this endpoint an easy DoS vector for large workspace files.

Suggested change:

```rust
let mut file = tokio::fs::File::open(&path).await.map_err(|error| {
    tracing::warn!(%error, path = %path.display(), "files API: failed to open file");
    StatusCode::INTERNAL_SERVER_ERROR
})?;
let mut raw = vec![0u8; read_size];
let bytes_read = tokio::io::AsyncReadExt::read(&mut file, &mut raw)
    .await
    .map_err(|error| {
        tracing::warn!(%error, path = %path.display(), "files API: failed to read file");
        StatusCode::INTERNAL_SERVER_ERROR
    })?;
raw.truncate(bytes_read);
// Convert to string (lossy for binary files).
let content = String::from_utf8_lossy(&raw).to_string();
```

```rust
    .and_then(|n| n.to_str())
    .unwrap_or("download");

    let content_disposition = format!("attachment; filename=\"{}\"", filename);
```

**Contributor:** Two things here:

- `tokio::fs::read` loads the whole file into memory before sending it; streaming would avoid big allocations for large downloads.
- `filename` can contain `"` or CR/LF characters on Unix filesystems, which can break the `Content-Disposition` formatting.

Suggested change:

```rust
let safe_filename = filename.replace('"', "_").replace('\r', "").replace('\n', "");
let content_disposition = format!("attachment; filename=\"{}\"", safe_filename);
```
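A quick standalone check of that sanitization (`sanitize_filename` is a hypothetical helper mirroring the suggested change):

```rust
/// Hypothetical sanitizer matching the suggested change: strip quotes and
/// CR/LF so a hostile Unix filename cannot break out of the quoted value
/// or inject header lines.
fn sanitize_filename(filename: &str) -> String {
    filename.replace('"', "_").replace('\r', "").replace('\n', "")
}

fn main() {
    let hostile = "evil\"\r\nX-Injected: 1";
    let header = format!("attachment; filename=\"{}\"", sanitize_filename(hostile));
    // The quote becomes '_' and the CR/LF pair is removed outright.
    assert_eq!(header, "attachment; filename=\"evil_X-Injected: 1\"");
}
```

For full RFC 6266 compliance (non-ASCII names) a `filename*=UTF-8''…` parameter would be the next step, but the replace-based sanitizer already closes the injection hole.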

**Contributor** (on lines +282 to +289):

```rust
let raw = tokio::fs::read(&path).await.map_err(|error| {
    tracing::warn!(%error, path = %path.display(), "files API: failed to read file");
    StatusCode::INTERNAL_SERVER_ERROR
})?;

// Take only up to the read limit and convert to string (lossy for binary files).
let slice = &raw[..read_size.min(raw.len())];
let content = String::from_utf8_lossy(slice).to_string();
```

The suggestion was meant to replace this entire `tokio::fs::read` + slice-to-string block, so we only read up to `MAX_READ_BYTES`.

Suggested change:

```rust
let mut file = tokio::fs::File::open(&path).await.map_err(|error| {
    tracing::warn!(%error, path = %path.display(), "files API: failed to open file");
    StatusCode::INTERNAL_SERVER_ERROR
})?;
let mut raw = vec![0u8; read_size];
let bytes_read = tokio::io::AsyncReadExt::read(&mut file, &mut raw)
    .await
    .map_err(|error| {
        tracing::warn!(%error, path = %path.display(), "files API: failed to read file");
        StatusCode::INTERNAL_SERVER_ERROR
    })?;
raw.truncate(bytes_read);
// Convert to string (lossy for binary files).
let content = String::from_utf8_lossy(&raw).to_string();
```
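The same cap can also be expressed with a `take` adapter, which avoids the short-read subtlety of a single `read` call returning fewer bytes than the buffer holds. A blocking `std::io` sketch (`read_capped` is a hypothetical helper; the async equivalent would use `tokio::io::AsyncReadExt::take`):

```rust
use std::io::Read;

/// Read at most `limit` bytes from `src`. `take` caps the reader itself, so
/// a multi-gigabyte file never ends up fully in memory, and `read_to_end`
/// loops internally so short reads don't truncate the result early.
fn read_capped<R: Read>(src: R, limit: u64) -> std::io::Result<Vec<u8>> {
    let mut raw = Vec::new();
    src.take(limit).read_to_end(&mut raw)?;
    Ok(raw)
}

fn main() -> std::io::Result<()> {
    const MAX_READ_BYTES: u64 = 1024 * 1024; // the endpoint's 1 MiB cap
    let big = vec![b'x'; 3 * 1024 * 1024]; // stand-in for a large workspace file
    let raw = read_capped(big.as_slice(), MAX_READ_BYTES)?;
    // Only the capped prefix was ever buffered.
    assert_eq!(raw.len(), MAX_READ_BYTES as usize);
    Ok(())
}
```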
