Replace the Ingest tab with a unified Files tab that provides full filesystem browsing, preview, editing, and CRUD operations within the agent's workspace directory.

**Backend (`src/api/files.rs`):**
- 7 new API endpoints: list, read, download, write, delete, rename, upload
- Path resolution reuses `best_effort_canonicalize` from `tools/file.rs`
- Workspace boundary enforcement with symlink rejection
- Protected root dirs (`ingest/`, `skills/`) and identity files (`SOUL.md`, etc.)
- Upload to `ingest/` auto-creates ingestion tracking records

**Frontend (`interface/src/routes/AgentFiles.tsx`):**
- Breadcrumb navigation with clickable path segments
- Directory listing sorted dirs-first with hover actions
- Text file preview panel with inline editing and save
- New file/folder creation dialog, inline rename, delete
- File upload to current directory via button or file picker
- Ingest progress status bar when browsing the `ingest/` directory
- URL state via `?path=` search param for bookmarkable paths
**Walkthrough**

This pull request adds a comprehensive filesystem API for agent workspace exploration. The backend introduces file operations (list, read, download, write, delete, rename, upload) with path validation and security protections. The frontend adds corresponding client methods, types, and a file manager UI component for navigating and managing workspace files.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
⚠️ Warning
There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it is a critical failure.
🔧 ast-grep (0.41.0): `src/api/server.rs`
Actionable comments posted: 6
🧹 Nitpick comments (1)
interface/src/routes/AgentFiles.tsx (1)
216-285: Consider surfacing mutation failures in the UI. Write/delete/rename/upload/save failures currently have no user-visible feedback, so permission or server errors appear as silent no-ops.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/routes/AgentFiles.tsx` around lines 216 - 285, Add onError handlers to the file-related mutations (writeMutation, deleteMutation, renameMutation, uploadMutation, saveFileMutation) so failures are surfaced to the user: for each mutation, implement an onError callback that receives the error and displays a user-visible notification (e.g., call a shared notify/showToast function or set an error state) with a clear message that includes the operation name and error details, and optionally keep existing onSuccess behavior (invalidate queries, reset state) unchanged; ensure the notification helper is imported/used consistently so permission/server errors no longer fail silently.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- interface/src/api/client.ts
- interface/src/components/AgentTabs.tsx
- interface/src/router.tsx
- interface/src/routes/AgentFiles.tsx
- src/api.rs
- src/api/files.rs
- src/api/server.rs
- src/tools/file.rs
```tsx
{!editMode && !isProtectedIdentityFile(
  "",
  selectedFile.split("/").pop() ?? "",
) && (
```
Identity edit lock check is too broad in preview.
This blocks editing any nested file named SOUL.md/IDENTITY.md etc., while backend protection is root-only.
✏️ Proposed fix

```diff
- {!editMode && !isProtectedIdentityFile(
-   "",
-   selectedFile.split("/").pop() ?? "",
- ) && (
+ {!editMode && !isProtectedIdentityFile(
+   selectedFile.includes("/")
+     ? selectedFile.substring(0, selectedFile.lastIndexOf("/"))
+     : "",
+   selectedFile.split("/").pop() ?? "",
+ ) && (
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/routes/AgentFiles.tsx` around lines 697 - 700, The preview lock
uses isProtectedIdentityFile("", selectedFile.split("/").pop() ?? "") which
passes an empty directory so any nested file with a protected basename (e.g.,
SOUL.md / IDENTITY.md) is blocked; change the call to provide the actual
directory (or full path) so protection only applies to root files. Locate where
editMode and selectedFile are used (the JSX block calling
isProtectedIdentityFile) and replace the empty-string first arg with the file's
directory or full path derived from selectedFile (e.g., compute dir =
selectedFile.includes("/") ? selectedFile.substring(0,
selectedFile.lastIndexOf("/")) : "" or pass selectedFile itself) so
isProtectedIdentityFile can enforce root-only protection.
```rust
let file_size = metadata.len();
let truncated = file_size > MAX_READ_BYTES;
let read_size = file_size.min(MAX_READ_BYTES) as usize;

let raw = tokio::fs::read(&path).await.map_err(|error| {
    tracing::warn!(%error, path = %path.display(), "files API: failed to read file");
    StatusCode::INTERNAL_SERVER_ERROR
})?;

// Take only up to the read limit and convert to string (lossy for binary files).
let slice = &raw[..read_size.min(raw.len())];
let content = String::from_utf8_lossy(slice).to_string();
```
Preview reads still load the entire file into memory.
tokio::fs::read reads full contents before truncation, so large files can still cause memory spikes despite MAX_READ_BYTES.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/files.rs` around lines 278 - 289, The current preview still reads
whole file into memory because tokio::fs::read is used; instead open the file
and read only up to MAX_READ_BYTES into a preallocated buffer. Replace the
tokio::fs::read(&path) call that produces raw with opening
tokio::fs::File::open(&path).await and using tokio::io::AsyncReadExt methods
(e.g., read into a Vec<u8> with capacity read_size or read_exact up to
read_size) to fill only read_size bytes, then use that buffer for slice/content
conversion; keep usage of file_size, truncated, read_size, slice, content and
the existing error logging around path and StatusCode.
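A synchronous sketch of the bounded-read idea, using std only; `read_capped` is a hypothetical helper name, and the async handler would do the same with `AsyncReadExt`. `Read::take` guarantees the buffer never grows past the cap, regardless of the on-disk size:

```rust
use std::io::Read;

const MAX_READ_BYTES: u64 = 1024 * 1024;

/// Read at most `MAX_READ_BYTES` from `reader` without ever allocating
/// more than the cap, mirroring the bounded-preview suggestion.
fn read_capped<R: Read>(reader: R, file_size: u64) -> std::io::Result<(Vec<u8>, bool)> {
    let truncated = file_size > MAX_READ_BYTES;
    let read_size = file_size.min(MAX_READ_BYTES) as usize;
    let mut buf = Vec::with_capacity(read_size);
    // `take` caps the reader itself, so even a hostile size can't blow memory.
    reader.take(MAX_READ_BYTES).read_to_end(&mut buf)?;
    Ok((buf, truncated))
}
```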
```rust
let data = tokio::fs::read(&path).await.map_err(|error| {
    tracing::warn!(%error, path = %path.display(), "files API: failed to read file for download");
    StatusCode::INTERNAL_SERVER_ERROR
})?;
```
Download handler buffers full file in RAM.
Using tokio::fs::read here scales poorly for large downloads and can exhaust memory under concurrent requests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/files.rs` around lines 310 - 313, The handler currently uses
tokio::fs::read to load the entire file into memory (see tokio::fs::read, path,
data) which can OOM on large files; change to open the file with
tokio::fs::File::open(&path) and stream it to the response using a ReaderStream
(tokio_util::io::ReaderStream) wrapped in your HTTP body type (e.g.,
axum::body::StreamBody or hyper::Body::wrap_stream) so bytes are sent
incrementally; handle/open errors similarly (log via tracing::warn!(%error, path
= %path.display(), ...)) and return StatusCode::INTERNAL_SERVER_ERROR on
failure, but do not allocate the whole file in memory.
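For context, the bounded-memory behavior a `ReaderStream` body gives can be shown with a std-only chunked copy; `stream_copy` and the 8 KiB buffer are illustrative choices, not the crate's internals:

```rust
use std::io::{Read, Write};

/// Copy `reader` to `writer` in fixed-size chunks so peak memory stays
/// bounded regardless of file size — the shape a streaming body has.
fn stream_copy<R: Read, W: Write>(mut reader: R, mut writer: W) -> std::io::Result<u64> {
    let mut buf = [0u8; 8192];
    let mut total = 0u64;
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        writer.write_all(&buf[..n])?;
        total += n as u64;
    }
    Ok(total)
}
```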
```rust
// Prevent renaming protected root directories.
if is_protected_root_dir(&workspace, &old_path) {
    tracing::warn!(
        path = %request.old_path,
        "files API: rename rejected for protected root directory"
    );
    return Err(StatusCode::FORBIDDEN);
}
if is_protected_identity_file(&workspace, &old_path) {
    tracing::warn!(
        path = %request.old_path,
        "files API: rename rejected for protected identity file"
    );
    return Err(StatusCode::FORBIDDEN);
}
```
Protect rename targets, not just rename sources.
Current checks only guard old_path. A caller can rename another file to a protected identity/root path and bypass the write/delete restrictions.
🔒 Proposed fix

```diff
  if is_protected_root_dir(&workspace, &old_path) {
      tracing::warn!(
          path = %request.old_path,
          "files API: rename rejected for protected root directory"
      );
      return Err(StatusCode::FORBIDDEN);
  }
  if is_protected_identity_file(&workspace, &old_path) {
      tracing::warn!(
          path = %request.old_path,
          "files API: rename rejected for protected identity file"
      );
      return Err(StatusCode::FORBIDDEN);
  }
+ if is_protected_root_dir(&workspace, &new_path) {
+     tracing::warn!(
+         path = %request.new_path,
+         "files API: rename target is a protected root directory"
+     );
+     return Err(StatusCode::FORBIDDEN);
+ }
+ if is_protected_identity_file(&workspace, &new_path) {
+     tracing::warn!(
+         path = %request.new_path,
+         "files API: rename target is a protected identity file"
+     );
+     return Err(StatusCode::FORBIDDEN);
+ }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/files.rs` around lines 435 - 449, The rename handler currently only
checks the source path (old_path) with is_protected_root_dir and
is_protected_identity_file; add the same protection checks for the destination
path (request.new_path) so callers cannot rename another file into a protected
root or identity file. After the existing old_path checks, call
is_protected_root_dir(&workspace, &new_path) and
is_protected_identity_file(&workspace, &new_path) (using the same
types/normalization as old_path) and return Err(StatusCode::FORBIDDEN) with a
tracing::warn that includes path = %request.new_path; keep the same log
messages/format and error code to preserve behavior.
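The symmetric source/destination rule is easy to unit-test in isolation. A minimal sketch, assuming root-only protection and a hypothetical combined predicate `is_protected` (the protected-name lists mirror the PR description, not the real helpers):

```rust
use std::path::Path;

const PROTECTED_ROOT_DIRS: [&str; 2] = ["ingest", "skills"];
const PROTECTED_IDENTITY_FILES: [&str; 4] = ["SOUL.md", "IDENTITY.md", "USER.md", "ROLE.md"];

/// True when `path` names a protected entry directly under the workspace
/// root — the same predicate must run for both rename source and target.
fn is_protected(workspace: &Path, path: &Path) -> bool {
    let Some(name) = path.file_name().and_then(|n| n.to_str()) else {
        return false;
    };
    if path.parent() != Some(workspace) {
        return false; // protection is root-only
    }
    PROTECTED_ROOT_DIRS.contains(&name) || PROTECTED_IDENTITY_FILES.contains(&name)
}
```

Calling this once for `old_path` and once for `new_path` closes the rename-into-protected-target hole the comment describes.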
```rust
let mut uploaded = Vec::new();

while let Ok(Some(field)) = multipart.next_field().await {
```
Multipart parse errors are currently swallowed.
while let Ok(Some(field)) = ... exits silently on the first parse error and can return success for partial uploads.
🧩 Proposed fix

```diff
- while let Ok(Some(field)) = multipart.next_field().await {
+ while let Some(field) = multipart.next_field().await.map_err(|error| {
+     tracing::warn!(%error, "files API: failed to parse multipart upload");
+     StatusCode::BAD_REQUEST
+ })? {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/files.rs` at line 505, The loop using "while let Ok(Some(field)) =
multipart.next_field().await" swallows parse errors and can treat a partial
upload as success; change it to explicitly match on multipart.next_field().await
(e.g., match multipart.next_field().await { Ok(Some(field)) => ..., Ok(None) =>
break, Err(e) => return Err(...) }) so that Err from
multipart.next_field().await is propagated or converted into an appropriate
error response/early return instead of silently breaking, and ensure any
cleanup/rollback logic runs when an Err is encountered.
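The control-flow difference is easy to demonstrate without axum. Here `drain` is a hypothetical synchronous stand-in for the field loop, with the `next` closure playing the role of `multipart.next_field()`:

```rust
/// Stand-in for the multipart field loop: `next` yields Ok(Some(field)),
/// Ok(None) at end of stream, or Err on a parse failure.
fn drain<I>(mut next: I) -> Result<Vec<String>, String>
where
    I: FnMut() -> Result<Option<String>, String>,
{
    let mut uploaded = Vec::new();
    loop {
        match next() {
            Ok(Some(field)) => uploaded.push(field),
            Ok(None) => break,       // clean end of stream
            Err(e) => return Err(e), // propagate instead of silently stopping
        }
    }
    Ok(uploaded)
}
```

With `while let Ok(Some(..))` the first `Err` would fall out of the loop and the partial `uploaded` list would be returned as success; the explicit `match` surfaces it.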
```rust
let mut target = target_dir.join(safe_name);

// Deduplicate filenames if the target already exists.
if target.exists() {
    let stem = Path::new(safe_name)
        .file_stem()
        .and_then(|s| s.to_str())
        .unwrap_or("upload");
    let ext = Path::new(safe_name)
        .extension()
        .and_then(|e| e.to_str())
        .unwrap_or("txt");
    let unique = format!(
        "{}-{}.{}",
        stem,
        &uuid::Uuid::new_v4().to_string()[..8],
        ext
    );
    target = target_dir.join(unique);
}
```
Ingest upload metadata can be incorrect and DB failures are ignored.
When deduplication changes the stored filename, the API/DB still record safe_name, not the actual saved name. Also, SQL errors are dropped, so clients can get success while tracking failed.
🛠️ Proposed fix

```diff
  let mut target = target_dir.join(safe_name);
  // Deduplicate filenames if the target already exists.
  if target.exists() {
  @@
      target = target_dir.join(unique);
  }
+ let stored_name = target
+     .file_name()
+     .and_then(|n| n.to_str())
+     .unwrap_or(safe_name)
+     .to_string();
  tokio::fs::write(&target, &data).await.map_err(|error| {
      tracing::warn!(%error, path = %target.display(), "files API: failed to write uploaded file");
      StatusCode::INTERNAL_SERVER_ERROR
  })?;
  @@
- let _ = sqlx::query(
+ sqlx::query(
      r#"
      INSERT OR IGNORE INTO ingestion_files (content_hash, filename, file_size, total_chunks, status)
      VALUES (?, ?, ?, 0, 'queued')
      "#,
  )
  .bind(&hash)
- .bind(safe_name)
+ .bind(&stored_name)
  .bind(file_size)
  .execute(pool)
- .await;
+ .await
+ .map_err(|error| {
+     tracing::warn!(%error, filename = %stored_name, "files API: failed to create ingestion tracking record");
+     StatusCode::INTERNAL_SERVER_ERROR
+ })?;
  @@
- uploaded.push(safe_name.to_string());
+ uploaded.push(stored_name);
```

As per coding guidelines: "Don't silently discard errors; use `let _ =` only on channel sends where the receiver may be dropped; handle, log, or propagate all other errors."
Also applies to: 558-568, 581-581
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/files.rs` around lines 525 - 544, The deduplication branch builds a
new filesystem name (unique) and sets target but the DB/metadata still uses
safe_name and SQL errors are being ignored; update the upload flow in
src/api/files.rs so the filename saved to the DB/metadata is the actual on-disk
name (derive it from target.file_name()/unique after target =
target_dir.join(...)) instead of safe_name, and replace any discarded SQL
results (e.g., where `let _ =` or `ignore` is used around DB calls) with proper
error handling: check the Result, log failures via the existing logger, and
return/propagate an appropriate error to the client on DB failure. Locate the
deduplication variables (safe_name, unique, target) and the subsequent DB
insert/update calls to change the persisted name and remove silent error drops
(also apply the same fixes around the other occurrences mentioned near the
second block — lines handling the saved filename and SQL calls).
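Deriving the persisted name from the final on-disk target is the core of the fix and worth pinning down; `stored_name` is a hypothetical helper mirroring the proposed diff:

```rust
use std::path::{Path, PathBuf};

/// Derive the filename that should be persisted to the DB from the final
/// on-disk target, falling back to the sanitized upload name.
fn stored_name(target: &Path, safe_name: &str) -> String {
    target
        .file_name()
        .and_then(|n| n.to_str())
        .unwrap_or(safe_name)
        .to_string()
}
```

Computing it after the deduplication branch means the DB record always matches what the filesystem actually holds.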
```rust
/// Resolve a user-supplied path relative to the workspace, ensuring it stays
/// within the workspace boundary. Rejects symlinks and `..` traversal.
fn resolve_workspace_path(workspace: &Path, raw: &str) -> Result<PathBuf, StatusCode> {
    let cleaned = raw.trim_start_matches('/');
```
`best_effort_canonicalize` preserves `..` components for non-existent targets, so `write_file`/`rename_file` can escape the workspace boundary (e.g. `path=../outside.txt`). I'd reject `..` and `.` components up-front, before joining/canonicalizing.
```diff
  let cleaned = raw.trim_start_matches('/');
+ if Path::new(cleaned)
+     .components()
+     .any(|c| matches!(c, std::path::Component::ParentDir | std::path::Component::CurDir))
+ {
+     tracing::warn!(path = %raw, "files API: path traversal rejected");
+     return Err(StatusCode::FORBIDDEN);
+ }
```
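The component check can be exercised on its own; `contains_traversal` is a hypothetical helper name, but the `Component` matching is the same as in the suggestion. Note that Rust's `Path::components()` normalizes away interior `.` segments, so only leading `.` and any `..` are observable:

```rust
use std::path::{Component, Path};

/// True if the (already slash-trimmed) relative path contains `..` or a
/// leading `.` component that could dodge a canonicalized base check.
fn contains_traversal(cleaned: &str) -> bool {
    Path::new(cleaned)
        .components()
        .any(|c| matches!(c, Component::ParentDir | Component::CurDir))
}
```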
```rust
) -> Result<Json<FileRenameResponse>, StatusCode> {
    let workspace = get_workspace(&state, &request.agent_id)?;
    let old_path = resolve_workspace_path(&workspace, &request.old_path)?;
    let new_path = resolve_workspace_path(&workspace, &request.new_path)?;
```
Small but important: this blocks renaming from protected paths, but still allows renaming into a protected root dir / identity filename (and potentially overwriting). I'd also guard the destination.
```diff
  let new_path = resolve_workspace_path(&workspace, &request.new_path)?;
+ // Prevent renaming into protected root directories / identity files.
+ if is_protected_root_dir(&workspace, &new_path)
+     || is_protected_identity_file(&workspace, &new_path)
+ {
+     tracing::warn!(
+         path = %request.new_path,
+         "files API: rename rejected for protected destination"
+     );
+     return Err(StatusCode::FORBIDDEN);
+ }
```
```rust
let mut uploaded = Vec::new();

while let Ok(Some(field)) = multipart.next_field().await {
```
while let Ok(..) will silently stop consuming fields on the first multipart error and still return 200. I'd propagate the parse error as 400.
```diff
- while let Ok(Some(field)) = multipart.next_field().await {
+ while let Some(field) = multipart.next_field().await.map_err(|error| {
+     tracing::warn!(%error, "files API: failed to parse multipart field");
+     StatusCode::BAD_REQUEST
+ })? {
```
```rust
    Ok(Json(FileWriteResponse { success: true }))
}

/// Delete a file or empty directory within the workspace.
```
Minor doc mismatch: this deletes directories recursively (remove_dir_all), not just empty directories.
```diff
- /// Delete a file or empty directory within the workspace.
+ /// Delete a file or directory within the workspace (recursive for directories).
```
```tsx
    </span>
  )}
  <div className="flex items-center gap-1">
    {!editMode && !isProtectedIdentityFile(
```
This treats any file named user.md/etc as protected even in subdirectories, because it always passes "" as the current path. Using the selected file’s directory keeps protection scoped to workspace root.
```diff
- {!editMode && !isProtectedIdentityFile(
+ {!editMode &&
+   !isProtectedIdentityFile(
+     selectedFile.includes("/")
+       ? selectedFile.slice(0, selectedFile.lastIndexOf("/"))
+       : "",
+     selectedFile.split("/").pop() ?? "",
+   ) && (
```
```rust
let truncated = file_size > MAX_READ_BYTES;
let read_size = file_size.min(MAX_READ_BYTES) as usize;

let raw = tokio::fs::read(&path).await.map_err(|error| {
```
Even though the response truncates to 1 MiB, tokio::fs::read still reads the entire file into memory first. That makes this endpoint an easy DoS vector for large workspace files.
```diff
- let raw = tokio::fs::read(&path).await.map_err(|error| {
+ let mut file = tokio::fs::File::open(&path).await.map_err(|error| {
+     tracing::warn!(%error, path = %path.display(), "files API: failed to open file");
+     StatusCode::INTERNAL_SERVER_ERROR
+ })?;
+ let mut raw = vec![0u8; read_size];
+ let bytes_read = tokio::io::AsyncReadExt::read(&mut file, &mut raw)
+     .await
+     .map_err(|error| {
+         tracing::warn!(%error, path = %path.display(), "files API: failed to read file");
+         StatusCode::INTERNAL_SERVER_ERROR
+     })?;
+ raw.truncate(bytes_read);
+ // Convert to string (lossy for binary files).
+ let content = String::from_utf8_lossy(&raw).to_string();
```
```rust
    .and_then(|n| n.to_str())
    .unwrap_or("download");

let content_disposition = format!("attachment; filename=\"{}\"", filename);
```
Two things here:
- `tokio::fs::read` loads the whole file into memory before sending it; streaming would avoid big allocations for large downloads.
- `filename` can contain `"` or CR/LF characters on Unix filesystems, which can break `Content-Disposition` formatting.
```diff
- let content_disposition = format!("attachment; filename=\"{}\"", filename);
+ let safe_filename = filename.replace('"', "_").replace('\r', "").replace('\n', "");
+ let content_disposition = format!("attachment; filename=\"{}\"", safe_filename);
```
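The sanitization is simple enough to pin down with a test; `sanitize_disposition_filename` is a hypothetical helper following the replacement rules proposed above:

```rust
/// Strip characters that break a quoted Content-Disposition filename:
/// double quotes become underscores, CR/LF are removed entirely.
fn sanitize_disposition_filename(filename: &str) -> String {
    filename.replace('"', "_").replace('\r', "").replace('\n', "")
}
```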
```rust
let raw = tokio::fs::read(&path).await.map_err(|error| {
    tracing::warn!(%error, path = %path.display(), "files API: failed to read file");
    StatusCode::INTERNAL_SERVER_ERROR
})?;

// Take only up to the read limit and convert to string (lossy for binary files).
let slice = &raw[..read_size.min(raw.len())];
let content = String::from_utf8_lossy(slice).to_string();
```
Suggestion meant to replace the entire tokio::fs::read + slice-to-string block so we only read up to MAX_READ_BYTES.
```diff
- let raw = tokio::fs::read(&path).await.map_err(|error| {
-     tracing::warn!(%error, path = %path.display(), "files API: failed to read file");
-     StatusCode::INTERNAL_SERVER_ERROR
- })?;
-
- // Take only up to the read limit and convert to string (lossy for binary files).
- let slice = &raw[..read_size.min(raw.len())];
- let content = String::from_utf8_lossy(slice).to_string();
+ let mut file = tokio::fs::File::open(&path).await.map_err(|error| {
+     tracing::warn!(%error, path = %path.display(), "files API: failed to open file");
+     StatusCode::INTERNAL_SERVER_ERROR
+ })?;
+ let mut raw = vec![0u8; read_size];
+ let bytes_read = tokio::io::AsyncReadExt::read(&mut file, &mut raw)
+     .await
+     .map_err(|error| {
+         tracing::warn!(%error, path = %path.display(), "files API: failed to read file");
+         StatusCode::INTERNAL_SERVER_ERROR
+     })?;
+ raw.truncate(bytes_read);
+ // Convert to string (lossy for binary files).
+ let content = String::from_utf8_lossy(&raw).to_string();
```
Summary
- New API endpoints (`/agents/files/*`) for list, read, download, write, delete, rename, and upload operations with workspace boundary enforcement

Backend
New module `src/api/files.rs` with 7 endpoints:
- `/agents/files/list`
- `/agents/files/read`
- `/agents/files/download`
- `/agents/files/write`
- `/agents/files/delete`
- `/agents/files/rename`
- `/agents/files/upload`

Security:
- Path resolution reuses `best_effort_canonicalize` from `tools/file.rs` (made `pub(crate)`)
- Protected root dirs (`ingest/`, `skills/`) cannot be renamed or deleted
- Identity files (`SOUL.md`, `IDENTITY.md`, `USER.md`, `ROLE.md`) are read-only via this API
- Uploads to `ingest/` auto-create ingestion tracking records for the pipeline
Frontend

New component `AgentFiles.tsx` replacing `AgentIngest.tsx` in the tab bar:
- URL state via the `?path=` search param
- When browsing `ingest/`, reuses the existing ingest progress query

Files Changed
- `src/api/files.rs` — new module
- `src/api.rs` — add `mod files`
- `src/api/server.rs` — import `files` module, wire 7 routes
- `src/tools/file.rs` — `best_effort_canonicalize` → `pub(crate)`
- `interface/src/routes/AgentFiles.tsx` — new component
- `interface/src/api/client.ts` — new client methods and types
- `interface/src/components/AgentTabs.tsx` — Files tab replaces Ingest tab
- `interface/src/router.tsx` — new route

Note
This change introduces a comprehensive workspace filesystem explorer that replaces the ingest-only tab with a full-featured file browser. Backend adds 7 secure API endpoints with strict path validation and workspace boundary enforcement, while the frontend delivers a polished explorer with breadcrumb navigation, inline text editing, and file operations. Protected system files and directories are made read-only at the API layer, and ingest uploads automatically integrate with the existing ingestion pipeline. Total additions: 1,648 lines across backend (586) and frontend (948), with minimal deletions.
Written by Tembo for commit 451674d. This will update automatically on new commits.