…lobal bulletin

Adds living context documents (topics) that are addressable, per-channel, injectable into workers, and maintained by the cortex sync loop.

- Data layer: `topics` + `topic_versions` tables, `TopicStore` with full CRUD, versioning, channel assignment, and staleness tracking.
- Synthesis: topic sync loop runs alongside existing cortex loops, gathers memories matching configurable criteria (query, types, importance, pinned memories), synthesizes via LLM, stores versioned snapshots. Stale detection based on newest matching memory timestamp.
- Delivery: channels compose channel-specific topics (falling back to global bulletin), workers accept a `topic_ids` param for context injection, cortex chat uses all active topics, task workers read `topic_ids` from task metadata.
- API: full CRUD + sync trigger + version history endpoints.
- Interface: topics tab with card grid browser, create dialog with criteria form + channel assignments, detail dialog with content/config/versions tabs, pin memory search, manual sync trigger.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
Walkthrough: Adds a topic-based context subsystem: DB schema and `TopicStore`, API and client types, frontend Topic management UI, background topic sync/synthesis loop with an LLM prompt template, prompt/worker topic injection, startup/config wiring, and tests plus a design doc for topic lifecycle and delivery.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning).
```rust
pin_ids: parse_json_string_array(&pin_ids_value),
channel_ids: parse_json_string_array(&channel_ids_value),
status,
max_words: row
```
`max_words` is read as `i64` and cast to `usize` (`as usize`). If the DB ever contains a negative value, this wraps to a huge number. Safer to `try_into()` (or clamp) here.
```rust
max_words: row
    .try_get::<i64, _>("max_words")
    .ok()
    .and_then(|value| value.try_into().ok())
    .unwrap_or(1500),
```
```rust
impl TopicCriteria {
    /// Build a `SearchConfig` from criteria fields.
    pub fn to_search_config(&self) -> SearchConfig {
```
`TopicCriteria` has fields like `min_importance`, `channel_ids`, and `max_age`, but `to_search_config()` currently drops them on the floor (and `memory_types` only uses the first entry). If the API/UI exposes these, it's worth either validating/rejecting unsupported combinations or plumbing the filters through so users don't silently get different results than requested.
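The "plumb the filters through" option can be sketched concretely. This is a minimal illustration with simplified stand-in types; the real `TopicCriteria` and `SearchConfig` in the repo have more fields and different shapes, so treat the definitions below as assumptions:

```rust
// Simplified stand-ins for the repo's types, for illustration only.
#[derive(Default, Clone)]
struct TopicCriteria {
    query: Option<String>,
    memory_types: Vec<String>,
    min_importance: Option<f32>,
    max_memories: usize,
}

#[derive(Default, Debug)]
struct SearchConfig {
    memory_types: Vec<String>,
    min_importance: Option<f32>,
    max_results: usize,
}

impl TopicCriteria {
    /// Carry every criteria field into the search config instead of
    /// dropping some, and pass the full type list rather than only the
    /// first entry.
    fn to_search_config(&self) -> SearchConfig {
        SearchConfig {
            memory_types: self.memory_types.clone(),
            min_importance: self.min_importance,
            max_results: self.max_memories,
        }
    }
}
```

The point is only that each criteria field should have an explicit destination (or an explicit rejection), so the mapping is auditable.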
```rust
tracing::debug!("cortex received signal, buffer size: {}", buffer.len());
}

// When a memory is saved, mark all active topics as stale so the
```
Marking all topics stale on every `MemorySaved` looks like it'll drive a lot of unnecessary work: `mark_all_stale()` clears `last_synced_at`, and `sync_one_topic()` treats `last_synced_at.is_none()` as stale even when no matching memories exist. That means frequent memory writes can trigger full re-synthesis across every topic on the next pass.

Maybe better to just rely on `last_memory_at` comparisons (or track a separate "dirty" bit / `last_checked_at`) so a memory write doesn't automatically force an LLM call for unrelated topics.
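The timestamp-based alternative can be sketched as a pure decision function. Field names here are assumed (`last_synced_at` / `last_memory_at` as unix-second timestamps), not the repo's actual schema:

```rust
// Hypothetical per-topic sync state, illustrative only.
#[derive(Clone, Copy)]
struct TopicState {
    last_synced_at: Option<i64>, // when the topic was last synthesized
    last_memory_at: Option<i64>, // newest matching memory seen at that sync
}

/// A topic needs resynthesis only when a matching memory is newer than the
/// snapshot it was last built from; a global "mark everything stale" flag
/// is never consulted, so unrelated memory writes cost nothing.
fn needs_sync(topic: TopicState, newest_matching_memory_at: Option<i64>) -> bool {
    match (topic.last_synced_at, newest_matching_memory_at) {
        (None, Some(_)) => true,   // never synced, but has material
        (None, None) => false,     // never synced, nothing to synthesize
        (Some(_), None) => false,  // no matching memories at all
        (Some(_), Some(newest)) => newest > topic.last_memory_at.unwrap_or(0),
    }
}
```

Under this shape, `MemorySaved` only has to record the memory's timestamp; the sync loop decides per topic whether an LLM call is warranted.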
```rust
let search_config = topic.criteria.to_search_config();
let query = topic.criteria.query.as_deref().unwrap_or("");

// Staleness check: query newest memory matching criteria and compare to last_memory_at.
```
Staleness check doesn't seem to do what the comment says when `query` is present: `search_config` will typically be `SearchMode::Hybrid`, and `MemorySearch::search()` ignores `sort_by` in hybrid mode (it's relevance-ranked), so `max_results: 1` here is "top match", not "newest memory".

Also, both searches use `unwrap_or_default()`, which will silently treat search failures as "no changes" (and can mask LanceDB/SQLite issues). I'd consider propagating the error (so it lands in the existing "topic sync failed" logging) and using a recency-based query for staleness (or a dedicated mode) so `last_memory_at` comparisons are meaningful.
```rust
StatusCode::INTERNAL_SERVER_ERROR
})?;

Ok(Json(TopicResponse { topic }))
```
Minor API response correctness: `TopicStore::update()` updates `updated_at` in SQLite (`datetime('now')`), but the handler returns the pre-update struct, so `updated_at` (and any other DB-default fields) will be stale in the response.
```rust
let topic = store
    .get(&id)
    .await
    .map_err(|error| {
        tracing::warn!(%error, agent_id = %request.agent_id, topic_id = %id, "failed to get updated topic");
        StatusCode::INTERNAL_SERVER_ERROR
    })?
    .ok_or(StatusCode::NOT_FOUND)?;
Ok(Json(TopicResponse { topic }))
```
Same idea for `create_topic()` if the client cares about DB timestamps.
Actionable comments posted: 1
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/config.rs (1)
4777-4818: 🛠️ Refactor suggestion | 🟠 Major. Extract cortex merge into a dedicated resolver to prevent drift.
`CortexConfig` merge is duplicated in both defaults and agent paths. This is easy to desync whenever a new field is added (as with `topic_sync_interval_secs`). A shared `resolve_cortex(...)` helper would centralize precedence and reduce future config bugs. As per coding guidelines:
`src/config.rs`: Config resolution must follow `env > DB > default` with per-subsystem `resolve()` methods.

Also applies to: 4989-5026
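A rough sketch of what such a resolver could look like, using a deliberately reduced field set. The real `CortexConfig` carries many more fields, and the `CortexOverlay` name is invented here for illustration:

```rust
// Fully-resolved config, illustrative two-field subset.
#[derive(Clone, Copy, Debug, PartialEq)]
struct CortexConfig {
    tick_interval_secs: u64,
    topic_sync_interval_secs: u64,
}

// Optional per-agent / TOML overlay where every field may be absent.
#[derive(Clone, Copy, Default)]
struct CortexOverlay {
    tick_interval_secs: Option<u64>,
    topic_sync_interval_secs: Option<u64>,
}

impl CortexConfig {
    /// Single place that applies per-field precedence, so a newly added
    /// field is wired in exactly one spot instead of every merge site.
    fn resolve(overlay: Option<CortexOverlay>, base: CortexConfig) -> CortexConfig {
        let o = overlay.unwrap_or_default();
        CortexConfig {
            tick_interval_secs: o.tick_interval_secs.unwrap_or(base.tick_interval_secs),
            topic_sync_interval_secs: o
                .topic_sync_interval_secs
                .unwrap_or(base.topic_sync_interval_secs),
        }
    }
}
```

Both the defaults path and the agent path would then call `CortexConfig::resolve` with their respective overlay and base, keeping precedence in one place.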
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` around lines 4777 - 4818, The Cortex merge logic is duplicated and should be centralized: create a helper function (e.g., resolve_cortex or CortexConfig::resolve) that takes the TOML optional cortex block and the base_defaults.cortex and returns a fully-resolved CortexConfig applying the proper precedence (env > DB > default) and using unwrap_or for each field (tick_interval_secs, worker_timeout_secs, branch_timeout_secs, circuit_breaker_threshold, bulletin_interval_secs, bulletin_max_words, bulletin_max_turns, association_interval_secs, association_similarity_threshold, association_updates_threshold, association_max_per_pass, topic_sync_interval_secs); replace the duplicated inline map(...).unwrap_or(base_defaults.cortex) occurrences in both the defaults and agent resolution paths with calls to this helper so future fields are resolved in one place.

src/agent/channel.rs (1)
667-745: ⚠️ Potential issue | 🟠 Major. Coalesced prompt path is missing the new "Available Topics" section.
Normal turns now expose topic IDs (lines 1043-1059), but batched/coalesced turns don't. That makes `spawn_worker.topic_ids` less usable exactly in high-traffic channels where coalescing is active.

Also applies to: 1043-1059
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 667 - 745, The coalesced prompt path in build_system_prompt_with_coalesce omits the new "Available Topics" info so coalesced turns lack topic IDs (used by spawn_worker.topic_ids); fetch and expose the channel topic IDs alongside the existing channel_topics/memory_bulletin and pass them into the channel prompt rendering (i.e., add a variable derived from channel_topics like a list of topic IDs or a formatted "Available Topics" section and include it when calling prompt_engine.render_channel_prompt_with_links in build_system_prompt_with_coalesce) so the coalesced prompts match the non-coalesced path.
🟠 Major comments (20)
migrations/20260302000001_topics.sql-11-12 (1)
11-12: ⚠️ Potential issue | 🟠 Major. Add DB-level constraints for status and numeric bounds.
`status` is parsed as a strict enum in Rust, but the table currently allows arbitrary text. Also, `max_words`/`memory_count` can go negative.

Suggested migration hardening:
```diff
- status TEXT NOT NULL DEFAULT 'active',
- max_words INTEGER NOT NULL DEFAULT 1500,
+ status TEXT NOT NULL DEFAULT 'active' CHECK (status IN ('active', 'paused', 'archived')),
+ max_words INTEGER NOT NULL DEFAULT 1500 CHECK (max_words > 0),
@@
- memory_count INTEGER NOT NULL,
+ memory_count INTEGER NOT NULL CHECK (memory_count >= 0),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@migrations/20260302000001_topics.sql` around lines 11 - 12, The migration must enforce DB-level constraints: change the topics table column definitions so that status is restricted to the Rust enum values (replace the free TEXT column for status with a constrained type or a CHECK clause enforcing status IN (<exact enum variants from the Rust TopicStatus>)) and add numeric bounds for max_words and memory_count (add CHECK clauses like max_words >= 0 and memory_count >= 0) so negatives are disallowed; update the CREATE TABLE / ALTER TABLE statements that reference the status, max_words, and memory_count columns accordingly and ensure the CHECK uses the exact enum variant strings used by the Rust type (e.g., values used in TopicStatus).

src/topics/store.rs-337-343 (1)
337-343: ⚠️ Potential issue | 🟠 Major. Don't silently default corrupted JSON/topic fields.
These fallbacks mask bad DB state and return misleading topic data instead of surfacing the error.
As per coding guidelines: "Don't silently discard errors; use `let _ =` only on channel sends where the receiver may be dropped; handle, log, or propagate all other errors."

Suggested fix (propagate parse/read failures):
```diff
-fn parse_json_string_array(value: &str) -> Vec<String> {
-    serde_json::from_str(value).unwrap_or_default()
+fn parse_json_string_array(value: &str, field_name: &str) -> Result<Vec<String>> {
+    serde_json::from_str(value)
+        .with_context(|| format!("failed to parse topic field `{field_name}` as string array"))
+        .map_err(Into::into)
 }

-fn parse_criteria(value: &str) -> TopicCriteria {
-    serde_json::from_str(value).unwrap_or_default()
+fn parse_criteria(value: &str) -> Result<TopicCriteria> {
+    serde_json::from_str(value)
+        .context("failed to parse topic criteria JSON")
+        .map_err(Into::into)
 }
@@
-    let criteria_value: String = row.try_get("criteria").unwrap_or_else(|_| "{}".to_string());
-    let pin_ids_value: String = row.try_get("pin_ids").unwrap_or_else(|_| "[]".to_string());
-    let channel_ids_value: String = row
-        .try_get("channel_ids")
-        .unwrap_or_else(|_| "[]".to_string());
+    let criteria_value: String = row.try_get("criteria").context("failed to read topic criteria")?;
+    let pin_ids_value: String = row.try_get("pin_ids").context("failed to read topic pin_ids")?;
+    let channel_ids_value: String = row.try_get("channel_ids").context("failed to read topic channel_ids")?;
@@
-    criteria: parse_criteria(&criteria_value),
-    pin_ids: parse_json_string_array(&pin_ids_value),
-    channel_ids: parse_json_string_array(&channel_ids_value),
+    criteria: parse_criteria(&criteria_value)?,
+    pin_ids: parse_json_string_array(&pin_ids_value, "pin_ids")?,
+    channel_ids: parse_json_string_array(&channel_ids_value, "channel_ids")?,
```

Also applies to: 349-353
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/topics/store.rs` around lines 337 - 343, The current helpers parse_json_string_array and parse_criteria silently swallow JSON parse errors via unwrap_or_default; change them to return Result types (e.g., Result<Vec<String>, serde_json::Error> and Result<TopicCriteria, serde_json::Error>) and propagate errors to callers instead of returning defaults, updating call sites to handle or bubble up the error; apply the same change to the other parsing helpers around the same block (the similar functions at lines ~349-353) so DB/read failures are surfaced rather than masked.

src/topics/store.rs-107-113 (1)
107-113: ⚠️ Potential issue | 🟠 Major. `max_memories` is not enforced for hybrid topic searches.

In hybrid mode, search uses `max_results_per_source`, but this conversion only sets `max_results`, so query-based topics can exceed the configured cap.

Suggested fix:
```diff
 SearchConfig {
     mode,
     memory_type,
     sort_by,
     max_results: self.max_memories,
+    max_results_per_source: self.max_memories,
     ..Default::default()
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/topics/store.rs` around lines 107 - 113, The SearchConfig construction sets max_results but fails to set max_results_per_source for hybrid searches, so hybrid/topic query results can exceed self.max_memories; update the builder in src/topics/store.rs (the SearchConfig creation where mode, memory_type, sort_by, max_results are set) to also set max_results_per_source = self.max_memories when mode == SearchMode::Hybrid (or the equivalent hybrid enum variant), ensuring hybrid searches honor the configured cap.

prompts/en/cortex_topic_synthesis.md.j2-9-15 (1)
9-15: ⚠️ Potential issue | 🟠 Major. Word-limit constraint is undefined.
Line 14 references a “word limit,” but the template never states the actual number, so the model can’t reliably honor it.
Suggested prompt fix
```diff
 ## Output Format
 Write a focused document about the topic. Organize by what's most actionable or currently relevant. Use short paragraphs and clear structure.
+Target length: at most {{ max_words }} words.
@@
-- Exceed the word limit
+- Exceed {{ max_words }} words
```
Verify each finding against the current code and only fix it if needed. In `@prompts/en/cortex_topic_synthesis.md.j2` around lines 9 - 15, The prompt mentions a "word limit" but never defines it, so update the template prompts/en/cortex_topic_synthesis.md.j2 to declare an explicit word-limit parameter and surface it in the instructions: add a clear numeric constraint (or a template variable like word_limit) near the top and replace the ambiguous "Exceed the word limit" line in the "Do NOT:" section with a concrete reference to that parameter so the model can reliably enforce it; ensure the template accepts and documents the word_limit variable where the "Do NOT:" block and any guidance rely on it.

src/topics/store.rs-178-178 (1)
178: ⚠️ Potential issue | 🟠 Major. Use `try_from()` for integer conversions to prevent silent wrapping on invalid values.

SQLite stores `max_words` as `INTEGER` (`i64`) with no CHECK constraint, but the `Topic` struct uses `usize`. The unchecked casts at lines 178, 253, and 371-374 create two risks:
- Outbound (`usize` → `i64`): could wrap if the value exceeds `i64::MAX` (unlikely but possible)
- Inbound (`i64` → `usize`): could silently wrap negative values to huge unsigned integers, bypassing the fallback `unwrap_or(1500)`

Suggested fix:
```diff
-        .bind(topic.max_words as i64)
+        .bind(i64::try_from(topic.max_words).context("topic.max_words exceeds i64 range")?)
```

```diff
-            max_words: row
-                .try_get::<i64, _>("max_words")
-                .map(|value| value as usize)
-                .unwrap_or(1500),
+            max_words: {
+                let value: i64 = row.try_get("max_words").context("failed to read topic max_words")?;
+                usize::try_from(value).context("topic max_words must be non-negative")?
+            },
```

Also applies to: lines 253, 371-374
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/topics/store.rs` at line 178, The cast of topic.max_words and other integer conversions are done with unchecked "as" casts which can silently wrap; replace those with fallible conversions using std::convert::TryFrom/TryInto: where you bind a parameter (the ".bind(topic.max_words as i64)" usage) use i64::try_from(topic.max_words) and handle the Err (return an error or clamp/log), and where you convert values read from the DB (the i64 → usize conversions in the Topic::from_row / row-mapping code around lines 253 and 371-374) use usize::try_from(db_i64_value).unwrap_or(1500) or otherwise handle the Result to preserve the existing fallback, ensuring negative or overflowing DB values do not wrap into huge usize values.

src/config.rs-4814-4816 (1)
4814-4816: ⚠️ Potential issue | 🟠 Major. Guard `topic_sync_interval_secs` against `0` to avoid tight sync loops.

The new value is accepted as-is in both defaults and per-agent overrides. A `0` value can cause an effectively continuous topic sync cycle.

🛠️ Suggested hardening:
```diff
- topic_sync_interval_secs: c
-     .topic_sync_interval_secs
-     .unwrap_or(base_defaults.cortex.topic_sync_interval_secs),
+ topic_sync_interval_secs: c
+     .topic_sync_interval_secs
+     .unwrap_or(base_defaults.cortex.topic_sync_interval_secs)
+     .max(1),
```

```diff
- topic_sync_interval_secs: c
-     .topic_sync_interval_secs
-     .unwrap_or(defaults.cortex.topic_sync_interval_secs),
+ topic_sync_interval_secs: c
+     .topic_sync_interval_secs
+     .unwrap_or(defaults.cortex.topic_sync_interval_secs)
+     .max(1),
```

Also applies to: 5023-5025
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` around lines 4814 - 4816, The assignment for topic_sync_interval_secs currently accepts zero which can lead to tight sync loops; update the code where topic_sync_interval_secs is set (the expression using c.topic_sync_interval_secs.unwrap_or(base_defaults.cortex.topic_sync_interval_secs)) to guard against 0 by normalizing the value to a minimum sensible interval (e.g., treat 0 as the default or use std::cmp::max(1, value)) so both per-agent override (c.topic_sync_interval_secs) and default (base_defaults.cortex.topic_sync_interval_secs) never yield 0; apply the same safeguard to the other occurrence referenced (lines around 5023-5025) so topic_sync_interval_secs is always >= 1.

src/tools/spawn_worker.rs-118-122 (1)
118-122: ⚠️ Potential issue | 🟠 Major. `topic_ids` is documented but ignored for `opencode` workers.

Line 121 documents `topic_ids` as worker context input, but lines 159-167 drop it entirely on the `opencode` path. This creates silent behavior drift for callers.

Suggested fix:
```diff
 let readiness = self.state.deps.runtime_config.work_readiness();
 let is_opencode = args.worker_type.as_deref() == Some("opencode");
+if is_opencode && !args.topic_ids.is_empty() {
+    return Err(SpawnWorkerError(
+        "topic_ids are currently supported only for builtin workers".into(),
+    ));
+}
 let worker_id = if is_opencode {
```

Also applies to: 159-167
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tools/spawn_worker.rs` around lines 118 - 122, The review notes that the "topic_ids" field is documented but discarded on the opencode branch; update the spawn logic that constructs opencode workers so it preserves and forwards the topic_ids array instead of dropping it. Locate the opencode-specific branch in spawn_worker.rs (the code handling the "opencode" path / function that builds the opencode worker request) and copy/merge the topic_ids value from the incoming worker spec into the opencode request/payload (and any intermediate structs like the opencode worker config), ensuring the same validation/serialization applied for other paths is used so callers receive the documented behavior.

src/agent/channel.rs-685-694 (1)
685-694: ⚠️ Potential issue | 🟠 Major. Fallback should trigger when rendered topic content is empty, not only when the list is empty.

If assigned topics exist but contain blank `content`, lines 685 and 1031 still choose the topic path and can produce an empty bulletin block instead of using the global bulletin text.

Also applies to: 1031-1040
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 685 - 694, The current branch chooses the topic-generated bulletin whenever channel_topics is non-empty, even if all topics have blank rendered content; change the logic in the memory_bulletin construction to first build or check the filtered list of topics with non-empty topic.content and only use the topic path if that filtered list is non-empty, otherwise fall back to rc.memory_bulletin.load().to_string(); apply the same fix to the analogous block around the code referenced at lines ~1031-1040 so both uses of channel_topics rely on the filtered non-empty-content collection (use symbols channel_topics, memory_bulletin, and rc.memory_bulletin to locate the places to update).

src/agent/cortex_chat.rs-340-349 (1)
340-349: ⚠️ Potential issue | 🟠 Major. Fallback should depend on renderable topic content, not raw topic count.

If topics exist but all have empty `content`, line 340 takes the topic path and produces an empty context block instead of falling back to bulletin text.

Suggested fix:
```diff
-        let context_content = if all_topics.is_empty() {
-            runtime_config.memory_bulletin.load().to_string()
-        } else {
-            all_topics
-                .iter()
-                .filter(|topic| !topic.content.is_empty())
-                .map(|topic| format!("### {}\n\n{}", topic.title, topic.content))
-                .collect::<Vec<_>>()
-                .join("\n\n")
-        };
+        let topic_sections = all_topics
+            .iter()
+            .filter(|topic| !topic.content.trim().is_empty())
+            .map(|topic| format!("### {}\n\n{}", topic.title, topic.content))
+            .collect::<Vec<_>>();
+        let context_content = if topic_sections.is_empty() {
+            runtime_config.memory_bulletin.load().to_string()
+        } else {
+            topic_sections.join("\n\n")
+        };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/cortex_chat.rs` around lines 340 - 349, The current conditional uses all_topics.is_empty() so when topics exist but all have empty content you end up with an empty context; change the branch to check for renderable topics instead (e.g., use all_topics.iter().all(|t| t.content.is_empty()) or compute a filtered non_empty_topics first). In practice, update the logic around the context_content variable: if every topic has empty content, use runtime_config.memory_bulletin.load().to_string(), otherwise build the context by filtering topics with non-empty content and formatting them (same map/collect/join logic). Ensure you reference all_topics and context_content (and the filter that checks topic.content) when making the change.

src/api/agents.rs-842-846 (1)
842-846: ⚠️ Potential issue | 🟠 Major. The `state.topic_stores` insert needs matching cleanup on agent delete.

Lines 842-846 add persistent state, but `delete_agent` cleanup (starting at line 1075) doesn't remove the corresponding entry. That leaves stale per-agent topic stores after deletion.

Suggested follow-up in `delete_agent`:

```diff
 let mut searches = (**state.memory_searches.load()).clone();
 searches.remove(&agent_id);
 state.memory_searches.store(std::sync::Arc::new(searches));
+
+let mut topic_stores = (**state.topic_stores.load()).clone();
+topic_stores.remove(&agent_id);
+state.topic_stores.store(std::sync::Arc::new(topic_stores));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/agents.rs` around lines 842 - 846, The new per-agent entry added to state.topic_stores (via the clone/insert/store sequence around topic_stores_map and agent_id) isn't removed when an agent is deleted; update the delete_agent function to mirror the insertion cleanup: load the current map from state.topic_stores, clone it, remove the agent_id key (and any associated TopicStore cleanup if needed), and store the updated Arc-wrapped map back into state.topic_stores so no stale entries remain.

interface/src/routes/AgentTopics.tsx-665-681 (1)
665-681: ⚠️ Potential issue | 🟠 Major. `handleSave` always sends `criteria.max_memories`, even for unrelated edits.

This can overwrite existing criteria unintentionally (e.g., title-only edits still persist a `max_memories` value).

Suggested fix:
```diff
-  const handleSave = () => {
-    const criteria: TopicCriteria = {
-      max_memories: maxMemories,
-    };
+  const handleSave = () => {
+    const criteriaChanged =
+      query !== (topic.criteria.query ?? "") ||
+      JSON.stringify(selectedTypes) !== JSON.stringify(topic.criteria.memory_types ?? []) ||
+      minImportance !== (topic.criteria.min_importance ?? 0) ||
+      maxAge !== (topic.criteria.max_age ?? "") ||
+      maxMemories !== (topic.criteria.max_memories ?? 30);
+
+    const criteria: TopicCriteria = {};
+    if (maxMemories !== (topic.criteria.max_memories ?? 30)) {
+      criteria.max_memories = maxMemories;
+    }
     if (query.trim()) criteria.query = query.trim();
     if (selectedTypes.length > 0) criteria.memory_types = selectedTypes;
     if (minImportance > 0) criteria.min_importance = minImportance;
     if (maxAge) criteria.max_age = maxAge;
     onUpdate({
       title: title !== topic.title ? title : undefined,
       status: status !== topic.status ? status : undefined,
-      criteria,
+      criteria: criteriaChanged ? criteria : undefined,
       max_words: maxWords !== topic.max_words ? maxWords : undefined,
       channel_ids: selectedChannels,
       pin_ids: pinIds,
     });
   };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/routes/AgentTopics.tsx` around lines 665 - 681, handleSave currently always includes criteria.max_memories which can overwrite existing values; change the criteria construction in handleSave (the criteria variable and its max_memories assignment) to only include max_memories when maxMemories !== topic.criteria?.max_memories (and similarly only add other criteria fields if they differ from topic.criteria), and pass criteria to onUpdate only if at least one criteria field was added (or set criteria to undefined when no criteria changes).

interface/src/routes/AgentTopics.tsx-283-297 (1)
283-297: ⚠️ Potential issue | 🟠 Major. Numeric fields accept invalid values without bounds validation.

`Number()` casts cleared or invalid input (e.g., empty string → `0`, non-numeric → `NaN`, or negative values) without enforcing min/max constraints. HTML attributes provide no JavaScript or backend protection. Validate and clamp values before submitting to `onCreate`/`onUpdate` callbacks:
- Lines 415-421: `maxWords` input (min 100, max 10000)
- Lines 827-845: `maxMemories` input (min 5, max 100) and `maxWords` input (min 100, max 10000)
Verify each finding against the current code and only fix it if needed. In `@interface/src/routes/AgentTopics.tsx` around lines 283 - 297, The numeric inputs are not validated before submission in AgentTopics: ensure you parse and validate/clamp maxWords (and maxMemories where used) in the submit/update handlers (e.g., handleSubmit and the corresponding update handler) by converting the value safely (parseInt/Number), treating empty/invalid as the minimum, guarding against NaN, and enforcing the specified bounds (maxWords: min 100, max 10000; maxMemories: min 5, max 100) before calling onCreate/onUpdate; also update any UI state setters (setMaxWords/setMaxMemories) to reflect the clamped value so the form and payload stay consistent.

src/agent/channel.rs-679-684 (1)
679-684: ⚠️ Potential issue | 🟠 Major. Handle or log topic-store errors instead of silently defaulting.

Topic-store reads at lines 679-684, 1025-1030, and 1043-1048 use `.unwrap_or_default()` without logging or error propagation. This hides failures (DB issues, connectivity problems) and degrades prompts unexpectedly. The containing functions return `Result<String>`, so errors should be logged and propagated instead.

As per coding guidelines: "Don't silently discard errors; use `let _ =` only on channel sends where the receiver may be dropped; handle, log, or propagate all other errors."
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel.rs` around lines 679 - 684, The topic-store reads using self.deps.topic_store.get_for_channel(...) currently call .unwrap_or_default(), which silently hides errors; replace those unwraps (the calls to get_for_channel at the occurrences around lines 679–684, 1025–1030, and 1043–1048) with proper error handling: propagate the error using the ? operator (or map_err to add context) so the containing functions that return Result<String> bubble up failures, and additionally log the error (e.g., via tracing::error! or the crate logger) before returning to aid debugging; ensure you reference the same symbols (self.deps.topic_store.get_for_channel, self.deps.agent_id, &self.id) when adding context to the propagated error.

src/agent/cortex_chat.rs-334-339 (1)
334-339: ⚠️ Potential issue | 🟠 Major. Log topic-store failures in prompt composition.

Line 339 silently converts `list_active` errors to an empty vector without logging, hiding real data-path failures. This violates the guideline to handle, log, or propagate all errors (not just channel sends). The same file demonstrates the proper pattern in `load_channel_transcript` (line 421) using `tracing::warn!` before falling back.

Suggested fix:
```diff
-        let all_topics = self
-            .deps
-            .topic_store
-            .list_active(&self.deps.agent_id)
-            .await
-            .unwrap_or_default();
+        let all_topics = match self.deps.topic_store.list_active(&self.deps.agent_id).await {
+            Ok(topics) => topics,
+            Err(error) => {
+                tracing::warn!(
+                    %error,
+                    agent_id = %self.deps.agent_id,
+                    "failed to load active topics for cortex chat; falling back to bulletin"
+                );
+                Vec::new()
+            }
+        };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/cortex_chat.rs` around lines 334 - 339, The call that assigns all_topics currently swallows errors via unwrap_or_default on self.deps.topic_store.list_active(&self.deps.agent_id) — update this to match the pattern used in load_channel_transcript by awaiting the Result, logging the error with tracing::warn! (including context and the error value) if it fails, and only then falling back to an empty Vec; locate the assignment to all_topics in cortex_chat.rs and replace the unwrap_or_default usage with explicit match/if let handling that logs the error before using the default.

src/agent/cortex.rs-1896-1903 (1)
1896-1903: 🛠️ Refactor suggestion | 🟠 Major. Use the standard LLM agent construction contract for this new topic-synthesis path.
This new LLM flow is built ad hoc; it should follow the repo’s required pattern for agent type, hooks, history handling, and explicit turn limits.
As per coding guidelines: "Every LLM process must be a Rig `Agent<SpacebotModel, SpacebotHook>` with system prompt, tools, history, and hooks ... Agent construction must set `max_turns` explicitly ... History must be external..."
Verify each finding against the current code and only fix it if needed. In `@src/agent/cortex.rs` around lines 1896 - 1903, The current topic-synthesis flow builds an ad-hoc agent; replace it with the repo-standard Rig Agent construction using Agent<SpacebotModel, SpacebotHook> via rig::agent::AgentBuilder: ensure you pass the model into AgentBuilder::new(model), set .preamble(&system_prompt), attach the repo's tools and hooks (e.g., your SpacebotHook instance) with .hooks(...)/.tools(...), supply external history (do not create internal history—use the existing history object) via the builder's history method, and explicitly set .max_turns(...) before calling .build(); then call agent.prompt(&synthesis_prompt).await as before.

src/agent/cortex.rs-1803-1815 (1)
1803-1815: ⚠️ Potential issue | 🟠 Major. Propagate memory-search failures in topic sync instead of defaulting to empty results.

Using `unwrap_or_default()` here turns store/search failures into false "no updates" states, which can keep stale topic content indefinitely.

Proposed fix:
- let newest_results = deps + let newest_results = deps .memory_search .search( query, &SearchConfig { max_results: 1, sort_by: SearchSort::Recent, ..search_config.clone() }, ) - .await - .unwrap_or_default(); + .await + .context("failed to query newest memory for topic staleness")?; @@ - let mut results = deps + let mut results = deps .memory_search .search(query, &search_config) - .await - .unwrap_or_default(); + .await + .context("failed to query memories for topic synthesis")?;As per coding guidelines: "Don't silently discard errors; use
let _ =only on channel sends where the receiver may be dropped; handle, log, or propagate all other errors."Also applies to: 1832-1836
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/cortex.rs` around lines 1803 - 1815, the current calls to `deps.memory_search.search` that assign `newest_results` (and the similar call around lines 1832-1836) use `unwrap_or_default()`, which hides store/search errors; change these to propagate failures instead of defaulting: have the call return a Result and use the `?` operator (or `map_err` and return a meaningful error) so the enclosing topic sync function (where `newest_results` is used) can surface the error to the caller; ensure the same treatment is applied to both `deps.memory_search.search` invocations and update any function signatures to return a Result if needed so errors are not silently discarded.

src/agent/channel_dispatch.rs-256-269 (1)
256-269: ⚠️ Potential issue | 🟠 Major

Don't suppress topic lookup failures when building worker context.

`topic_store.get()` failures are currently ignored, which can silently degrade worker context and hide store issues.

Proposed fix

```diff
-            for topic_id in topic_ids {
-                if let Ok(Some(topic)) = state.deps.topic_store.get(topic_id).await {
-                    if !topic.content.is_empty() {
-                        topic_context
-                            .push_str(&format!("### {}\n\n{}\n\n", topic.title, topic.content));
-                    }
-                }
-            }
+            for topic_id in topic_ids {
+                match state.deps.topic_store.get(topic_id).await {
+                    Ok(Some(topic)) if !topic.content.is_empty() => {
+                        topic_context
+                            .push_str(&format!("### {}\n\n{}\n\n", topic.title, topic.content));
+                    }
+                    Ok(_) => {}
+                    Err(error) => {
+                        tracing::warn!(
+                            %error,
+                            topic_id = %topic_id,
+                            "failed to load topic for worker context"
+                        );
+                    }
+                }
+            }
```

As per coding guidelines: "Don't silently discard errors; use `let _ =` only on channel sends where the receiver may be dropped; handle, log, or propagate all other errors."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/channel_dispatch.rs` around lines 256 - 269, the loop currently ignores errors from `state.deps.topic_store.get(topic_id)` which can hide store issues; change the logic in the block that builds `topic_context` (the code that calls `state.deps.topic_store.get`) to explicitly handle Err cases instead of only matching `Ok(Some(...))`: match the Result, on `Ok(Some(topic))` keep the existing behavior, on `Ok(None)` continue, and on `Err(e)` either log the error with identifying info (include the topic_id and error) using the existing logger in `state.deps` or return/propagate the error up from the surrounding function (update its signature to Result if necessary); ensure the error handling path does not silently discard failures so topic lookup problems are visible while still allowing building worker_task when appropriate.

src/agent/cortex.rs-1219-1237 (1)
1219-1237: ⚠️ Potential issue | 🟠 Major

Don't ignore topic-store read errors during ready-task prompt assembly.

`deps.topic_store.get()` errors are dropped, which silently removes expected context and hides datastore failures.

Proposed fix

```diff
-        if let Some(topic_ids) = task.metadata.get("topic_ids").and_then(|v| v.as_array()) {
+        if let Some(topic_ids) = task.metadata.get("topic_ids").and_then(|value| value.as_array()) {
             let topic_ids: Vec<&str> = topic_ids.iter().filter_map(|v| v.as_str()).collect();
             if !topic_ids.is_empty() {
                 let mut topic_content = String::new();
                 for topic_id in &topic_ids {
-                    if let Ok(Some(topic)) = deps.topic_store.get(topic_id).await {
-                        if !topic.content.is_empty() {
-                            topic_content
-                                .push_str(&format!("\n## {}\n{}\n", topic.title, topic.content));
-                        }
-                    }
+                    match deps.topic_store.get(topic_id).await {
+                        Ok(Some(topic)) if !topic.content.is_empty() => {
+                            topic_content
+                                .push_str(&format!("\n## {}\n{}\n", topic.title, topic.content));
+                        }
+                        Ok(_) => {}
+                        Err(error) => {
+                            tracing::warn!(
+                                %error,
+                                topic_id = %topic_id,
+                                task_number = task.task_number,
+                                "failed to load topic context for picked-up task"
+                            );
+                        }
+                    }
                 }
                 if !topic_content.is_empty() {
                     task_prompt.push_str("\n\n---\nRelevant Context:\n");
                     task_prompt.push_str(&topic_content);
                 }
```

As per coding guidelines: "Don't silently discard errors; use `let _ =` only on channel sends where the receiver may be dropped; handle, log, or propagate all other errors."

src/api/topics.rs-69-73 (1)
69-73: ⚠️ Potential issue | 🟠 Major

`limit` is unbounded and accepts invalid values.

Line 72 allows any `i64`, and Line 288 passes it directly to storage. Negative/zero or very large values can produce expensive or unbounded version reads.

Suggested fix

```diff
 pub(super) async fn topic_versions(
     State(state): State<Arc<ApiState>>,
     Path(id): Path<String>,
     Query(query): Query<VersionsQuery>,
 ) -> Result<Json<TopicVersionsResponse>, StatusCode> {
+    if !(1..=100).contains(&query.limit) {
+        return Err(StatusCode::BAD_REQUEST);
+    }
+
     let stores = state.topic_stores.load();
     let store = stores.get(&query.agent_id).ok_or(StatusCode::NOT_FOUND)?;
```

Also applies to: 287-293
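The range check in the suggested fix is a one-liner that can be sketched standalone; `validate_limit` is an illustrative name, not the handler's actual code:

```rust
// Reject zero, negative, and oversized limits in one test via RangeInclusive::contains.
fn validate_limit(limit: i64) -> Result<i64, &'static str> {
    if (1..=100).contains(&limit) {
        Ok(limit)
    } else {
        Err("limit must be between 1 and 100")
    }
}
```

Validating at the handler boundary keeps the storage layer free of defensive clamping.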
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/topics.rs` around lines 69 - 73, the `VersionsQuery.limit` field currently accepts any `i64` and is forwarded directly to storage, allowing negative, zero, or huge values; change this by enforcing a positive bounded limit (e.g., `1..=MAX_VERSIONS`) — either by making limit an unsigned/bounded type or by adding validation/deserialization logic (using serde `deserialize_with` or a constructor) that clamps/validates against a defined `MAX_VERSIONS` constant while preserving `default_version_limit`; also ensure the call site that forwards `VersionsQuery.limit` to storage performs/uses that validated/clamped value (the `VersionsQuery` struct, `default_version_limit`, and the code path that forwards the limit to storage should be updated).

src/api/topics.rs-107-117 (1)
107-117: ⚠️ Potential issue | 🟠 Major

Status filter silently ignores `paused` and `archived`.

Line 107 only special-cases `"active"`. Passing `status=paused` or `status=archived` currently returns the full list instead of a filtered result (or a 400 for invalid status).

Suggested fix

```diff
-    let topics = if query.status.as_deref() == Some("active") {
-        store.list_active(&query.agent_id).await.map_err(|error| {
-            tracing::warn!(%error, agent_id = %query.agent_id, "failed to list active topics");
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?
-    } else {
-        store.list(&query.agent_id).await.map_err(|error| {
-            tracing::warn!(%error, agent_id = %query.agent_id, "failed to list topics");
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?
-    };
+    let topics = match query.status.as_deref() {
+        None => store.list(&query.agent_id).await.map_err(|error| {
+            tracing::warn!(%error, agent_id = %query.agent_id, "failed to list topics");
+            StatusCode::INTERNAL_SERVER_ERROR
+        })?,
+        Some("active") => store.list_active(&query.agent_id).await.map_err(|error| {
+            tracing::warn!(%error, agent_id = %query.agent_id, "failed to list active topics");
+            StatusCode::INTERNAL_SERVER_ERROR
+        })?,
+        Some(status) => {
+            let desired_status =
+                crate::topics::TopicStatus::parse(status).ok_or(StatusCode::BAD_REQUEST)?;
+            store
+                .list(&query.agent_id)
+                .await
+                .map_err(|error| {
+                    tracing::warn!(%error, agent_id = %query.agent_id, "failed to list topics");
+                    StatusCode::INTERNAL_SERVER_ERROR
+                })?
+                .into_iter()
+                .filter(|topic| topic.status == desired_status)
+                .collect()
+        }
+    };
```

🤖 Prompt for AI Agents
🟡 Minor comments (4)
interface/src/routes/AgentTopics.tsx-98-103 (1)
98-103: ⚠️ Potential issue | 🟡 Minor

Sync mutation should also invalidate topic versions cache.

The `["topic-versions", agentId, topicId]` cache is queried in this file and will become stale after a sync operation updates the topic. Add invalidation for this cache key:

Suggested fix

```diff
   const syncMutation = useMutation({
     mutationFn: (topicId: string) => api.syncTopic(agentId, topicId),
-    onSuccess: () => {
+    onSuccess: (_data, topicId) => {
       queryClient.invalidateQueries({ queryKey: ["topics", agentId] });
+      queryClient.invalidateQueries({
+        queryKey: ["topic-versions", agentId, topicId],
+      });
     },
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/routes/AgentTopics.tsx` around lines 98 - 103, the syncMutation currently invalidates only `["topics", agentId]`; after calling `api.syncTopic` in syncMutation (function syncMutation and `mutationFn: api.syncTopic`), also invalidate the topic versions cache key `["topic-versions", agentId, topicId]` inside onSuccess by calling `queryClient.invalidateQueries` for that composite key so cached versions are refreshed after a sync. Ensure you reference agentId and the topicId passed to mutationFn when building the key so both query caches are invalidated.

docs/design-docs/cortex-topic-synthesis.md-23-24 (1)
23-24: ⚠️ Potential issue | 🟡 Minor

Clarify bulletin word-count statement to match the current runtime contract.

This says the bulletin is synthesized to ~1500 words. If this is historical context, label it explicitly; otherwise it conflicts with the current ~500-word contract.

Based on learnings: "The Cortex must generate a memory bulletin—a periodically refreshed, LLM-curated summary of agent knowledge (~500 words) ... and cache it in `RuntimeConfig::memory_bulletin`."
Verify each finding against the current code and only fix it if needed. In `@docs/design-docs/cortex-topic-synthesis.md` around lines 23 - 24, the doc currently states `generate_bulletin()` synthesizes a ~1500 word briefing and stores it in `RuntimeConfig::memory_bulletin` as `ArcSwap<String>`; update the text to reflect the current runtime contract by changing the synthesized bulletin length to ~500 words (or explicitly mark the 1500-word figure as historical context), and ensure the sentence still references `generate_bulletin()` and `RuntimeConfig::memory_bulletin` (`ArcSwap<String>`) so readers can map the doc to the implementation.

docs/design-docs/cortex-topic-synthesis.md-74-78 (1)
74-78: ⚠️ Potential issue | 🟡 Minor

Add language specifiers to fenced code blocks (MD040).

Several fenced blocks are unlabeled and currently trigger markdownlint warnings. Please tag them (`text`, `rust`, `json`, etc.) so docs lint passes cleanly.

Also applies to: 107-110, 136-170, 260-264, 270-278, 406-414
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/design-docs/cortex-topic-synthesis.md` around lines 74 - 78, several fenced code blocks in the cortex-topic-synthesis doc (for example the block showing "active — cortex syncs this topic on schedule", "paused — exists but skipped during sync", "archived — hidden from UI, never synced, kept for history", and other ungated blocks flagged at the review) are missing language specifiers and trigger MD040; update each triple-backtick block by adding an appropriate language tag (e.g., text, rust, json) immediately after the opening ``` so linting passes (apply this to the blocks containing those status lines and the other flagged sections around the later examples).

src/agent/cortex.rs-378-386 (1)
378-386: ⚠️ Potential issue | 🟡 Minor

Collapse the nested `if` to satisfy the clippy CI gate.

This hits `clippy::collapsible_if` in CI due to the nested conditional structure. Flatten it using let-chains syntax available in Rust 2024:

Proposed fix

```diff
-        if is_memory_saved {
-            if let Err(error) = self
-                .deps
-                .topic_store
-                .mark_all_stale(&self.deps.agent_id)
-                .await
-            {
-                tracing::warn!(%error, "failed to mark topics stale after memory save");
-            }
-        }
+        if is_memory_saved
+            && let Err(error) = self
+                .deps
+                .topic_store
+                .mark_all_stale(&self.deps.agent_id)
+                .await
+        {
+            tracing::warn!(%error, "failed to mark topics stale after memory save");
+        }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/agent/cortex.rs` around lines 378 - 386, Collapse the nested if by using a let-chain so clippy::collapsible_if is satisfied: replace the two-level check (if is_memory_saved { if let Err(error) = self.deps.topic_store.mark_all_stale(&self.deps.agent_id).await { ... } }) with a single conditional using let-chains — e.g. if is_memory_saved && let Err(error) = self.deps.topic_store.mark_all_stale(&self.deps.agent_id).await { tracing::warn!(%error, "failed to mark topics stale after memory save"); } — referencing is_memory_saved, self.deps.topic_store.mark_all_stale, and self.deps.agent_id.
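The let-chain in the proposed fix requires Rust 1.88+/edition 2024. Where that is unavailable, the same flattening can be expressed as a single tuple match — a minimal standalone sketch with an illustrative name (`stale_warning`), not the repository's actual code:

```rust
// Equivalent to: if is_memory_saved { if let Err(e) = result { warn } },
// but as one match with no nesting for clippy::collapsible_if to flag.
fn stale_warning(is_memory_saved: bool, result: Result<(), String>) -> Option<String> {
    match (is_memory_saved, result) {
        (true, Err(error)) => Some(format!("failed to mark topics stale: {error}")),
        _ => None,
    }
}
```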
🧹 Nitpick comments (1)
src/lib.rs (1)
222-222: Consider narrowing `topic_store` exposure instead of putting it on fully shared `AgentDeps`.

This broadens topic access to all process types. If worker isolation remains a requirement, prefer passing resolved topic context (or a narrower interface) only where needed.
Based on learnings: Worker processes must be independent, get a specific task with focused system prompt and task-appropriate tools, with no channel context or personality.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib.rs` at line 222, The AgentDeps struct currently exposes topic_store as a public Arc<topics::TopicStore>, which grants all consumers broad access; change this by removing the pub topic_store field from AgentDeps and instead provide a narrowed API: either (a) add a private topic_store field and expose a minimal trait or interface (e.g., TopicStoreReadOnly or TopicResolver) with only the operations workers need, or (b) stop sharing topic_store entirely and pass a resolved TopicContext (or TaskTopic) into worker constructors/functions that require it; update code paths that consume AgentDeps::topic_store to use the new trait/TopicContext (search for AgentDeps, topic_store, topics::TopicStore, and worker constructors) and adjust types/signatures accordingly so worker processes only receive the focused topic context or narrowed interface.
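One way to sketch the narrowing: hand workers a read-only trait object, so write operations on the full store are unreachable from worker code. All names here (`TopicReader`, `FullTopicStore`, `WorkerDeps`) are hypothetical stand-ins, not the repository's types:

```rust
use std::sync::Arc;

// Read-only view a worker is allowed to hold.
trait TopicReader: Send + Sync {
    fn resolve(&self, id: &str) -> Option<String>;
}

// Stand-in for the full store, which also carries write operations.
struct FullTopicStore;

impl FullTopicStore {
    fn lookup(&self, id: &str) -> Option<String> {
        (id == "t1").then(|| "topic body".to_string())
    }
    #[allow(dead_code)]
    fn delete(&self, _id: &str) {} // never reachable through TopicReader
}

impl TopicReader for FullTopicStore {
    fn resolve(&self, id: &str) -> Option<String> {
        self.lookup(id)
    }
}

// Worker deps carry only the narrowed interface.
struct WorkerDeps {
    topics: Arc<dyn TopicReader>,
}

fn resolve_for_worker(deps: &WorkerDeps, id: &str) -> Option<String> {
    deps.topics.resolve(id)
}
```

The type system then enforces the isolation boundary instead of relying on call-site discipline.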
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (26)
- docs/design-docs/cortex-topic-synthesis.md
- interface/src/api/client.ts
- interface/src/components/AgentTabs.tsx
- interface/src/router.tsx
- interface/src/routes/AgentTopics.tsx
- migrations/20260302000001_topics.sql
- prompts/en/cortex_topic_synthesis.md.j2
- src/agent/channel.rs
- src/agent/channel_dispatch.rs
- src/agent/cortex.rs
- src/agent/cortex_chat.rs
- src/api.rs
- src/api/agents.rs
- src/api/server.rs
- src/api/state.rs
- src/api/topics.rs
- src/config.rs
- src/lib.rs
- src/main.rs
- src/memory/search.rs
- src/prompts/engine.rs
- src/tools/spawn_worker.rs
- src/topics.rs
- src/topics/store.rs
- tests/bulletin.rs
- tests/context_dump.rs
```rust
    /// Convenience method for rendering topic synthesis system prompt.
    pub fn render_topic_synthesis_prompt(&self, title: &str) -> Result<String> {
        self.render(
            "cortex_topic_synthesis",
            context! {
                title => title,
            },
        )
    }
```
🧩 Analysis chain
Repository: spacedriveapp/spacebot

🏁 Script executed:

```shell
# First, check the actual content at the specified lines
head -n 410 src/prompts/engine.rs | tail -n 20
```

🏁 Script executed:

```shell
# Search for the render_topic_synthesis_prompt method and cortex_topic_synthesis references
rg -n 'render_topic_synthesis_prompt|cortex_topic_synthesis' src/
```

🏁 Script executed:

```shell
# Look at the PromptEngine::new() method to see template registration pattern
rg -n 'fn new\(\)' src/prompts/engine.rs -A 50 | head -n 100
```

🏁 Script executed:

```shell
# Check if there's a prompts/text directory or module with template content
fd -t f 'cortex_topic_synthesis' prompts/ || true
```

🏁 Script executed:

```shell
# Look at the prompts directory structure
ls -la prompts/ 2>/dev/null || echo "prompts/ not found at root"
find . -name 'cortex_topic_synthesis*' -o -name 'cortex_profile*' 2>/dev/null | head -20
```

🏁 Script executed:

```shell
# View PromptEngine::new() to see template registration
rg -n 'impl PromptEngine' src/prompts/engine.rs -A 150 | grep -A 100 'fn new'
```

🏁 Script executed:

```shell
# Look for the text module and get() function
find . -path './src/prompts/text*' -type f
```

🏁 Script executed:

```shell
# Search for how prompts::text::get is implemented
rg -n 'fn get\(' src/prompts/ -B 2 -A 10
```

🏁 Script executed:

```shell
# Check if there's a macro or loader for templates from the prompts directory
rg -n 'add_template|get_template' src/prompts/engine.rs | head -20
```

🏁 Script executed:

```shell
# Search specifically for cortex_topic_synthesis in the new() method
rg -n 'cortex_topic_synthesis' src/prompts/engine.rs
```

🏁 Script executed:

```shell
# Check the text.rs file for the cortex_topic_synthesis entry in the match statement
rg -n 'cortex_topic_synthesis' src/prompts/text.rs
```

🏁 Script executed:

```shell
# Get the full PromptEngine::new() method to see where it ends
rg -n 'fn new\(language: &str\)' src/prompts/engine.rs -A 200 | tail -100
```

🏁 Script executed:

```shell
# Check if text.rs is auto-generated or has a generator script
head -n 50 src/prompts/text.rs
```

🏁 Script executed:

```shell
# Look for build scripts or macro invocations
find . -name 'build.rs' -o -name '*generate*' -o -name '*macro*' | grep -v node_modules | grep -v target
```

🏁 Script executed:

```shell
# Check if cortex_profile is in text.rs since it IS registered
rg -n 'cortex_profile' src/prompts/text.rs
```

🏁 Script executed:

```shell
# Check the build.rs to understand how it generates text.rs
cat build.rs
```

🏁 Script executed:

```shell
# Look for all entries in text.rs to see the full match statement
rg -n 'include_str.*cortex' src/prompts/text.rs
```

🏁 Script executed:

```shell
# Double-check - search for cortex_topic_synthesis anywhere in src/
rg -n 'cortex_topic_synthesis' src/ --type rs
```

🏁 Script executed:

```shell
# Search for cortex_topic_synthesis more broadly
rg 'cortex_topic_synthesis' src/
```

🏁 Script executed:

```shell
# Count how many entries are in the text.rs lookup function
rg -n 'include_str' src/prompts/text.rs | wc -l
```

🏁 Script executed:

```shell
# Get a sample of the middle section of text.rs to see all cortex entries
rg -n '"cortex' src/prompts/text.rs
```

🏁 Script executed:

```shell
# Confirm template file location and verify no other cortex_topic_synthesis entries exist
find prompts/ -name '*cortex_topic_synthesis*' && echo "---" && rg -r 'cortex_topic_synthesis' --count src/
```
`render_topic_synthesis_prompt` will fail at runtime—template is not registered.
The template file `prompts/en/cortex_topic_synthesis.md.j2` exists, but `cortex_topic_synthesis` is neither registered in the MiniJinja environment nor defined in the `text::lookup` function. When `render_topic_synthesis_prompt()` calls `self.render("cortex_topic_synthesis", ...)`, it will fail with "template not found".
Fix requires updates to two files:
- `src/prompts/text.rs`: Add entry to the `lookup` match statement:

  ```rust
  ("en", "cortex_topic_synthesis") => include_str!("../../prompts/en/cortex_topic_synthesis.md.j2"),
  ```

- `src/prompts/engine.rs`: Add to `PromptEngine::new()` after the `cortex_profile` registration:

  ```rust
  env.add_template(
      "cortex_topic_synthesis",
      crate::prompts::text::get("cortex_topic_synthesis"),
  )?;
  ```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/prompts/engine.rs` around lines 399 - 407, render_topic_synthesis_prompt
will panic at runtime because the "cortex_topic_synthesis" template isn't
registered or returned by text::lookup; update text::lookup to include ("en",
"cortex_topic_synthesis") mapping to the embedded template and register the
template in PromptEngine::new() right after the existing cortex_profile
registration by calling env.add_template with the name "cortex_topic_synthesis"
and the value from crate::prompts::text::get("cortex_topic_synthesis"); ensure
names match the string used in render_topic_synthesis_prompt.
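The failure mode can be reproduced in miniature: a `(language, name)` lookup that must stay in sync with the render path. In the real code each arm's value comes from `include_str!`; string literals stand in here so the sketch is self-contained:

```rust
// Embedded-template lookup keyed by (language, name); an unregistered name
// surfaces as None, which the engine turns into a "template not found" error.
fn lookup(language: &str, name: &str) -> Option<&'static str> {
    Some(match (language, name) {
        ("en", "cortex_profile") => "cortex profile template body",
        ("en", "cortex_topic_synthesis") => "topic synthesis template body",
        _ => return None,
    })
}
```

A template file on disk is invisible until both the lookup arm and the engine registration exist, which is why the fix touches two files.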
```rust
        })?
        .ok_or(StatusCode::NOT_FOUND)?;

    Ok(Json(TopicResponse { topic }))
```
The store lookup is keyed by `agent_id`, but `TopicStore::get()` fetches by `id` only. If topic IDs ever leak/are guessable, this endpoint can read another agent's topic (same idea for update/delete/sync).
Suggested change:

```diff
-    Ok(Json(TopicResponse { topic }))
+    if topic.agent_id != query.agent_id {
+        return Err(StatusCode::NOT_FOUND);
+    }
+    Ok(Json(TopicResponse { topic }))
```
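The ownership check generalizes to any multi-tenant fetch-by-id: filter the hit on the owner before returning it, so a foreign topic is indistinguishable from a missing one. A self-contained sketch with hypothetical names (`Topic`, `get_for_agent`):

```rust
use std::collections::HashMap;

struct Topic {
    agent_id: String,
    title: String,
}

// Unscoped: any caller holding a valid id sees the topic.
fn get<'a>(topics: &'a HashMap<String, Topic>, id: &str) -> Option<&'a Topic> {
    topics.get(id)
}

// Scoped: a topic owned by another agent behaves like a missing one,
// so the API layer maps both cases to 404 without leaking existence.
fn get_for_agent<'a>(
    topics: &'a HashMap<String, Topic>,
    id: &str,
    agent_id: &str,
) -> Option<&'a Topic> {
    topics.get(id).filter(|topic| topic.agent_id == agent_id)
}
```

Returning the same "not found" for both cases avoids an existence oracle for guessed IDs.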
```rust
            for topic_id in topic_ids {
                if let Ok(Some(topic)) = state.deps.topic_store.get(topic_id).await {
                    if !topic.content.is_empty() {
                        topic_context
                            .push_str(&format!("### {}\n\n{}\n\n", topic.title, topic.content));
                    }
                }
            }
```
`topic_ids` is caller-controlled and `TopicStore::get()` isn't scoped by agent, so this can inject another agent's topic content if an ID is guessed/leaked. Also `Err` gets silently ignored here.
Suggested change:

```diff
-for topic_id in topic_ids {
-    if let Ok(Some(topic)) = state.deps.topic_store.get(topic_id).await {
-        if !topic.content.is_empty() {
-            topic_context
-                .push_str(&format!("### {}\n\n{}\n\n", topic.title, topic.content));
-        }
-    }
-}
+for topic_id in topic_ids {
+    let topic = match state.deps.topic_store.get(topic_id).await {
+        Ok(Some(topic)) if topic.agent_id == *state.deps.agent_id => topic,
+        Ok(_) => continue,
+        Err(error) => {
+            tracing::warn!(%error, topic_id = %topic_id, "failed to load topic for worker context injection");
+            continue;
+        }
+    };
+    if !topic.content.is_empty() {
+        topic_context.push_str(&format!("### {}\n\n{}\n\n", topic.title, topic.content));
+    }
+}
```
Topics are no longer assigned to channels — the bulletin remains the universal awareness floor for all channels/branches. Topics exist as standalone context documents that workers and branches pull in explicitly via `topic_ids` on `spawn_worker`.

- Always inject memory bulletin into channel/cortex-chat system prompts
- List available topics (id + title) in the status block so the LLM can reference them when spawning workers
- Remove channel_ids from Topic struct, store, migration, and API
- Remove get_for_channel query (no longer needed)
- Fix collapsible-if clippy lints in channel_dispatch and cortex
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/api/topics.rs (1)
243-268: ⚠️ Potential issue | 🟡 Minor

`sync_topic` should validate agent ownership before clearing sync timestamp.

Like other endpoints, this fetches by `id` without verifying the topic belongs to the `request.agent_id`. A malicious caller could force re-sync of another agent's topic.

🛡️ Suggested fix

```diff
     let topic = store
         .get(&id)
         .await
         .map_err(|error| {
             tracing::warn!(%error, "failed to get topic for sync");
             StatusCode::INTERNAL_SERVER_ERROR
         })?
         .ok_or(StatusCode::NOT_FOUND)?;

+    if topic.agent_id != request.agent_id {
+        return Err(StatusCode::NOT_FOUND);
+    }
+
     // Clear last_synced_at to force a re-sync on the next topic sync pass.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/topics.rs` around lines 243 - 268, sync_topic currently fetches a topic by id and unconditionally clears last_synced_at, allowing callers to reset topics they don't own; update sync_topic to validate ownership by comparing the fetched topic's agent_id (from the Topic struct returned by store.get(&id)) with request.agent_id and return StatusCode::NOT_FOUND or FORBIDDEN if they differ, only proceeding to set last_synced_at = None and calling store.update(&updated) when the agent IDs match; reference functions/variables: sync_topic, request.agent_id, store.get(&id), the returned Topic (topic), and store.update(&updated).
🧹 Nitpick comments (1)
src/topics/store.rs (1)
311-317: Silent JSON parse failures could mask data corruption.

Both helpers silently return defaults when JSON parsing fails. Consider logging a warning so corrupted data doesn't go unnoticed.

🔧 Proposed improvement

```diff
 fn parse_json_string_array(value: &str) -> Vec<String> {
-    serde_json::from_str(value).unwrap_or_default()
+    serde_json::from_str(value).unwrap_or_else(|error| {
+        tracing::warn!(%error, raw = %value, "failed to parse JSON string array, using empty default");
+        Vec::new()
+    })
 }

 fn parse_criteria(value: &str) -> TopicCriteria {
-    serde_json::from_str(value).unwrap_or_default()
+    serde_json::from_str(value).unwrap_or_else(|error| {
+        tracing::warn!(%error, "failed to parse topic criteria JSON, using default");
+        TopicCriteria::default()
+    })
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/topics/store.rs` around lines 311 - 317, The helpers parse_json_string_array and parse_criteria currently call serde_json::from_str(...).unwrap_or_default(), which silently hides parse errors; change them to attempt serde_json::from_str and on Err log a warning including the input string and the error (e.g., using log::warn! or tracing::warn!), then return the default; update both functions (parse_json_string_array and parse_criteria) so they do not panic but emit a warning with context (the original value and serde error) before returning the default.
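The "default, but log it" shape is generic. A std-only sketch — the real helpers use `serde_json` and `tracing`; here `str::parse` and `eprintln!` stand in, and `parse_limit` is an illustrative name:

```rust
// Fall back to a default on parse failure, but emit a warning with the raw
// input and the error so corrupted data stays visible in logs.
fn parse_limit(raw: &str) -> i64 {
    raw.parse::<i64>().unwrap_or_else(|error| {
        eprintln!("failed to parse limit {raw:?}: {error}; using default 20");
        20
    })
}
```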
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/agent/cortex.rs`:
- Around line 1218-1235: The topic-injection loop in src/agent/cortex.rs calls
deps.topic_store.get(topic_id).await without validating that the returned topic
belongs to this agent and it swallows errors; change the loop to match on the
Result/Option from deps.topic_store.get(topic_id).await and handle cases
explicitly: on Ok(Some(topic)) check topic.agent_id == deps.agent_id and
topic.content.is_empty() before appending; on Ok(None) and on Err(e) emit at
least a debug-level log (include the topic_id and error for Err), and do not
silently ignore failures; mirror the validation used in spawn_worker_from_state
to locate this logic by referencing deps.topic_store.get, deps.agent_id, and
task.metadata/topic_ids.
In `@src/api/topics.rs`:
- Around line 271-287: topic_versions currently calls store.get_versions(&id,
query.limit) without verifying the topic belongs to query.agent_id; update it to
validate ownership first by fetching the topic (e.g., call
store.get_topic(&id).await or equivalent) and compare its agent/owner field to
query.agent_id, returning StatusCode::NOT_FOUND or FORBIDDEN if it doesn’t
match, then proceed to call get_versions only after validation; alternatively,
if the store supports agent-scoped access, use an agent-scoped method like
store.get_versions_for_agent(&query.agent_id, &id, query.limit) instead of
store.get_versions so the ownership check is enforced inside the store.
---
Duplicate comments:
In `@src/api/topics.rs`:
- Around line 243-268: sync_topic currently fetches a topic by id and
unconditionally clears last_synced_at, allowing callers to reset topics they
don't own; update sync_topic to validate ownership by comparing the fetched
topic's agent_id (from the Topic struct returned by store.get(&id)) with
request.agent_id and return StatusCode::NOT_FOUND or FORBIDDEN if they differ,
only proceeding to set last_synced_at = None and calling store.update(&updated)
when the agent IDs match; reference functions/variables: sync_topic,
request.agent_id, store.get(&id), the returned Topic (topic), and
store.update(&updated).
---
Nitpick comments:
In `@src/topics/store.rs`:
- Around line 311-317: The helpers parse_json_string_array and parse_criteria
currently call serde_json::from_str(...).unwrap_or_default(), which silently
hides parse errors; change them to attempt serde_json::from_str and on Err log a
warning including the input string and the error (e.g., using log::warn! or
tracing::warn!), then return the default; update both functions
(parse_json_string_array and parse_criteria) so they do not panic but emit a
warning with context (the original value and serde error) before returning the
default.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- migrations/20260302000001_topics.sql
- src/agent/channel.rs
- src/agent/channel_dispatch.rs
- src/agent/cortex.rs
- src/agent/cortex_chat.rs
- src/api/topics.rs
- src/topics/store.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/agent/cortex_chat.rs
- src/agent/channel.rs
```rust
        // Inject topic content from task metadata if topic_ids are present.
        if let Some(topic_ids) = task.metadata.get("topic_ids").and_then(|v| v.as_array()) {
            let topic_ids: Vec<&str> = topic_ids.iter().filter_map(|v| v.as_str()).collect();
            if !topic_ids.is_empty() {
                let mut topic_content = String::new();
                for topic_id in &topic_ids {
                    if let Ok(Some(topic)) = deps.topic_store.get(topic_id).await
                        && !topic.content.is_empty()
                    {
                        topic_content.push_str(&format!("\n## {}\n{}\n", topic.title, topic.content));
                    }
                }
                if !topic_content.is_empty() {
                    task_prompt.push_str("\n\n---\nRelevant Context:\n");
                    task_prompt.push_str(&topic_content);
                }
            }
        }
```
Topic injection lacks agent_id validation and logs no errors.
Similar to spawn_worker_from_state, topic fetching here doesn't verify topic.agent_id == deps.agent_id, and errors from topic_store.get() are silently ignored. Consider adding validation and at least a debug-level log on failures.
🛡️ Suggested fix
```diff
 if let Some(topic_ids) = task.metadata.get("topic_ids").and_then(|v| v.as_array()) {
     let topic_ids: Vec<&str> = topic_ids.iter().filter_map(|v| v.as_str()).collect();
     if !topic_ids.is_empty() {
         let mut topic_content = String::new();
         for topic_id in &topic_ids {
-            if let Ok(Some(topic)) = deps.topic_store.get(topic_id).await
-                && !topic.content.is_empty()
-            {
-                topic_content.push_str(&format!("\n## {}\n{}\n", topic.title, topic.content));
+            match deps.topic_store.get(topic_id).await {
+                Ok(Some(topic)) if topic.agent_id == *deps.agent_id && !topic.content.is_empty() => {
+                    topic_content.push_str(&format!("\n## {}\n{}\n", topic.title, topic.content));
+                }
+                Ok(Some(_)) => {} // Wrong agent or empty content
+                Ok(None) => {}
+                Err(error) => {
+                    tracing::debug!(%error, topic_id, "failed to load topic for task context");
+                }
             }
         }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// Inject topic content from task metadata if topic_ids are present.
if let Some(topic_ids) = task.metadata.get("topic_ids").and_then(|v| v.as_array()) {
    let topic_ids: Vec<&str> = topic_ids.iter().filter_map(|v| v.as_str()).collect();
    if !topic_ids.is_empty() {
        let mut topic_content = String::new();
        for topic_id in &topic_ids {
            match deps.topic_store.get(topic_id).await {
                Ok(Some(topic)) if topic.agent_id == *deps.agent_id && !topic.content.is_empty() => {
                    topic_content.push_str(&format!("\n## {}\n{}\n", topic.title, topic.content));
                }
                Ok(Some(_)) => {} // Wrong agent or empty content
                Ok(None) => {}
                Err(error) => {
                    tracing::debug!(%error, topic_id, "failed to load topic for task context");
                }
            }
        }
        if !topic_content.is_empty() {
            task_prompt.push_str("\n\n---\nRelevant Context:\n");
            task_prompt.push_str(&topic_content);
        }
    }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/agent/cortex.rs` around lines 1218 - 1235, The topic-injection loop in
src/agent/cortex.rs calls deps.topic_store.get(topic_id).await without
validating that the returned topic belongs to this agent and it swallows errors;
change the loop to match on the Result/Option from
deps.topic_store.get(topic_id).await and handle cases explicitly: on
Ok(Some(topic)) check topic.agent_id == deps.agent_id and
topic.content.is_empty() before appending; on Ok(None) and on Err(e) emit at
least a debug-level log (include the topic_id and error for Err), and do not
silently ignore failures; mirror the validation used in spawn_worker_from_state
to locate this logic by referencing deps.topic_store.get, deps.agent_id, and
task.metadata/topic_ids.
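Stripped of the async store calls, the injection step this comment targets is a pure formatting fold over the fetched topics. A minimal std-only sketch (the `Topic` struct and `render_topic_context` helper are illustrative names, not the crate's actual API):

```rust
// Hypothetical minimal mirror of the topic fields used during injection.
struct Topic {
    title: String,
    content: String,
}

/// Folds non-empty topics into the "Relevant Context" block appended to the
/// task prompt; returns None when nothing qualifies, so the caller can skip
/// the header entirely (mirroring the !topic_content.is_empty() guard).
fn render_topic_context(topics: &[Topic]) -> Option<String> {
    let mut body = String::new();
    for topic in topics {
        if topic.content.is_empty() {
            continue; // mirrors the !topic.content.is_empty() guard
        }
        body.push_str(&format!("\n## {}\n{}\n", topic.title, topic.content));
    }
    if body.is_empty() {
        None
    } else {
        Some(format!("\n\n---\nRelevant Context:\n{body}"))
    }
}

fn main() {
    let topics = vec![
        Topic { title: "Release".into(), content: "v2 ships Friday".into() },
        Topic { title: "Empty".into(), content: String::new() },
    ];
    let rendered = render_topic_context(&topics).unwrap();
    assert!(rendered.starts_with("\n\n---\nRelevant Context:\n"));
    assert!(rendered.contains("## Release"));
    assert!(!rendered.contains("## Empty"));
}
```

Separating the formatting from the fetching like this also makes the error-handling change above easier to unit-test: the `match` on the store result stays in the async loop, while the string assembly becomes a pure function.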
```rust
pub(super) async fn topic_versions(
    State(state): State<Arc<ApiState>>,
    Path(id): Path<String>,
    Query(query): Query<VersionsQuery>,
) -> Result<Json<TopicVersionsResponse>, StatusCode> {
    let stores = state.topic_stores.load();
    let store = stores.get(&query.agent_id).ok_or(StatusCode::NOT_FOUND)?;

    let versions = store
        .get_versions(&id, query.limit)
        .await
        .map_err(|error| {
            tracing::warn!(%error, topic_id = %id, "failed to get topic versions");
            StatusCode::INTERNAL_SERVER_ERROR
        })?;

    Ok(Json(TopicVersionsResponse { versions }))
```
topic_versions lacks agent ownership validation.
This endpoint fetches versions by topic_id without verifying the topic belongs to the queried agent_id. Consider fetching the topic first to validate ownership, or add an agent-scoped query to the store.
🛡️ Suggested fix
```diff
 pub(super) async fn topic_versions(
     State(state): State<Arc<ApiState>>,
     Path(id): Path<String>,
     Query(query): Query<VersionsQuery>,
 ) -> Result<Json<TopicVersionsResponse>, StatusCode> {
     let stores = state.topic_stores.load();
     let store = stores.get(&query.agent_id).ok_or(StatusCode::NOT_FOUND)?;
+    // Validate topic belongs to agent before returning versions.
+    let topic = store
+        .get(&id)
+        .await
+        .map_err(|error| {
+            tracing::warn!(%error, topic_id = %id, "failed to validate topic ownership");
+            StatusCode::INTERNAL_SERVER_ERROR
+        })?
+        .ok_or(StatusCode::NOT_FOUND)?;
+
+    if topic.agent_id != query.agent_id {
+        return Err(StatusCode::NOT_FOUND);
+    }
+
     let versions = store
         .get_versions(&id, query.limit)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/api/topics.rs` around lines 271 - 287, topic_versions currently calls
store.get_versions(&id, query.limit) without verifying the topic belongs to
query.agent_id; update it to validate ownership first by fetching the topic
(e.g., call store.get_topic(&id).await or equivalent) and compare its
agent/owner field to query.agent_id, returning StatusCode::NOT_FOUND or
FORBIDDEN if it doesn’t match, then proceed to call get_versions only after
validation; alternatively, if the store supports agent-scoped access, use an
agent-scoped method like store.get_versions_for_agent(&query.agent_id, &id,
query.limit) instead of store.get_versions so the ownership check is enforced
inside the store.
```rust
let topic = store
    .get(&id)
    .await
    .map_err(|error| {
        tracing::warn!(%error, agent_id = %query.agent_id, topic_id = %id, "failed to get topic");
        StatusCode::INTERNAL_SERVER_ERROR
    })?
    .ok_or(StatusCode::NOT_FOUND)?;

Ok(Json(TopicResponse { topic }))
```
`TopicStore::get()` is `WHERE id = ?` (not agent-scoped), so a caller can read another agent's topic if they know the ID. Probably worth enforcing ownership at the API boundary (and similarly for update/delete/sync/versions).
```rust
let topic = store
    .get(&id)
    .await
    .map_err(|error| {
        tracing::warn!(%error, agent_id = %query.agent_id, topic_id = %id, "failed to get topic");
        StatusCode::INTERNAL_SERVER_ERROR
    })?
    .ok_or(StatusCode::NOT_FOUND)?;

if topic.agent_id != query.agent_id {
    return Err(StatusCode::NOT_FOUND);
}

Ok(Json(TopicResponse { topic }))
```
```rust
let stores = state.topic_stores.load();
let store = stores.get(&query.agent_id).ok_or(StatusCode::NOT_FOUND)?;

let deleted = store.delete(&id).await.map_err(|error| {
    tracing::warn!(%error, agent_id = %query.agent_id, topic_id = %id, "failed to delete topic");
    StatusCode::INTERNAL_SERVER_ERROR
})?;

if !deleted {
    return Err(StatusCode::NOT_FOUND);
}

Ok(Json(TopicActionResponse {
    success: true,
    message: format!("Topic {id} deleted"),
}))
```
Same ownership issue for deletes: `TopicStore::delete()` is `DELETE FROM topics WHERE id = ?`, so this can delete cross-agent by ID.
```rust
let stores = state.topic_stores.load();
let store = stores.get(&query.agent_id).ok_or(StatusCode::NOT_FOUND)?;

let topic = store
    .get(&id)
    .await
    .map_err(|error| {
        tracing::warn!(%error, agent_id = %query.agent_id, topic_id = %id, "failed to get topic for delete");
        StatusCode::INTERNAL_SERVER_ERROR
    })?
    .ok_or(StatusCode::NOT_FOUND)?;

if topic.agent_id != query.agent_id {
    return Err(StatusCode::NOT_FOUND);
}

let deleted = store.delete(&id).await.map_err(|error| {
    tracing::warn!(%error, agent_id = %query.agent_id, topic_id = %id, "failed to delete topic");
    StatusCode::INTERNAL_SERVER_ERROR
})?;

if !deleted {
    return Err(StatusCode::NOT_FOUND);
}

Ok(Json(TopicActionResponse {
    success: true,
    message: format!("Topic {id} deleted"),
}))
```
Removes channel assignment UI from create/edit topic dialogs and the channel count badges from topic cards/detail views. Fixes 'undefined is not an object' crash from accessing topic.channel_ids after the backend stopped sending it.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/api/client.ts`:
- Around line 954-963: The TopicCriteria interface exposes a stale channel_ids
property; remove channel_ids from the TopicCriteria type declaration and any
related typing imports/usages to prevent sending unsupported fields. Update all
call sites that construct TopicCriteria (search clients, UI form serializers,
and tests) to stop setting channel_ids and adjust any validation or request
payload mapping logic that filters/reads channel_ids; also update API
docs/comments and run tests to ensure no remaining references to
TopicCriteria.channel_ids remain.
📒 Files selected for processing (2)
- interface/src/api/client.ts
- interface/src/routes/AgentTopics.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- interface/src/routes/AgentTopics.tsx
```typescript
export interface TopicCriteria {
  query?: string;
  memory_types?: MemoryType[];
  min_importance?: number;
  channel_ids?: string[];
  max_age?: string;
  mode?: SearchMode;
  sort_by?: SearchSort;
  max_memories?: number;
}
```
Remove stale channel_ids from TopicCriteria (API contract drift).
At Line 958, channel_ids is still exposed even though topics are now recall-layer based and no longer channel-assigned. Keeping it in the client model risks sending unsupported fields and drifting UI behavior from backend semantics.
Proposed fix
```diff
 export interface TopicCriteria {
   query?: string;
   memory_types?: MemoryType[];
   min_importance?: number;
-  channel_ids?: string[];
   max_age?: string;
   mode?: SearchMode;
   sort_by?: SearchSort;
   max_memories?: number;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/api/client.ts` around lines 954 - 963, The TopicCriteria
interface exposes a stale channel_ids property; remove channel_ids from the
TopicCriteria type declaration and any related typing imports/usages to prevent
sending unsupported fields. Update all call sites that construct TopicCriteria
(search clients, UI form serializers, and tests) to stop setting channel_ids and
adjust any validation or request payload mapping logic that filters/reads
channel_ids; also update API docs/comments and run tests to ensure no remaining
references to TopicCriteria.channel_ids remain.
The sync_topic API handler was clearing last_synced_at but had no way to wake the background sync loop, which sleeps for up to 10 minutes. Add a tokio::sync::Notify to AgentDeps that the API handler signals and the sync loop listens for via tokio::select!, so Sync Now triggers synthesis immediately instead of waiting for the next timer tick.
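The wake-or-timeout pattern described above — the sync loop sleeps for its interval, but a `Sync Now` call wakes it immediately — has a std-only analogue using `Condvar::wait_timeout_while`. This is an illustrative sketch of the idea, not the crate's actual tokio-based code (which uses `tokio::sync::Notify` with `tokio::select!`):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

/// Std-only analogue of tokio::sync::Notify + select!: the loop sleeps for
/// the interval, but wakes immediately when notify() flips the flag.
struct SyncNotify {
    flag: Mutex<bool>,
    cvar: Condvar,
}

impl SyncNotify {
    fn new() -> Self {
        Self { flag: Mutex::new(false), cvar: Condvar::new() }
    }

    fn notify(&self) {
        *self.flag.lock().unwrap() = true;
        self.cvar.notify_one();
    }

    /// Returns true if woken by notify(), false if the interval elapsed.
    fn wait_or_timeout(&self, interval: Duration) -> bool {
        let guard = self.flag.lock().unwrap();
        let (mut guard, _timeout) = self
            .cvar
            .wait_timeout_while(guard, interval, |notified| !*notified)
            .unwrap();
        let woken = *guard;
        *guard = false; // consume the notification
        woken
    }
}

fn main() {
    let notify = Arc::new(SyncNotify::new());
    let n = notify.clone();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        n.notify(); // simulates the Sync Now API handler
    });
    // Would otherwise sleep the full interval; the notify wakes it early.
    let woken = notify.wait_or_timeout(Duration::from_secs(600));
    assert!(woken);
}
```

A notification posted before the wait starts is not lost here (the flag stays set until consumed), which matches the permit semantics that make `tokio::sync::Notify` suitable for this kind of edge-triggered wakeup.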
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/api/agents.rs (1)
731-848: ⚠️ Potential issue | 🟠 Major — Register and lifecycle-manage `topic_sync_notify` alongside `topic_store`.

`create_agent` persists `topic_store` into API state, but the new `topic_sync_notify` is only created inside `AgentDeps` and never added to `state.topic_sync_notifiers`. Also, agent deletion cleanup does not remove topic-related state maps. This creates stale state and can break on-demand topic sync behavior for agents created/deleted via API.

💡 Suggested fix

```diff
+ let topic_sync_notify = Arc::new(tokio::sync::Notify::new());
  let deps = crate::AgentDeps {
      agent_id: arc_agent_id.clone(),
      memory_search: memory_search.clone(),
      llm_manager,
      mcp_manager: mcp_manager.clone(),
      task_store: task_store.clone(),
      topic_store: topic_store.clone(),
@@
-     topic_sync_notify: Arc::new(tokio::sync::Notify::new()),
+     topic_sync_notify: topic_sync_notify.clone(),
@@
  let mut topic_stores_map = (**state.topic_stores.load()).clone();
  topic_stores_map.insert(agent_id.clone(), topic_store);
  state
      .topic_stores
      .store(std::sync::Arc::new(topic_stores_map));
+
+ let mut topic_sync_notifiers = (**state.topic_sync_notifiers.load()).clone();
+ topic_sync_notifiers.insert(agent_id.clone(), topic_sync_notify);
+ state
+     .topic_sync_notifiers
+     .store(std::sync::Arc::new(topic_sync_notifiers));
```

```rust
// In delete_agent cleanup block, also remove both:
let mut topic_stores = (**state.topic_stores.load()).clone();
topic_stores.remove(&agent_id);
state.topic_stores.store(std::sync::Arc::new(topic_stores));

let mut topic_sync_notifiers = (**state.topic_sync_notifiers.load()).clone();
topic_sync_notifiers.remove(&agent_id);
state
    .topic_sync_notifiers
    .store(std::sync::Arc::new(topic_sync_notifiers));
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/api/agents.rs` around lines 731 - 848, the new topic_sync_notify created in create_agent (field topic_sync_notify in AgentDeps) is not stored in state.topic_sync_notifiers, and delete_agent doesn't clean up topic-related maps; fix by persisting the Arc<Notify> into state.topic_sync_notifiers when registering the agent (alongside inserting topic_store into state.topic_stores) and update the agent deletion/cleanup path to remove both the agent's entries from state.topic_stores and state.topic_sync_notifiers (use the existing pattern used for agent_pools/memory_searches/task_stores to clone, remove(&agent_id), and store the Arc-wrapped maps); ensure the symbols referenced are topic_sync_notify, topic_store, state.topic_sync_notifiers, create_agent and the delete_agent cleanup block.
♻️ Duplicate comments (3)
src/agent/cortex.rs (3)
1218-1229: ⚠️ Potential issue | 🟡 Minor — Don't silently ignore topic-load failures in task context injection.

This path suppresses `topic_store.get` errors and silently drops context. Please handle `Err` explicitly (at least a debug/warn log with `topic_id`) instead of ignoring it.

As per coding guidelines: "Don't silently discard errors; use `let _ =` only on channel sends where the receiver may be dropped; handle, log, or propagate all other errors."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/agent/cortex.rs` around lines 1218 - 1229, the code currently ignores errors from deps.topic_store.get(topic_id) inside the task metadata injection loop; change the conditional to explicitly match the Result from deps.topic_store.get(topic_id).await so that on Err(e) you emit a debug/warn log that includes the topic_id and error (and then continue), and on Ok(Some(topic)) keep the existing behavior of checking topic.content and appending to topic_content; do not use silent suppression (no let _ =) and ensure you reference task.metadata.get("topic_ids"), deps.topic_store.get(topic_id), topic_content, topic.title and topic.content in your change.
1805-1839: ⚠️ Potential issue | 🟠 Major — Fix staleness detection and propagate search errors.

The staleness probe assumes recency ordering, but the inherited search mode may not honor `sort_by: Recent` as expected. Also, both search calls use `unwrap_or_default()`, masking storage/search failures as "no changes."

As per coding guidelines: "Don't silently discard errors; use `let _ =` only on channel sends where the receiver may be dropped; handle, log, or propagate all other errors."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/agent/cortex.rs` around lines 1805 - 1839, the staleness probe is unsafe because both searches call unwrap_or_default() (masking errors) and it compares RFC3339 strings assuming search honors SearchSort::Recent; change both memory_search.search calls to propagate errors instead of unwrap_or_default() (use ? or map_err) and ensure recency is determined from the returned Memory.created_at values directly (e.g., parse/compare chrono DateTime or sort results by created_at) rather than relying on SearchSort::Recent; update logic around newest_results/newest_memory_at/is_stale and the subsequent results variable to use the error-propagating search and explicit created_at comparison so failures surface and recency is correct.
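The two fixes the staleness comment asks for — derive the newest timestamp from the results themselves rather than trusting the backend's sort order, and propagate search errors instead of defaulting them away — can be sketched std-only. The `Memory` struct and function names below are simplified stand-ins, not the crate's real types:

```rust
// Illustrative sketch: Memory and its created_at field are stand-ins.
#[derive(Debug)]
struct Memory {
    created_at: String, // RFC3339, e.g. "2026-03-02T12:00:00Z"
}

/// Newest timestamp derived from the results, independent of sort order.
fn newest_created_at(results: &[Memory]) -> Option<&str> {
    // Lexicographic max is chronologically correct ONLY when every
    // timestamp uses the same UTC "Z" offset and identical precision;
    // parsing into a DateTime type is the robust alternative.
    results.iter().map(|m| m.created_at.as_str()).max()
}

/// Propagates the search error with `?` instead of unwrap_or_default(),
/// so a storage failure surfaces rather than reading as "no changes".
fn is_stale(
    results: Result<Vec<Memory>, String>,
    last_synced_at: &str,
) -> Result<bool, String> {
    let results = results?;
    Ok(newest_created_at(&results)
        .map(|newest| newest > last_synced_at)
        .unwrap_or(false))
}

fn main() {
    let results = vec![
        Memory { created_at: "2026-03-01T09:00:00Z".into() },
        Memory { created_at: "2026-03-02T12:00:00Z".into() },
    ];
    assert_eq!(newest_created_at(&results), Some("2026-03-02T12:00:00Z"));
    assert_eq!(is_stale(Ok(results), "2026-03-02T00:00:00Z"), Ok(true));
    assert!(is_stale(Err("db offline".into()), "2026-03-02T00:00:00Z").is_err());
}
```

With the max computed client-side, `SearchSort::Recent` becomes an optimization rather than a correctness dependency.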
376-386: ⚠️ Potential issue | 🟠 Major — Avoid invalidating every topic on each memory write.

Marking all topics stale for every `MemorySaved` event can trigger broad unnecessary topic re-synthesis and LLM load, even for unrelated topics.
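One way to scope the invalidation is to test the saved memory against each topic's criteria before marking it stale. The sketch below is hypothetical (field names and the crude keyword match are illustrative; the real system could reuse its search layer for the match):

```rust
// Hypothetical subset of a topic's gathering criteria.
struct Criteria {
    query: Option<String>,
    min_importance: Option<f32>,
}

// Hypothetical shape of a MemorySaved event payload.
struct SavedMemory {
    text: String,
    importance: f32,
}

/// Cheap pre-filter: does this memory plausibly affect the topic?
fn criteria_match(criteria: &Criteria, memory: &SavedMemory) -> bool {
    if let Some(min) = criteria.min_importance {
        if memory.importance < min {
            return false;
        }
    }
    match &criteria.query {
        // Crude keyword test standing in for a real relevance check.
        Some(q) => memory.text.to_lowercase().contains(&q.to_lowercase()),
        None => true,
    }
}

/// Only the topics whose criteria match get marked stale.
fn stale_topic_ids<'a>(
    topics: &'a [(String, Criteria)],
    memory: &SavedMemory,
) -> Vec<&'a str> {
    topics
        .iter()
        .filter(|(_, c)| criteria_match(c, memory))
        .map(|(id, _)| id.as_str())
        .collect()
}

fn main() {
    let topics = vec![
        ("release".to_string(), Criteria { query: Some("deploy".into()), min_importance: None }),
        ("hiring".to_string(), Criteria { query: Some("candidate".into()), min_importance: None }),
    ];
    let memory = SavedMemory { text: "Deploy window moved to Friday".into(), importance: 0.8 };
    assert_eq!(stale_topic_ids(&topics, &memory), vec!["release"]);
}
```

False positives here are cheap (one extra re-synthesis), so the pre-filter can err permissive while still avoiding the all-topics invalidation the comment flags.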
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/agent/cortex.rs`:
- Around line 1899-1905: The topic-synthesis agent is built without required
patterns: set max_turns, attach hooks, and pass history; update the
AgentBuilder/agent usage around
AgentBuilder::new(model).preamble(&system_prompt).build() and the subsequent
agent.prompt(&synthesis_prompt).await call so that the builder sets max_turns
(use ~3-5), and the prompt invocation uses .with_hook(SpacebotHook::new(...))
and .with_history(&mut history) before awaiting the prompt; ensure you reference
the existing synthesis_prompt and the mutable history variable when adding
.with_history and follow the same hook construction and max_turns value style
used in worker.rs, branch.rs, or channel.rs.
In `@src/main.rs`:
- Around line 2782-2787: spawn_topic_sync_loop currently returns on the first
run_topic_sync_loop error which makes topic sync non-resilient; modify
spawn_topic_sync_loop in src/agent/cortex.rs to wrap run_topic_sync_loop in a
loop that catches errors (and panics if necessary) and continues retrying with
an exponential backoff/jitter (or fixed delay) while logging each failure via
CortexLogger and tracing::error including agent_id and the error details; ensure
the loop accepts cancellation via the original shutdown/stop signal (use the
same agent.deps handle) so the task can be cleanly aborted, and preserve
existing behavior of creating CortexLogger::new(agent.db.sqlite.clone()) when
respawning attempts.
---
Outside diff comments:
In `@src/api/agents.rs`:
- Around line 731-848: The new topic_sync_notify created in create_agent (field
topic_sync_notify in AgentDeps) is not stored in state.topic_sync_notifiers and
delete_agent doesn't clean up topic-related maps; fix by persisting the
Arc<Notify> into state.topic_sync_notifiers when registering the agent
(alongside inserting topic_store into state.topic_stores) and update the agent
deletion/cleanup path to remove both the agent's entries from state.topic_stores
and state.topic_sync_notifiers (use the existing pattern used for
agent_pools/memory_searches/task_stores to clone, remove(&agent_id), and store
the Arc-wrapped maps); ensure the symbols referenced are topic_sync_notify,
topic_store, state.topic_sync_notifiers, create_agent and the delete_agent
cleanup block.
---
Duplicate comments:
In `@src/agent/cortex.rs`:
- Around line 1218-1229: The code currently ignores errors from
deps.topic_store.get(topic_id) inside the task metadata injection loop; change
the conditional to explicitly match the Result from
deps.topic_store.get(topic_id).await so that on Err(e) you emit a debug/warn log
that includes the topic_id and error (and then continue), and on Ok(Some(topic))
keep the existing behavior of checking topic.content and appending to
topic_content; do not use silent suppression (no let _ =) and ensure you
reference task.metadata.get("topic_ids"), deps.topic_store.get(topic_id),
topic_content, topic.title and topic.content in your change.
- Around line 1805-1839: The staleness probe is unsafe because both searches
call unwrap_or_default() (masking errors) and it compares RFC3339 strings
assuming search honors SearchSort::Recent; change both memory_search.search
calls to propagate errors instead of unwrap_or_default() (use ? or map_err) and
ensure recency is determined from the returned Memory.created_at values directly
(e.g., parse/compare chrono DateTime or sort results by created_at) rather than
relying on SearchSort::Recent; update logic around
newest_results/newest_memory_at/is_stale and the subsequent results variable to
use the error-propagating search and explicit created_at comparison so failures
surface and recency is correct.
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- src/agent/cortex.rs
- src/api/agents.rs
- src/api/state.rs
- src/api/topics.rs
- src/lib.rs
- src/main.rs
- tests/bulletin.rs
- tests/context_dump.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/lib.rs
- src/api/topics.rs
```rust
let agent = rig::agent::AgentBuilder::new(model)
    .preamble(&system_prompt)
    .build();

let content = agent
    .prompt(&synthesis_prompt)
    .await
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# First, inspect the exact code at lines 1899-1905 in cortex.rs
echo "=== Code at lines 1899-1905 ==="
sed -n '1895,1910p' src/agent/cortex.rs
echo -e "\n=== Context: function containing these lines ==="
# Find the function name containing line 1899
awk '/^fn|^pub fn|^async fn|^pub async fn/ {fname=$0; line=NR} NR==1899 && fname {print "Line", line ": " fname; exit}' src/agent/cortex.rs
echo -e "\n=== Broader context (lines 1880-1920) ==="
sed -n '1880,1920p' src/agent/cortex.rs
```

Repository: spacedriveapp/spacebot

Length of output: 2306

🏁 Script executed:

```shell
#!/bin/bash
# Search for other agent construction patterns in src/agent/ to identify the standard pattern
echo "=== AgentBuilder patterns in src/agent/ ==="
rg -B2 -A5 'AgentBuilder::new' src/agent/ --type=rust
echo -e "\n=== SpacebotHook usage in src/agent/ ==="
rg -B2 -A5 'SpacebotHook' src/agent/ --type=rust | head -50
echo -e "\n=== with_history usage in src/agent/ ==="
rg -B2 -A2 'with_history' src/agent/ --type=rust
```

Repository: spacedriveapp/spacebot

Length of output: 8799

🏁 Script executed:

```shell
#!/bin/bash
# Check if the function around line 1899 uses history or max_turns
echo "=== Search for 'sync_one_topic' function ==="
rg -B5 -A30 'fn sync_one_topic' src/agent/cortex.rs
```

Repository: spacedriveapp/spacebot

Length of output: 1270
Add required agent construction patterns: max_turns, hooks, and history to topic synthesis agent.

Lines 1899-1905 build a bare agent with only a preamble and immediately prompt. Per coding guidelines, all LLM processes in src/agent/ must:

- Explicitly set `max_turns` (should be around 3-5 for synthesis)
- Attach `SpacebotHook` via `.with_hook()` on the prompt call
- Pass history externally via `.with_history(&mut history)` on the prompt call

The current code skips all three. Align this with the patterns used by worker.rs, branch.rs, and channel.rs agents.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/agent/cortex.rs` around lines 1899 - 1905, The topic-synthesis agent is
built without required patterns: set max_turns, attach hooks, and pass history;
update the AgentBuilder/agent usage around
AgentBuilder::new(model).preamble(&system_prompt).build() and the subsequent
agent.prompt(&synthesis_prompt).await call so that the builder sets max_turns
(use ~3-5), and the prompt invocation uses .with_hook(SpacebotHook::new(...))
and .with_history(&mut history) before awaiting the prompt; ensure you reference
the existing synthesis_prompt and the mutable history variable when adding
.with_history and follow the same hook construction and max_turns value style
used in worker.rs, branch.rs, or channel.rs.
```rust
let topic_sync_handle = spacebot::agent::cortex::spawn_topic_sync_loop(
    agent.deps.clone(),
    spacebot::agent::cortex::CortexLogger::new(agent.db.sqlite.clone()),
);
cortex_handles.push(topic_sync_handle);
tracing::info!(agent_id = %agent_id, "cortex topic sync loop started");
```
Make topic sync loop self-healing after runtime errors.
The new loop is started correctly, but spawn_topic_sync_loop currently exits after the first run_topic_sync_loop error (it only logs and returns). A transient DB/LLM/network failure can permanently disable topic synthesis for that agent until restart.
♻️ Suggested resilience fix (in src/agent/cortex.rs)
```diff
 pub fn spawn_topic_sync_loop(deps: AgentDeps, logger: CortexLogger) -> tokio::task::JoinHandle<()> {
     tokio::spawn(async move {
-        if let Err(error) = run_topic_sync_loop(&deps, &logger).await {
-            tracing::error!(%error, "cortex topic sync loop exited with error");
-        }
+        let mut backoff = std::time::Duration::from_secs(1);
+        loop {
+            match run_topic_sync_loop(&deps, &logger).await {
+                Ok(()) => {
+                    tracing::warn!("cortex topic sync loop exited normally; restarting");
+                    backoff = std::time::Duration::from_secs(1);
+                }
+                Err(error) => {
+                    tracing::error!(%error, ?backoff, "cortex topic sync loop exited with error; restarting");
+                    tokio::time::sleep(backoff).await;
+                    backoff = std::cmp::min(backoff * 2, std::time::Duration::from_secs(30));
+                }
+            }
+        }
     })
 }
```

📝 Committable suggestion
```rust
pub fn spawn_topic_sync_loop(deps: AgentDeps, logger: CortexLogger) -> tokio::task::JoinHandle<()> {
    tokio::spawn(async move {
        let mut backoff = std::time::Duration::from_secs(1);
        loop {
            match run_topic_sync_loop(&deps, &logger).await {
                Ok(()) => {
                    tracing::warn!("cortex topic sync loop exited normally; restarting");
                    backoff = std::time::Duration::from_secs(1);
                }
                Err(error) => {
                    tracing::error!(%error, ?backoff, "cortex topic sync loop exited with error; restarting");
                    tokio::time::sleep(backoff).await;
                    backoff = std::cmp::min(backoff * 2, std::time::Duration::from_secs(30));
                }
            }
        }
    })
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main.rs` around lines 2782 - 2787, spawn_topic_sync_loop currently
returns on the first run_topic_sync_loop error which makes topic sync
non-resilient; modify spawn_topic_sync_loop in src/agent/cortex.rs to wrap
run_topic_sync_loop in a loop that catches errors (and panics if necessary) and
continues retrying with an exponential backoff/jitter (or fixed delay) while
logging each failure via CortexLogger and tracing::error including agent_id and
the error details; ensure the loop accepts cancellation via the original
shutdown/stop signal (use the same agent.deps handle) so the task can be cleanly
aborted, and preserve existing behavior of creating
CortexLogger::new(agent.db.sqlite.clone()) when respawning attempts.
The template file existed at prompts/en/cortex_topic_synthesis.md.j2 but was never added to the text registry or the Jinja environment, causing 'failed to render topic synthesis prompt' on every sync.
Summary
How it works
The cortex sync loop periodically checks each active topic for staleness (new memories matching its criteria since last sync). When stale, it gathers memories via the existing search infrastructure, synthesizes a new version via LLM, and stores the result with a version snapshot.
Channels compose their assigned topics into context (falling back to the global bulletin if none are assigned). Workers accept `topic_ids` on the `spawn_worker` tool to get relevant topic content prepended to their task prompt. The cortex chat session uses all active topics.

New files
- migrations/20260302000001_topics.sql — `topics` + `topic_versions` tables
- src/topics.rs + src/topics/store.rs — `Topic`, `TopicVersion`, `TopicStore` with CRUD, versioning, channel queries
- src/api/topics.rs
- prompts/en/cortex_topic_synthesis.md.j2
- interface/src/routes/AgentTopics.tsx

Modified
- src/agent/cortex.rs — `spawn_topic_sync_loop()`, `sync_all_topics()`, `sync_one_topic()`, `mark_all_stale()` on `MemorySaved`, topic injection in `pickup_one_ready_task()`
- src/agent/channel.rs — both prompt builders compose channel topics, available topics listed in status block
- src/agent/channel_dispatch.rs — `spawn_worker_from_state` accepts `topic_ids`, injects topic content
- src/agent/cortex_chat.rs — uses all active topics instead of just bulletin
- src/tools/spawn_worker.rs — `topic_ids` field on `SpawnWorkerArgs` with schema + description
- src/config.rs — `topic_sync_interval_secs` on `CortexConfig` (default 600s)
- src/lib.rs — `topic_store` on `AgentDeps`
- src/main.rs — wiring for topic store, API state, sync loop startup
- src/api/state.rs, src/api/agents.rs, src/api/server.rs — API plumbing
- src/memory/search.rs — `Serialize`/`Deserialize` on `SearchMode`/`SearchSort`
- interface/src/api/client.ts — topic types + 7 API methods
- interface/src/components/AgentTabs.tsx — "Topics" tab
- interface/src/router.tsx — `/agents/$agentId/topics` route
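The versioned-snapshot behavior described above — each successful synthesis stores an immutable snapshot and bumps the topic's current version, which `topic_versions` later pages through — reduces to an append-and-increment. A std-only sketch of the idea (names are hypothetical, not the real `TopicStore` API, which persists to SQLite):

```rust
// Hypothetical in-memory model of the topics/topic_versions split.
struct TopicVersion {
    version: u32,
    content: String,
}

struct VersionedTopic {
    current: u32,
    versions: Vec<TopicVersion>,
}

impl VersionedTopic {
    fn new() -> Self {
        Self { current: 0, versions: Vec::new() }
    }

    /// Each synthesis appends an immutable snapshot and bumps the
    /// current version; returns the new version number.
    fn record_synthesis(&mut self, content: String) -> u32 {
        self.current += 1;
        self.versions.push(TopicVersion { version: self.current, content });
        self.current
    }

    /// The latest snapshot is what channels and workers get injected.
    fn latest(&self) -> Option<&TopicVersion> {
        self.versions.last()
    }
}

fn main() {
    let mut topic = VersionedTopic::new();
    topic.record_synthesis("v1 summary".into());
    let v = topic.record_synthesis("v2 summary".into());
    assert_eq!(v, 2);
    assert_eq!(topic.latest().unwrap().content, "v2 summary");
    assert_eq!(topic.versions.len(), 2);
}
```

Keeping snapshots immutable is what makes the version-history endpoint and the interface's versions tab cheap: reads never race with the sync loop's writes to older rows.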