Conversation
- Replace PgPool with cfg-gated DbPool type alias across all services
- Quote "transaction" column name (reserved word in SQLite)
- Replace NOW() with CURRENT_TIMESTAMP
- Remove ::text::varchar casts (PostgreSQL-specific)
- Replace row-value comparisons with expanded OR expressions
- Generate UUIDs in application code instead of gen_random_uuid()
- Cfg-gate pg_advisory_xact_lock (SQLite serializes writes naturally)
- Replace make_interval with Rust-side chrono::Duration cooldown check
- Change IpNetwork to Option<String> for cross-DB compatibility
---
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review; this behavior can be configured in the CodeRabbit settings.
Walkthrough

Add feature-flagged multi-backend DB support (SQLite default and optional Postgres) with a DbPool abstraction, add per-backend migrations (including a full SQLite initial schema), change IP storage to strings, move issue/event UUID generation into application code, make advisory locks Postgres-only, and update many service, route, test, CI, and docs files to be DB-agnostic.

Changes
```mermaid
sequenceDiagram
    participant Client as Client
    participant Server as Actix Server
    participant Handler as Route/Handler
    participant Service as Application Service
    participant Worker as Digest Worker
    participant DB as Database (Postgres/SQLite)
    Client->>Server: HTTP request (ingest / auth / alert)
    Server->>Handler: route to handler
    Handler->>Service: call with &DbPool
    Service->>DB: execute SQL via DbPool
    DB-->>Service: return rows
    opt create issue/event
        Service->>Worker: enqueue/process (generate UUIDs)
        Worker->>DB: begin transaction
        alt feature postgres
            Worker->>DB: acquire advisory lock
        end
        Worker->>DB: insert/select issue, grouping, event (id included)
        DB-->>Worker: commit/return
    end
    Service-->>Handler: response
    Handler-->>Client: HTTP response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/server/src/services/users.rs (1)
16-26: ⚠️ Potential issue | 🟡 Minor: Document SQLite version requirement for system SQLite deployments.

The `INSERT ... RETURNING` syntax requires SQLite 3.35.0+ (released March 2021). This only applies if using the optional `sqlite` feature with unbundled system SQLite; the default PostgreSQL configuration and bundled SQLite (3.45.1+) already meet this requirement. If supporting system SQLite deployments, document the minimum version in the project README.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/services/users.rs` around lines 16 - 26, The SQL uses INSERT ... RETURNING (see the query that binds to variable `user` in users.rs), which requires system SQLite >= 3.35.0 when the optional `sqlite` feature uses an unbundled SQLite; add a short note to the project README under the database/installation section stating that system SQLite deployments must be at least 3.35.0 (or recommend using the bundled SQLite), and mention this requirement applies only to the `sqlite` feature while PostgreSQL and the bundled SQLite already meet it.
🧹 Nitpick comments (3)
apps/server/src/digest/worker.rs (1)
188-195: Consider relocating the `DbBackend` type alias for clarity.

The `DbBackend` type alias (lines 189-192) is module-scoped but placed between the `#[allow]` attribute and the function definition, which can be misleading. Consider moving it near the imports or after the public function definitions to clarify that it's a module-level alias used by multiple internal functions.

Suggested relocation:

```diff
 use crate::services::{
     calculate_grouping_key, get_denormalized_fields, hash_grouping_key, AlertService,
     DenormalizedFields, EventService, ProjectService, RateLimitService,
 };
+
+#[cfg(feature = "postgres")]
+type DbBackend = sqlx::Postgres;
+#[cfg(feature = "sqlite")]
+type DbBackend = sqlx::Sqlite;

 /// Processes an event from temporary storage
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/digest/worker.rs` around lines 188 - 195, Move the module-scoped DbBackend type alias out from between the #[allow(clippy::too_many_arguments)] attribute and the private function declaration find_or_create_issue_and_grouping_inner so its placement clearly indicates module scope; relocate the alias (type DbBackend = sqlx::Postgres / sqlx::Sqlite) to a top-of-file area near the other imports or to a dedicated type/aliases section after public function definitions, ensuring all internal functions like find_or_create_issue_and_grouping_inner still reference DbBackend without changing its definition.

apps/server/src/services/event.rs (1)
229-238: Consider using `EXISTS` for better query performance.

The `COUNT(*)` approach works but scans all matching rows. An `EXISTS` subquery would short-circuit after finding the first match, which is more efficient for existence checks.

Performance improvement suggestion:

```diff
 pub async fn exists(pool: &DbPool, project_id: i32, event_id: Uuid) -> AppResult<bool> {
-    let count: i64 = sqlx::query_scalar(
-        "SELECT COUNT(*) FROM events WHERE project_id = $1 AND event_id = $2",
+    let exists: bool = sqlx::query_scalar(
+        "SELECT EXISTS(SELECT 1 FROM events WHERE project_id = $1 AND event_id = $2)",
     )
     .bind(project_id)
     .bind(event_id)
     .fetch_one(pool)
     .await?;
-    Ok(count > 0)
+    Ok(exists)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/services/event.rs` around lines 229 - 238, The exists function currently uses COUNT(*) which scans rows; change the SQL to use an EXISTS subquery so it short-circuits: replace the query in pub async fn exists(pool: &DbPool, project_id: i32, event_id: Uuid) -> AppResult<bool> with a SELECT EXISTS(SELECT 1 FROM events WHERE project_id = $1 AND event_id = $2) and fetch the boolean result (using sqlx::query_scalar to produce a bool) and return that value unchanged; keep the function signature and error handling the same.

apps/server/src/services/alert.rs (1)
680-726: `process_retry_queue` is defined but never invoked.

Per context snippet 2 (main.rs), there's no background task or scheduler that calls `process_retry_queue`. Failed alerts will remain in `pending` status indefinitely. The `#[allow(dead_code)]` annotation confirms this is known, but consider adding a TODO or opening an issue to track the implementation of the retry worker.

Would you like me to open an issue to track implementing the background retry worker?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/services/alert.rs` around lines 680 - 726, process_retry_queue is defined but never invoked (note the #[allow(dead_code)]), so pending alerts remain stuck; either wire it up as a periodic background worker in your application entrypoint (e.g., in main.rs spawn a tokio task or use an async scheduler that calls services::alert::process_retry_queue(pool, max_retries) on an interval and handles/logs errors) or explicitly track the work by removing #[allow(dead_code)] and adding a TODO + issue reference (or create an issue) that describes implementing the retry worker; update main.rs to import and call process_retry_queue (or leave the TODO/issue and a brief comment near #[allow(dead_code)] if you cannot implement it now).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/server/Cargo.toml`:
- Around line 136-139: Add compile-time guards to apps/server/src/db/mod.rs to
prevent both features "postgres" and "sqlite" from being enabled simultaneously
(and to require at least one). Specifically, insert cfg-based compile_error
checks: one that triggers when both features "postgres" and "sqlite" are active,
and one that triggers when neither is active, so the duplicate DbPool type
definitions are prevented; update the top of the module near the existing
#[cfg(feature = "...")] sections to include these checks.
- Around line 72-74: The PostgreSQL schema still defines remote_addr as INET
while the Rust model uses Option<String>, causing inconsistent validation;
update the DB schema/migration to store remote_addr as TEXT (or VARCHAR) in
Postgres to match SQLite and the existing Rust type. Modify the migration that
creates or alters the table containing the remote_addr column (referencing the
column name remote_addr and the code that reads realip_remote_addr()), add an
ALTER TABLE migration to change INET → TEXT (and a down migration if needed),
run/verify sqlx migrations and tests to ensure Option<String> continues to map
correctly, and remove the stale ipnetwork comment from Cargo.toml area.
In `@apps/server/migrations/sqlite/20260119000000_initial_schema.up.sql`:
- Around line 16-29: Add the missing is_deleted column to the projects table in
the CREATE TABLE statement: add an is_deleted boolean column with a NOT NULL
default (for SQLite prefer using INTEGER NOT NULL DEFAULT 0 to represent FALSE)
so the projects table definition includes "is_deleted INTEGER NOT NULL DEFAULT
0"; ensure the column name is exactly is_deleted to match code that queries or
filters on projects.is_deleted.
- Around line 16-29: The SQLite migration for the projects table diverges from
PostgreSQL: projects.name is marked UNIQUE here and the is_deleted column is
missing, breaking parity and soft-delete behavior; update the SQLite migration
(projects table) to match PostgreSQL by removing the UNIQUE constraint on name
if PostgreSQL does not enforce it and adding the is_deleted column as BOOLEAN
NOT NULL DEFAULT FALSE, or alternatively document and justify the intentional
divergence; ensure you reference and modify the CREATE TABLE projects definition
and related columns (name, is_deleted, quota_exceeded_until,
quota_exceeded_reason, next_quota_check) so both backends share the same
uniqueness and soft-delete semantics.
In `@apps/server/src/services/alert.rs`:
- Around line 413-432: The cooldown check introduces a TOCTOU race: currently
code reads rule.last_triggered_at and then unconditionally runs sqlx::query to
UPDATE alert_rules SET last_triggered_at = CURRENT_TIMESTAMP, allowing
concurrent triggers to bypass cooldown; replace the two-step check with a single
atomic conditional update on the alert_rules row (e.g., UPDATE ... SET
last_triggered_at = CURRENT_TIMESTAMP WHERE id = $1 AND (last_triggered_at IS
NULL OR last_triggered_at <= CURRENT_TIMESTAMP - INTERVAL '<cooldown>' ) ),
execute that via sqlx::query/.bind(rule.id) and then treat rows_affected()==0 as
"in cooldown or not found" (skip sending alert), ensuring rule.cooldown_minutes
(or cooldown_minutes cast) is used to compute the interval server-side so no
TOCTOU window remains.
---
Outside diff comments:
In `@apps/server/src/services/users.rs`:
- Around line 16-26: The SQL uses INSERT ... RETURNING (see the query that binds
to variable `user` in users.rs), which requires system SQLite >= 3.35.0 when the
optional `sqlite` feature uses an unbundled SQLite; add a short note to the
project README under the database/installation section stating that system
SQLite deployments must be at least 3.35.0 (or recommend using the bundled
SQLite), and mention this requirement applies only to the `sqlite` feature while
PostgreSQL and the bundled SQLite already meet it.
---
Nitpick comments:
In `@apps/server/src/digest/worker.rs`:
- Around line 188-195: Move the module-scoped DbBackend type alias out from
between the #[allow(clippy::too_many_arguments)] attribute and the private
function declaration find_or_create_issue_and_grouping_inner so its placement
clearly indicates module scope; relocate the alias (type DbBackend =
sqlx::Postgres / sqlx::Sqlite) to a top-of-file area near the other imports or
to a dedicated type/aliases section after public function definitions, ensuring
all internal functions like find_or_create_issue_and_grouping_inner still
reference DbBackend without changing its definition.
In `@apps/server/src/services/alert.rs`:
- Around line 680-726: process_retry_queue is defined but never invoked (note
the #[allow(dead_code)]), so pending alerts remain stuck; either wire it up as a
periodic background worker in your application entrypoint (e.g., in main.rs
spawn a tokio task or use an async scheduler that calls
services::alert::process_retry_queue(pool, max_retries) on an interval and
handles/logs errors) or explicitly track the work by removing
#[allow(dead_code)] and adding a TODO + issue reference (or create an issue)
that describes implementing the retry worker; update main.rs to import and call
process_retry_queue (or leave the TODO/issue and a brief comment near
#[allow(dead_code)] if you cannot implement it now).
In `@apps/server/src/services/event.rs`:
- Around line 229-238: The exists function currently uses COUNT(*) which scans
rows; change the SQL to use an EXISTS subquery so it short-circuits: replace the
query in pub async fn exists(pool: &DbPool, project_id: i32, event_id: Uuid) ->
AppResult<bool> with a SELECT EXISTS(SELECT 1 FROM events WHERE project_id = $1
AND event_id = $2) and fetch the boolean result (using sqlx::query_scalar to
produce a bool) and return that value unchanged; keep the function signature and
error handling the same.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fb0c2fd8-cb66-458f-8357-a14f369ab083
⛔ Files ignored due to path filters (1)
`apps/server/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (36)
- apps/server/Cargo.toml
- apps/server/migrations/postgres/20260119000000_initial_schema.down.sql
- apps/server/migrations/postgres/20260119000000_initial_schema.up.sql
- apps/server/migrations/postgres/20260119000001_add_projects_name_unique.down.sql
- apps/server/migrations/postgres/20260119000001_add_projects_name_unique.up.sql
- apps/server/migrations/postgres/20260119000002_create_auth_tokens.down.sql
- apps/server/migrations/postgres/20260119000002_create_auth_tokens.up.sql
- apps/server/migrations/postgres/20260119000003_create_issues.down.sql
- apps/server/migrations/postgres/20260119000003_create_issues.up.sql
- apps/server/migrations/postgres/20260119000004_create_groupings.down.sql
- apps/server/migrations/postgres/20260119000004_create_groupings.up.sql
- apps/server/migrations/postgres/20260119000005_create_events.down.sql
- apps/server/migrations/postgres/20260119000005_create_events.up.sql
- apps/server/migrations/postgres/20260119000006_add_rate_limiting.down.sql
- apps/server/migrations/postgres/20260119000006_add_rate_limiting.up.sql
- apps/server/migrations/postgres/20260119000007_remove_soft_delete.down.sql
- apps/server/migrations/postgres/20260119000007_remove_soft_delete.up.sql
- apps/server/migrations/postgres/20260120000000_create_users.down.sql
- apps/server/migrations/postgres/20260120000000_create_users.up.sql
- apps/server/migrations/postgres/20260121000000_create_alerting.down.sql
- apps/server/migrations/postgres/20260121000000_create_alerting.up.sql
- apps/server/migrations/sqlite/20260119000000_initial_schema.down.sql
- apps/server/migrations/sqlite/20260119000000_initial_schema.up.sql
- apps/server/src/auth/session.rs
- apps/server/src/bootstrap.rs
- apps/server/src/db/mod.rs
- apps/server/src/digest/worker.rs
- apps/server/src/models/event.rs
- apps/server/src/routes/auth.rs
- apps/server/src/services/alert.rs
- apps/server/src/services/auth_token.rs
- apps/server/src/services/event.rs
- apps/server/src/services/issue.rs
- apps/server/src/services/project.rs
- apps/server/src/services/rate_limit.rs
- apps/server/src/services/users.rs
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/server/src/services/alert.rs`:
- Around line 413-424: The cooldown/retry SQL comparisons must be feature-gated
so SQLite uses datetime/string comparisons and Postgres uses native TIMESTAMPTZ
comparisons: convert the current queries around cooldown_threshold (variable
cooldown_threshold and the sqlx::query that updates alert_rules), the retry
scheduling query (the query around lines ~604–620), and the retry fetch
predicate (~688–689) into two variants using cfg(feature = "sqlite") and
cfg(feature = "postgres") (or cfg(not(feature = "sqlite"))): for SQLite use SQL
that wraps timestamps with datetime(last_triggered_at) < datetime($2) (or
julianday(...) < julianday($2)) and bind an ISO8601 string (e.g.,
cooldown_threshold.to_rfc3339() / .to_string()), while for Postgres keep the
current parameter binding of chrono types and plain < / <= comparisons; update
the functions/queries that reference cooldown_threshold, retry scheduling, and
retry fetch predicate so each backend uses its respective predicate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 783a2e0c-11dd-45a1-b99c-41e2b3cc940d
📒 Files selected for processing (4)
- apps/server/.gitignore
- apps/server/migrations/sqlite/20260119000000_initial_schema.up.sql
- apps/server/src/db/mod.rs
- apps/server/src/services/alert.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/server/migrations/sqlite/20260119000000_initial_schema.up.sql
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/server/src/services/rate_limit.rs (1)
147-208: ⚠️ Potential issue | 🟠 Major: Avoid stale-count writes in quota updates under concurrency.

On Line 147 and Line 205, `update_installation_quota` computes `new_count` from a read and writes it back as an absolute value. Concurrent workers can overwrite each other and undercount events. On Line 283, `next_quota_check` in project updates is also derived from a potentially stale `new_count`.

Suggested fix (atomic increment semantics):

```diff
 UPDATE installation
-SET digested_event_count = $1,
+SET digested_event_count = digested_event_count + 1,
     quota_exceeded_until = $2,
     quota_exceeded_reason = $3,
-    next_quota_check = $4
+    next_quota_check = digested_event_count + 1 + $4
 WHERE id = 1
 ...
-.bind(new_count)
 .bind(exceeded_until)
 .bind(exceeded_reason)
-.bind(new_count + check_again_after)
+.bind(check_again_after)
 .execute(pool)
 .await?;
 ...
-sqlx::query("UPDATE installation SET digested_event_count = $1 WHERE id = 1")
-    .bind(new_count)
+sqlx::query(
+    "UPDATE installation SET digested_event_count = digested_event_count + 1 WHERE id = 1",
+)
 .execute(pool)
 .await?;
```

```diff
 UPDATE projects
 SET digested_event_count = digested_event_count + 1,
     quota_exceeded_until = $2,
     quota_exceeded_reason = $3,
-    next_quota_check = $4
+    next_quota_check = digested_event_count + 1 + $4
 WHERE id = $1
 ...
-.bind(new_count + check_again_after)
+.bind(check_again_after)
 .execute(pool)
 .await?;
```

Also applies to: 228-237, 270-284
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/services/rate_limit.rs` around lines 147 - 208, The code reads digested_event_count into new_count and writes that absolute value back, causing lost increments under concurrency; change update_installation_quota (and the project quota update code that sets next_quota_check) to perform atomic updates in the database instead of writing a stale new_count: use an UPDATE that increments digested_event_count (e.g., SET digested_event_count = digested_event_count + 1) and either RETURNING the updated counts or run the logic inside a serializing SQL transaction / SELECT ... FOR UPDATE so next_quota_check and quota_exceeded_* are computed from the current DB values (use the returned values from the atomic UPDATE to compute check_again_after and bind those into the same UPDATE/another UPDATE executed in the same transaction). Ensure you update the code paths referencing new_count, next_quota_check, and functions like count_global_events_since to rely on the DB-returned values rather than the prior in-memory read.

apps/server/src/services/alert.rs (1)
62-92: ⚠️ Potential issue | 🟠 Major: Use backend-agnostic unique-violation checks for portable database support.

Lines 87, 132, and 230 use `db_err.constraint() == Some("constraint_name")` to detect unique constraint violations. These named constraints exist in Postgres but not in SQLite, so duplicate-key errors fall through as generic `AppError::Database` instead of `AppError::Conflict` when using SQLite.

Replace with `db_err.is_unique_violation()`, available in sqlx 0.8's `DatabaseError` trait, to handle both backends correctly.

Suggested fix:

```diff
 .await
 .map_err(|e| {
     if let sqlx::Error::Database(ref db_err) = e {
-        if db_err.constraint() == Some("notification_channels_name_key") {
+        if db_err.is_unique_violation() {
             return AppError::Conflict(format!("Channel '{}' already exists", input.name));
         }
     }
     AppError::Database(e)
 })?;
```

```diff
 .await
 .map_err(|e| {
     if let sqlx::Error::Database(ref db_err) = e {
-        if db_err.constraint() == Some("alert_rules_project_id_alert_type_key") {
+        if db_err.is_unique_violation() {
             return AppError::Conflict(format!(
                 "Alert rule for type '{}' already exists in this project",
                 input.alert_type
             ));
         }
     }
     AppError::Database(e)
 })?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/services/alert.rs` around lines 62 - 92, The DB error handling in create_channel (and the other places flagged) relies on Postgres-only constraint name checks; replace the constraint() == Some("...") check with the backend-agnostic db_err.is_unique_violation() from sqlx's DatabaseError trait so unique-key violations map to AppError::Conflict; update the closure in create_channel (and the other similar map_err handlers) to test db_err.is_unique_violation() and return AppError::Conflict(format!("Channel '{}' already exists", input.name)), and add the DatabaseError trait import (e.g. use sqlx::error::DatabaseError) so is_unique_violation() is available.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@apps/server/src/services/alert.rs`:
- Around line 62-92: The DB error handling in create_channel (and the other
places flagged) relies on Postgres-only constraint name checks; replace the
constraint() == Some("...") check with the backend-agnostic
db_err.is_unique_violation() from sqlx's DatabaseError trait so unique-key
violations map to AppError::Conflict; update the closure in create_channel (and
the other similar map_err handlers) to test db_err.is_unique_violation() and
return AppError::Conflict(format!("Channel '{}' already exists", input.name)),
and add the DatabaseError trait import (e.g. use sqlx::error::DatabaseError) so
is_unique_violation() is available.
In `@apps/server/src/services/rate_limit.rs`:
- Around line 147-208: The code reads digested_event_count into new_count and
writes that absolute value back, causing lost increments under concurrency;
change update_installation_quota (and the project quota update code that sets
next_quota_check) to perform atomic updates in the database instead of writing a
stale new_count: use an UPDATE that increments digested_event_count (e.g., SET
digested_event_count = digested_event_count + 1) and either RETURNING the
updated counts or run the logic inside a serializing SQL transaction / SELECT
... FOR UPDATE so next_quota_check and quota_exceeded_* are computed from the
current DB values (use the returned values from the atomic UPDATE to compute
check_again_after and bind those into the same UPDATE/another UPDATE executed in
the same transaction). Ensure you update the code paths referencing new_count,
next_quota_check, and functions like count_global_events_since to rely on the
DB-returned values rather than the prior in-memory read.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 194dd101-dde5-4375-b09a-b467012d3e01
📒 Files selected for processing (2)
- apps/server/src/services/alert.rs
- apps/server/src/services/rate_limit.rs
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/server/src/services/project.rs (1)
29-36: ⚠️ Potential issue | 🟠 Major: Validate pagination inputs before offset math.

`page`/`per_page` can be zero/negative, and `(page - 1) * per_page` can overflow for large values, producing invalid `OFFSET`/`LIMIT` behavior.

Suggested fix:

```diff
 pub async fn list_offset(
     pool: &DbPool,
     order: SortOrder,
     page: i64,
     per_page: i64,
 ) -> AppResult<(Vec<Project>, i64)> {
-    let offset = (page - 1) * per_page;
+    if page < 1 {
+        return Err(AppError::Validation("page must be >= 1".to_string()));
+    }
+    if per_page < 1 {
+        return Err(AppError::Validation("per_page must be >= 1".to_string()));
+    }
+    let offset = page
+        .checked_sub(1)
+        .and_then(|p| p.checked_mul(per_page))
+        .ok_or_else(|| AppError::Validation("pagination values are too large".to_string()))?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/services/project.rs` around lines 29 - 36, The pagination calculation in list_offset uses (page - 1) * per_page without validating page/per_page or checking for arithmetic overflow; update list_offset to validate that page and per_page are > 0 and use checked_sub/checked_mul (or convert to unsigned and use saturating arithmetic) to compute offset safely, returning an AppResult::Err (e.g., an invalid input error) when validation or checked math fails, and otherwise proceed with the computed offset and original query logic.
🧹 Nitpick comments (1)
apps/server/src/services/project.rs (1)
245-263: Use a set for slug membership checks to avoid repeated linear scans.

The current `Vec::contains` calls inside the loop are O(n) each; converting once to `HashSet` keeps checks O(1) and simplifies scaling with many similar slugs.

Suggested refactor:

```diff
+use std::collections::HashSet;
@@
     let similar_slugs: Vec<String> =
         sqlx::query_scalar("SELECT slug FROM projects WHERE slug LIKE $1 || '%'")
             .bind(&base_slug)
             .fetch_all(pool)
             .await?;
+    let similar_slugs: HashSet<String> = similar_slugs.into_iter().collect();
@@
     if !similar_slugs.contains(&base_slug) {
         return Ok(base_slug);
     }
@@
     if !similar_slugs.contains(&candidate) {
         return Ok(candidate);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/services/project.rs` around lines 245 - 263, Change the repeated O(n) Vec::contains checks to O(1) lookups by converting the `similar_slugs: Vec<String>` returned from the sqlx query into a HashSet (e.g., `let similar_slugs_set: HashSet<String> = similar_slugs.into_iter().collect();`) immediately after the query; then replace both `similar_slugs.contains(&base_slug)` and `similar_slugs.contains(&candidate)` with `similar_slugs_set.contains(&...)` in the same function/loop where `base_slug`, `candidate`, and `counter` are used.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@apps/server/src/services/project.rs`:
- Around line 29-36: The pagination calculation in list_offset uses (page - 1) *
per_page without validating page/per_page or checking for arithmetic overflow;
update list_offset to validate that page and per_page are > 0 and use
checked_sub/checked_mul (or convert to unsigned and use saturating arithmetic)
to compute offset safely, returning an AppResult::Err (e.g., an invalid input
error) when validation or checked math fails, and otherwise proceed with the
computed offset and original query logic.
---
Nitpick comments:
In `@apps/server/src/services/project.rs`:
- Around line 245-263: Change the repeated O(n) Vec::contains checks to O(1)
lookups by converting the `similar_slugs: Vec<String>` returned from the sqlx
query into a HashSet (e.g., `let similar_slugs_set: HashSet<String> =
similar_slugs.into_iter().collect();`) immediately after the query; then replace
both `similar_slugs.contains(&base_slug)` and
`similar_slugs.contains(&candidate)` with `similar_slugs_set.contains(&...)` in
the same function/loop where `base_slug`, `candidate`, and `counter` are used.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fb07d9f8-cb5a-416a-ba1b-2aaed354fdee
📒 Files selected for processing (3)
- apps/server/src/services/alert.rs
- apps/server/src/services/project.rs
- apps/server/src/services/rate_limit.rs
---
Hello @AbianS! I went through several iterations of fixing the CodeRabbit remarks. I think I fixed all the "good" ones; those that remain are, I think, more personal preference than real improvements. Up to you, I can easily continue until zero remarks are left. Greetings,
---
Hey @kervel! Thanks a lot for this PR, this is exactly what I had planned to implement in the future. The DbPool abstraction looks really good to me. Once I finish reviewing it I'll merge it. Cheers! |
**Database Schema**
- Add Postgres migration to change remote_addr from INET to TEXT, preventing
runtime type errors when upgrading existing deployments
- Remove is_deleted column from SQLite projects migration (was removed from
Postgres in migration 00007, keeping schemas in sync)
- Add email format CHECK constraint to SQLite users table to match Postgres
email_format constraint validation behavior
**Query Compatibility**
- Fix SQLite timestamp comparisons in rate_limit.rs: replace manual
to_rfc3339() string binding with naive_utc(), allowing sqlx to use SQLite's
native YYYY-MM-DD HH:MM:SS format (avoids RFC3339 timezone handling ambiguity
in older SQLite versions)
- Fix SQLite timestamp comparisons in alert.rs: replace manual to_rfc3339()
binding with naive_utc() for consistent datetime comparison across backends
**Test Infrastructure**
- Rewrite tests/common/db.rs with #[cfg] feature gates:
* Postgres: uses testcontainers for real PostgreSQL container
* SQLite: uses in-memory database with max_connections(1) to prevent
SQLITE_LOCKED deadlocks during concurrent test execution
- Remove inline TestDb struct definitions from all 12 integration test files;
now use crate::common::TestDb for centralized database setup
- Update all helper function signatures from &PgPool to &rustrak::db::DbPool
- Adapt e2e tests to support both backends by:
* Adding mod common to e2e_tests.rs to share TestDb
* Replacing inline TestDb with crate::common::TestDb
* Changing pool field from PgPool to DbPool
**Result**
- 220 tests pass under --features sqlite (91 integration + 129 unit, --test-threads=1)
- 0 test failures with SQLite backend
- Postgres backend remains fully functional
BREAKING CHANGE: The `latest` Docker image now uses SQLite instead of PostgreSQL
- Default feature changed from postgres to sqlite in Cargo.toml
- Dockerfile now accepts ARG FEATURES parameter for compile-time backend selection
- CI/CD pipeline builds and publishes two separate Docker image variants:
  * `latest` / `vX.Y.Z` → SQLite (no external database required)
  * `postgres` / `vX.Y.Z-postgres` → PostgreSQL (production scale)
- DATABASE_URL is now optional for SQLite, defaults to sqlite:///data/rustrak.db
- Fixed SQLite connection to create database file if missing (create_if_missing=true)
- Updated documentation, README, and examples to reflect SQLite as primary option
- Added "Database Backends" configuration page comparing SQLite vs PostgreSQL

Users currently deploying `abians7/rustrak-server:latest` with PostgreSQL must update to: image: abians7/rustrak-server:postgres

No data migration needed; only the image tag changes. See migration guide in CHANGELOG.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/server/tests/integration/bootstrap_test.rs (1)
14-16: ⚠️ Potential issue | 🟡 Minor: Environment variable manipulation is not thread-safe.

Multiple tests in this file use `env::set_var` and `env::remove_var`, which are not thread-safe. When tests run in parallel (the default for `#[tokio::test]`), they can interfere with each other's `CREATE_SUPERUSER` values.

Consider using `#[serial_test::serial]` (already in dev-dependencies) to ensure these tests run sequentially:

♻️ Suggested fix

```diff
+use serial_test::serial;
+
 #[tokio::test]
+#[serial]
 async fn test_bootstrap_creates_superuser_when_empty() {
```

Apply `#[serial]` to all tests that manipulate `CREATE_SUPERUSER`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/tests/integration/bootstrap_test.rs` around lines 14 - 16, Tests in bootstrap_test.rs set and remove the CREATE_SUPERUSER environment variable using env::set_var / env::remove_var which is not thread-safe; annotate every test function that manipulates CREATE_SUPERUSER with the serial_test attribute so they run sequentially (e.g., add #[serial_test::serial] directly above the existing #[tokio::test] on those test functions that call env::set_var or env::remove_var), ensuring the dev-dependency serial_test is used and no other changes to the test logic are required.

apps/server/src/services/alert.rs (1)
623-638: ⚠️ Potential issue | 🟡 Minor: Add feature-gated DateTime conversion for SQLite compatibility.

The `next_retry` DateTime binding (line 636) should use a `.naive_utc()` conversion for SQLite, consistent with the pattern at lines 416-440 where `cooldown_threshold` is similarly handled. Without this, the raw DateTime may not bind correctly to SQLite's TEXT timestamp storage format.

Suggested fix (hoisting the value into a cfg-gated binding, since `#[cfg]` cannot be applied to a method call in the middle of a chain):

```diff
+#[cfg(feature = "postgres")]
+let next_retry_bound = next_retry;
+#[cfg(feature = "sqlite")]
+let next_retry_bound = next_retry.naive_utc();
+
 sqlx::query(
     r#"
     UPDATE alert_history
     SET status = 'pending',
         attempt_count = $2,
         error_message = $3,
         http_status_code = $4,
         next_retry_at = $5
     WHERE id = $1
     "#,
 )
 .bind(history_id.0)
 .bind(attempt_count)
 .bind(&result.error_message)
 .bind(result.http_status.map(|s| s as i32))
-.bind(next_retry)
+.bind(next_retry_bound)
 .execute(pool)
 .await?;
```

Alternatively, wrap the entire UPDATE in feature-gated blocks as shown at lines 416-440.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/src/services/alert.rs` around lines 623 - 638, The UPDATE query setting next_retry binds a DateTime<Utc> directly (bound via next_retry) which breaks SQLite; change the binding to use .naive_utc() under the sqlite feature (or feature-gate the binding like the cooldown_threshold pattern) so the value is converted to NaiveDateTime before binding; update the call that binds next_retry in the same block that updates alert_history (the .bind(next_retry) after .bind(result.http_status.map(|s| s as i32))) to use next_retry.naive_utc() when compiling for sqlite (or surround alternative bindings with #[cfg(feature = "sqlite")] / #[cfg(not(feature = "sqlite"))] as done for cooldown_threshold) so SQLite receives a NaiveDateTime-compatible value.
🧹 Nitpick comments (6)
apps/server/tests/integration/ingest_test.rs (1)
19-26: Consider making the test config database URL backend-agnostic.

The `DatabaseConfig` still contains a hardcoded Postgres URL (`"postgres://test:test@localhost/test"`). While this doesn't break tests, since the actual pool is injected via `web::Data::new(db.pool.clone())`, it creates an inconsistency with the multi-backend approach and could cause confusion for future maintainers, or issues if any code path attempts to use `config.database.url` directly.

💡 Suggested approach

Consider either:

- Feature-gating the URL in the test config:

  ```rust
  #[cfg(feature = "postgres")]
  url: "postgres://test:test@localhost/test".to_string(),
  #[cfg(feature = "sqlite")]
  url: "sqlite::memory:".to_string(),
  ```

- Or using a placeholder that clearly indicates it's unused:

  ```rust
  url: "unused://test-pool-injected-separately".to_string(),
  ```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/tests/integration/ingest_test.rs` around lines 19 - 26, The DatabaseConfig in the test uses a hardcoded Postgres URL ("postgres://...") which is misleading for multi-backend tests; update the test config where DatabaseConfig is constructed so config.database.url is backend-agnostic — either add cfg-gated values for postgres vs sqlite (using #[cfg(feature = "postgres")] and #[cfg(feature = "sqlite")] around the url field) or replace the URL with a clear placeholder string (e.g., "unused://pool-injected") to indicate the URL is not used; change the construction of DatabaseConfig in the test to use one of these approaches and keep the rest of the test (and injected db.pool) unchanged.

apps/server/Dockerfile (1)
37-38: Consider documenting runtime library requirements for the PostgreSQL variant.

The comment on line 37 notes SQLite is statically compiled and needs no runtime deps. For the PostgreSQL variant, confirm that `distroless/cc-debian12` includes the required TLS libraries (rustls should be fine, as it's pure Rust). This appears correct since you're using `tls-rustls` in the sqlx features.
Verify each finding against the current code and only fix it if needed. In `@apps/server/Dockerfile` around lines 37 - 38, The comment about SQLite runtime deps should also document the PostgreSQL variant's runtime library requirements: update the Dockerfile comment near the FROM gcr.io/distroless/cc-debian12:nonroot line to state that the distroless image includes the TLS libraries required by the sqlx postgres variant when using the tls-rustls feature (or note if additional C libs are required), and if any libs are missing confirm and either switch to a distroless image that includes them or add a short note explaining which extra packages are needed at runtime for Postgres connections.

README.md (1)
188-207: Minor inconsistency: the architecture diagram shows only PostgreSQL.

The ASCII diagram on line 192 shows `PostgreSQL` explicitly, but line 207 in the table correctly states "SQLite or PostgreSQL". Consider updating the diagram to reflect the flexible backend:

```diff
 ┌─────────────────┐     ┌─────────────────┐     ┌──────────────┐
-│   Sentry SDK    │────▶│  Rustrak Server │────▶│  PostgreSQL  │
+│   Sentry SDK    │────▶│  Rustrak Server │────▶│   Database   │
 │   (your app)    │     │  (Rust/Actix)   │     │              │
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 188 - 207, Update the ASCII architecture diagram under the "Architecture" section so the database node matches the table's description of supporting either SQLite or PostgreSQL; replace the single "PostgreSQL" label in the diagram with "SQLite / PostgreSQL" (or "SQLite or PostgreSQL") and ensure the surrounding box text aligns with the table row for the Database component to avoid the inconsistency.

apps/server/tests/integration/projects_api_test.rs (2)
1-4: Update outdated documentation comment.

The doc comment on line 3 states "real PostgreSQL database", but the test now uses the shared `TestDb`, which may be SQLite depending on the feature flag.

📝 Suggested fix

```diff
 //! Integration tests for the Projects API
 //!
-//! Tests the complete Projects CRUD API with a real PostgreSQL database.
+//! Tests the complete Projects CRUD API with a real database (PostgreSQL or SQLite).
 //! Uses session-based authentication via SessionMiddleware.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/tests/integration/projects_api_test.rs` around lines 1 - 4, Update the top-level doc comment that currently says "real PostgreSQL database" to accurately reflect the test backend: mention that the integration tests use the shared TestDb which may be SQLite or PostgreSQL depending on feature flags and still use session-based authentication via SessionMiddleware; locate the comment containing the phrase "real PostgreSQL database" (top of the file) and replace it with a neutral description referencing TestDb and feature-flag driven backend selection.
18-19: Hardcoded PostgreSQL URL in test config is misleading.

The `DatabaseConfig` contains a hardcoded PostgreSQL URL, but the actual database pool comes from `TestDb`, which could be SQLite. While this URL may not be used at runtime (the pool from `TestDb` is used directly), it's confusing.

Consider using a placeholder or conditionally setting it:

♻️ Suggested fix

```diff
 database: DatabaseConfig {
-    url: "postgres://test:test@localhost/test".to_string(),
+    url: "unused://placeholder".to_string(), // Pool provided by TestDb
     max_connections: 5,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/server/tests/integration/projects_api_test.rs` around lines 18 - 19, The hardcoded PostgreSQL URL in the DatabaseConfig initialization is misleading because the actual connection pool is supplied by TestDb; update the DatabaseConfig instantiation (the database: field in the test setup where DatabaseConfig is constructed) to use a clear placeholder or reflect the test DB (e.g., derive the URL from TestDb or set to an explicit placeholder like an empty string or "unused") so it no longer implies a live Postgres connection; change the value used there (instead of "postgres://test:test@localhost/test") to the placeholder or TestDb-provided URL to make the intent clear.

apps/docs/content/getting-started/installation.mdx (1)
151-159: Consider dropping the explicit SQLite `DATABASE_URL` from this quickstart.

You already document the same default earlier on the page. Leaving it out here would make the "run the `latest` image with SQLite defaults" path clearer and avoid implying the variable is mandatory.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/docs/content/getting-started/installation.mdx` around lines 151 - 159, Remove the explicit SQLite DATABASE_URL environment variable from the example docker run command to avoid implying it’s required; in the block that runs the rustrak image (the docker run snippet referencing rustrak, -p 8080:8080, -v rustrak_data:/data and -e SESSION_SECRET_KEY), delete the line containing -e DATABASE_URL="sqlite:///data/rustrak.db" so the example uses the documented SQLite defaults instead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/docs/content/configuration/database.mdx`:
- Around line 77-85: The backup snippet incorrectly copies only rustrak.db which
misses WAL changes; update the example to use SQLite's online backup for
consistent hot backups (e.g., run sqlite3 rustrak.db ".backup
/backups/rustrak-backup.db" inside the container) or explicitly show stopping
writes/containers before copying rustrak.db plus its -wal and -shm files;
replace the current docker run cp example with a command that invokes sqlite3
inside the alpine/container to run the .backup command (or add a clear note
requiring a quiescent DB and copying rustrak.db, rustrak.db-wal, and
rustrak.db-shm together).
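The `.backup` approach recommended above can be exercised locally with the `sqlite3` CLI. A minimal sketch, assuming `sqlite3` is installed and using throwaway paths under `/tmp` (the real docs would target the container's `/data/rustrak.db`):

```shell
# Create a throwaway database, then take a consistent online backup with
# SQLite's backup API instead of copying the file directly (a plain copy
# can miss pages still sitting in the -wal file).
sqlite3 /tmp/rustrak-demo.db "CREATE TABLE IF NOT EXISTS t(id INTEGER); INSERT INTO t VALUES (1);"
sqlite3 /tmp/rustrak-demo.db ".backup /tmp/rustrak-backup.db"

# The backup is a complete, standalone database file that can be queried.
sqlite3 /tmp/rustrak-backup.db "SELECT COUNT(*) FROM t;"
```

The `.backup` command is safe to run while the server holds the database open, which is what makes it preferable to `cp` for hot backups.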
In `@apps/docs/content/configuration/environment.mdx`:
- Around line 9-25: The docs present conflicting guidance for DATABASE_URL (it’s
described as "Optional for SQLite" but the SQLite section is titled as if
required); update the SQLite subsection headings (replace "Required" with
"Examples" or "Optional / example values") and add a short note showing the
omitted-variable fallback (e.g., defaulting to sqlite:///data/rustrak.db) so
readers know skipping DATABASE_URL is valid for SQLite; make the same
heading/title and fallback-note changes in the other similar block referenced
(the second SQLite block around the other example lines) and ensure the symbol
DATABASE_URL is used consistently in the explanatory text.
In `@apps/docs/content/configuration/production.mdx`:
- Around line 123-128: Replace the unsafe direct file copy line (the cp
/var/lib/rustrak/rustrak.db ... snippet) in
apps/docs/content/configuration/production.mdx with instructions to use SQLite's
.backup mechanism instead; update the example to show invoking sqlite3 on the
database file with the ".backup '<target-path>'" command (preserve the
datestamped filename pattern) and mention this method is safe while the server
is running to avoid corrupted snapshots when WAL or active writes are present.
In `@apps/docs/content/getting-started/overview.mdx`:
- Around line 25-34: The docs currently state `latest` is the SQLite image but
the later Docker snippet pairs abians7/rustrak-server:latest with a PostgreSQL
DATABASE_URL; update that Docker example to be consistent by either (A) changing
the image tag to abians7/rustrak-server:postgres (or vX.Y.Z-postgres) when
showing the PostgreSQL DATABASE_URL usage, or (B) converting the snippet to a
SQLite example by removing the DATABASE_URL and using the `latest`/vX.Y.Z image;
locate the mismatched example by searching for the Docker snippet that
references abians7/rustrak-server:latest and DATABASE_URL and make the
corresponding tag/config change.
In
`@apps/server/migrations/postgres/20260122000000_change_remote_addr_to_text.down.sql`:
- Line 2: The down migration blindly casts remote_addr to INET which will fail
if any rows contain non-INET strings; update the migration to guard and make the
rollback explicitly lossy by normalizing or nulling invalid values before or
during the ALTER: either run an UPDATE that sets remote_addr = NULL (or a
normalized value) for rows that don't parse as INET, or change the ALTER to use
a safe USING clause like CASE WHEN <inet-validate> THEN remote_addr::inet ELSE
NULL END and add a comment documenting the lossy behavior; refer to the ALTER
TABLE statement in the migration and the source of the loosened format in
apps/server/src/services/event.rs and apps/server/src/routes/ingest.rs to
determine the validation rule.
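A sketch of the guarded, explicitly lossy rollback the prompt describes. The `events` table name and the validation regexes are assumptions for illustration (PostgreSQL has no built-in "try cast" to `inet`, so a pattern check stands in for full validation):

```sql
-- Lossy rollback: rows whose remote_addr does not look like an IP address
-- become NULL instead of aborting the whole migration. The regexes are a
-- simplified check (IPv4, or IPv6, each with an optional prefix length),
-- not a complete INET validator.
ALTER TABLE events
    ALTER COLUMN remote_addr TYPE INET
    USING CASE
        WHEN remote_addr ~ '^([0-9]{1,3}\.){3}[0-9]{1,3}(/[0-9]{1,2})?$'
          OR remote_addr ~ '^[0-9A-Fa-f:]+(/[0-9]{1,3})?$'
        THEN remote_addr::inet
        ELSE NULL
    END;
```

The `USING` clause runs per row during the column rewrite, so invalid strings are nulled in the same statement rather than requiring a separate cleanup `UPDATE` first.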
In `@apps/server/src/db/mod.rs`:
- Around line 46-57: The SQLite pool creation uses config.max_connections which
breaks in-memory databases; update the Sqlite branch (the block building opts,
SqlitePoolOptions::new(), and connect_with) to detect in-memory URLs (check
config.url for substrings like ":memory:" or "mode=memory" / "file::memory:")
and, when detected, force the pool's max_connections and min_connections to 1
before calling connect_with(opts); keep other timeout/lifetime settings
unchanged so tests/local dev use a single shared connection for in-memory DBs.
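The in-memory detection described in that prompt reduces to a substring check on the URL before sizing the pool. A self-contained sketch (the helper names are illustrative, not the actual `db/mod.rs` API):

```rust
// Illustrative helper: decide whether a SQLite URL refers to an in-memory
// database. In-memory databases exist per-connection, so a pool with more
// than one connection would silently create several independent empty DBs.
fn is_in_memory_sqlite(url: &str) -> bool {
    url.contains(":memory:") || url.contains("mode=memory")
}

// Cap the pool at a single connection for in-memory URLs; otherwise honor
// the configured maximum.
fn effective_max_connections(url: &str, configured: u32) -> u32 {
    if is_in_memory_sqlite(url) { 1 } else { configured }
}

fn main() {
    assert!(is_in_memory_sqlite("sqlite::memory:"));
    assert!(is_in_memory_sqlite("sqlite://file::memory:?cache=shared"));
    assert!(!is_in_memory_sqlite("sqlite:///data/rustrak.db"));
    assert_eq!(effective_max_connections("sqlite::memory:", 5), 1);
    assert_eq!(effective_max_connections("sqlite:///data/rustrak.db", 5), 5);
    println!("ok");
}
```

In the real pool builder the result would feed `SqlitePoolOptions::max_connections` before `connect_with`, leaving timeout and lifetime settings unchanged.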
In `@apps/server/src/services/rate_limit.rs`:
- Around line 143-148: The codebase uses the SQLite RETURNING clause (e.g., the
UPDATE in rate_limit.rs that returns digested_event_count) which requires SQLite
>= 3.35.0; add a clear runtime check during DB pool initialization (where the
sqlx pool is created) that queries sqlite_version() and verifies it is >=
3.35.0, and if not, return a fatal error with a message telling operators to
upgrade SQLite or use the documented fallback, or alternatively document the
minimum version requirement in the README/deployment docs; implement this in the
DB init path so functions like the query in rate_limit.rs will fail fast with a
helpful error rather than obscure SQL failures.
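The version guard that prompt asks for is a parse-and-compare on the string returned by `SELECT sqlite_version()`. A sketch of the comparison logic (the function name is illustrative; the real check would run during pool initialization and return a fatal error on failure):

```rust
// Illustrative guard: RETURNING requires SQLite >= 3.35.0. The real check
// would query `SELECT sqlite_version()` once at pool init and fail fast
// with an upgrade message if this returns false.
fn version_at_least(version: &str, min: (u32, u32, u32)) -> bool {
    let mut parts = version.split('.').map(|p| p.parse::<u32>().unwrap_or(0));
    let v = (
        parts.next().unwrap_or(0),
        parts.next().unwrap_or(0),
        parts.next().unwrap_or(0),
    );
    // Tuple comparison is lexicographic: major, then minor, then patch.
    v >= min
}

fn main() {
    assert!(version_at_least("3.35.0", (3, 35, 0)));
    assert!(version_at_least("3.45.1", (3, 35, 0)));
    assert!(!version_at_least("3.34.1", (3, 35, 0)));
    println!("ok");
}
```

Failing fast here turns an obscure `RETURNING` syntax error deep inside `rate_limit.rs` into a clear startup message telling operators to upgrade SQLite.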
---
Outside diff comments:
In `@apps/server/src/services/alert.rs`:
- Around line 623-638: The UPDATE query setting next_retry binds a DateTime<Utc>
directly (bound via next_retry) which breaks SQLite; change the binding to use
.naive_utc() under the sqlite feature (or feature-gate the binding like the
cooldown_threshold pattern) so the value is converted to NaiveDateTime before
binding; update the call that binds next_retry in the same block that updates
alert_history (the .bind(next_retry) after .bind(result.http_status.map(|s| s as
i32))) to use next_retry.naive_utc() when compiling for sqlite (or surround
alternative bindings with #[cfg(feature = "sqlite")] / #[cfg(not(feature =
"sqlite"))] as done for cooldown_threshold) so SQLite receives a
NaiveDateTime-compatible value.
In `@apps/server/tests/integration/bootstrap_test.rs`:
- Around line 14-16: Tests in bootstrap_test.rs set and remove the
CREATE_SUPERUSER environment variable using env::set_var / env::remove_var which
is not thread-safe; annotate every test function that manipulates
CREATE_SUPERUSER with the serial_test attribute so they run sequentially (e.g.,
add #[serial_test::serial] directly above the existing #[tokio::test] on those
test functions that call env::set_var or env::remove_var), ensuring the
dev-dependency serial_test is used and no other changes to the test logic are
required.
---
Nitpick comments:
In `@apps/docs/content/getting-started/installation.mdx`:
- Around line 151-159: Remove the explicit SQLite DATABASE_URL environment
variable from the example docker run command to avoid implying it’s required; in
the block that runs the rustrak image (the docker run snippet referencing
rustrak, -p 8080:8080, -v rustrak_data:/data and -e SESSION_SECRET_KEY), delete
the line containing -e DATABASE_URL="sqlite:///data/rustrak.db" so the example
uses the documented SQLite defaults instead.
In `@apps/server/Dockerfile`:
- Around line 37-38: The comment about SQLite runtime deps should also document
the PostgreSQL variant's runtime library requirements: update the Dockerfile
comment near the FROM gcr.io/distroless/cc-debian12:nonroot line to state that
the distroless image includes the TLS libraries required by the sqlx postgres
variant when using the tls-rustls feature (or note if additional C libs are
required), and if any libs are missing confirm and either switch to a distroless
image that includes them or add a short note explaining which extra packages are
needed at runtime for Postgres connections.
In `@apps/server/tests/integration/ingest_test.rs`:
- Around line 19-26: The DatabaseConfig in the test uses a hardcoded Postgres
URL ("postgres://...") which is misleading for multi-backend tests; update the
test config where DatabaseConfig is constructed so config.database.url is
backend-agnostic — either add cfg-gated values for postgres vs sqlite (using
#[cfg(feature = "postgres")] and #[cfg(feature = "sqlite")] around the url
field) or replace the URL with a clear placeholder string (e.g.,
"unused://pool-injected") to indicate the URL is not used; change the
construction of DatabaseConfig in the test to use one of these approaches and
keep the rest of the test (and injected db.pool) unchanged.
In `@apps/server/tests/integration/projects_api_test.rs`:
- Around line 1-4: Update the top-level doc comment that currently says "real
PostgreSQL database" to accurately reflect the test backend: mention that the
integration tests use the shared TestDb which may be SQLite or PostgreSQL
depending on feature flags and still use session-based authentication via
SessionMiddleware; locate the comment containing the phrase "real PostgreSQL
database" (top of the file) and replace it with a neutral description
referencing TestDb and feature-flag driven backend selection.
- Around line 18-19: The hardcoded PostgreSQL URL in the DatabaseConfig
initialization is misleading because the actual connection pool is supplied by
TestDb; update the DatabaseConfig instantiation (the database: field in the test
setup where DatabaseConfig is constructed) to use a clear placeholder or reflect
the test DB (e.g., derive the URL from TestDb or set to an explicit placeholder
like an empty string or "unused") so it no longer implies a live Postgres
connection; change the value used there (instead of
"postgres://test:test@localhost/test") to the placeholder or TestDb-provided URL
to make the intent clear.
In `@README.md`:
- Around line 188-207: Update the ASCII architecture diagram under the
"Architecture" section so the database node matches the table's description of
supporting either SQLite or PostgreSQL; replace the single "PostgreSQL" label in
the diagram with "SQLite / PostgreSQL" (or "SQLite or PostgreSQL") and ensure
the surrounding box text aligns with the table row for the Database component to
avoid the inconsistency.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3637530a-68ad-4170-9891-524d877dbc4f
📒 Files selected for processing (33)
- .changeset/busy-files-throw.md
- .github/workflows/docker-publish.yml
- README.md
- apps/docs/content/configuration/_meta.js
- apps/docs/content/configuration/database.mdx
- apps/docs/content/configuration/environment.mdx
- apps/docs/content/configuration/production.mdx
- apps/docs/content/getting-started/installation.mdx
- apps/docs/content/getting-started/overview.mdx
- apps/server/Cargo.toml
- apps/server/Dockerfile
- apps/server/migrations/postgres/20260122000000_change_remote_addr_to_text.down.sql
- apps/server/migrations/postgres/20260122000000_change_remote_addr_to_text.up.sql
- apps/server/migrations/sqlite/20260119000000_initial_schema.up.sql
- apps/server/src/config.rs
- apps/server/src/db/mod.rs
- apps/server/src/services/alert.rs
- apps/server/src/services/rate_limit.rs
- apps/server/tests/common/db.rs
- apps/server/tests/e2e/sentry_sdk_test.rs
- apps/server/tests/e2e_tests.rs
- apps/server/tests/integration/alerts_api_test.rs
- apps/server/tests/integration/auth_test.rs
- apps/server/tests/integration/bootstrap_test.rs
- apps/server/tests/integration/concurrency_test.rs
- apps/server/tests/integration/digest_test.rs
- apps/server/tests/integration/events_api_test.rs
- apps/server/tests/integration/health_test.rs
- apps/server/tests/integration/ingest_test.rs
- apps/server/tests/integration/issues_api_test.rs
- apps/server/tests/integration/projects_api_test.rs
- apps/server/tests/integration/rate_limit_test.rs
- apps/server/tests/integration/tokens_api_test.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/server/migrations/sqlite/20260119000000_initial_schema.up.sql
- fix(docs): use :postgres tag in overview server-only example (was :latest) - fix(db): cap SQLite in-memory pool to 1 connection to prevent isolated DBs - fix(migration): guard remote_addr rollback cast against non-INET data - fix(docs): replace unsafe cp backup with sqlite3 .backup API
see #38
still todo:
Summary by CodeRabbit
New Features
Refactor
Docs
Chores