Add Storage Growth Monitor script for Ninja RMM (Ticket 1123004)#41

Open
AlrightLad wants to merge 8 commits into DTC-Inc:main from AlrightLad:patch-5
Conversation


@AlrightLad AlrightLad commented Feb 13, 2026

Implements the Storage Growth Tracking solution per TDD v2.5 (Final).

This script runs daily via Ninja RMM on physical Hyper-V hosts and VMs, collecting drive metrics, calculating growth trends over a 60-day rolling window using linear regression, and reporting results to 16 Ninja custom fields. Critical trends trigger Windows Event Log entries (IDs 5001/5002) which Ninja monitors to auto-create Halo PSA tickets.

Key capabilities:

  • Drive discovery via Win32_LogicalDisk with Win32_Volume filtering
  • OS drive auto-detection via Win32_OperatingSystem.SystemDrive
  • Exclusion of system partitions (Recovery, EFI, System Reserved, etc.)
  • OLS linear regression for GB/day growth rate and days-until-full
  • Status classification: Critical / Attention / Growing / Stable / Declining
  • Data drive priority ranking with top 3 reported to Ninja
  • Fire-once alerting with automatic reset when drives recover
  • JSON persistence with backup, corruption recovery, and 65-day pruning
  • Test mode auto-detection for local development without Ninja context
  • Verbose diagnostic mode (-Verbose) with regression detail output
  • Local log file with 90-day rotation
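The regression bullet above can be illustrated with a short sketch. The script itself is PowerShell; this Python snippet (with illustrative names, not the script's own) shows the OLS math behind the GB/day rate and the days-until-full projection:

```python
# Sketch of the OLS growth math described above (illustrative names only;
# the actual implementation is a PowerShell script).
def ols_slope(points):
    """points: list of (day_offset, used_gb) pairs. Returns slope in GB/day."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx
    if denom == 0:
        return 0.0
    return (n * sxy - sx * sy) / denom

def days_until_full(points, total_gb):
    """Project days until the drive fills, or None if not growing."""
    slope = ols_slope(points)
    if slope <= 0:
        return None  # stable or declining -- no projection
    last_used = points[-1][1]
    return (total_gb - last_used) / slope

# Example: steady 2 GB/day growth on a 500 GB drive
history = [(0, 100.0), (1, 102.0), (2, 104.0), (3, 106.0)]
projection = days_until_full(history, 500.0)  # (500 - 106) / 2 = 197 days
```

In the script itself the regression additionally reports an R² confidence value and is gated on a minimum number of data points, and a non-positive slope maps to the Stable/Declining statuses instead of a projection.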

Summary by CodeRabbit

  • New Features

    • Complete Storage Growth Monitor: per-drive time-series, regression-based growth rates, days-until-full, status classification, server severity, inline SVG charts and HTML reports, Ninja RMM field updates, and test mode.
  • Reliability

    • Local persistence with migration from legacy history, retention/pruning heuristics, offline/stale-drive cleanup, and resilient module/load/install behavior.
  • Observability & Alerts

    • Console summaries, structured logging, event-log "fire-once" alerts, and report generation for visibility.

@AlrightLad AlrightLad requested a review from Gumbees as a code owner February 13, 2026 04:05

coderabbitai bot commented Feb 13, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a new PowerShell Storage Growth Monitor script that uses a local SQLite database to persist per-drive time-series metrics, performs regression-based growth analysis, writes fire-once Windows Event Log alerts, generates SVG/HTML reports, and updates Ninja RMM custom fields; includes discovery, migration, pruning, and a test-mode workflow.

Changes

All changes are in one new file, rmm-ninja/ServerGrowthTracking/Storage-Growth-Monitor.ps1:

| Cohort | Summary |
| --- | --- |
| New Monitor Script | Introduces an end-to-end storage growth monitor implemented in PowerShell: SQLite-backed schema (device, drive, metric, alert_state), module loading (PSSQLite) with installer fallback, device/drive discovery and filtering, metric ingestion, upserted drive records, offline/stale removal, legacy JSON import, retention/pruning, and logging. |
| Analysis & Reporting | Adds per-drive history analysis (linear regression: slope/intercept/R²), growth-rate and days-to-full estimates, status classification (including Limited mode), server-wide severity aggregation, SVG chart generation, and HTML report assembly. |
| Alerting & Integrations | Implements fire-once Event Log alerts (Event IDs 5001/5002) with event source init/fallback, Ninja RMM custom-field updates (with test-mode/no-Ninja support), and persistence of alert state. |
| Persistence & Utilities | SQLite schema management, metric insert/upsert helpers, history queries/pruning, log pruning, robust error handling to continue on partial failures, and a Main orchestration function tying discovery, analysis, alerting, reporting, and persistence. |

Sequence Diagram

```mermaid
sequenceDiagram
    actor Operator
    participant Script as Storage<br/>Growth Monitor
    participant CIM as CIM/WMI
    participant SQLite as SQLite<br/>DB (local)
    participant EventLog as Windows<br/>Event Log
    participant Ninja as Ninja RMM<br/>API
    participant Console as Console Output

    Operator->>Script: Run script (or scheduled)
    Script->>Script: Ensure shared storage path & load modules
    Script->>SQLite: Open/initialize schema (device,drive,metric,alert_state)
    Script->>CIM: Query drives & metrics
    CIM-->>Script: Drive list & current metrics
    Script->>SQLite: Upsert device/drive records
    Script->>SQLite: Insert metric datapoints
    Script->>SQLite: Prune old metrics / remove stale drives
    Script->>Script: Compute regression, growth, days-to-full, statuses
    alt New critical detected
        Script->>EventLog: Write fire-once critical alert
        EventLog-->>Script: Logged
    end
    alt Ninja available
        Script->>Ninja: Update custom field(s) with HTML/SVG report
        Ninja-->>Script: Ack
    else Test mode
        Script->>Console: Simulate Ninja update
    end
    Script->>Console: Print summary
    Script->>SQLite: Commit & close
```

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐇 I hop through bytes and dig the past,
I store each nibble in SQLite fast,
I draw small charts and watch slopes grow,
I sound an alert when thresholds show,
I tidy logs and whisper to Ninja so.

🚥 Pre-merge checks | ✅ 3

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title Check | ✅ Passed | The title clearly and specifically describes the main change: adding a Storage Growth Monitor script for Ninja RMM integration, with a reference to the ticket. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@rmm-ninja/ServerGrowthTracking`:
- Around line 434-449: The pruning loop uses [DateTime]::Parse on user-persisted
$entry.timestamp which can throw and abort Update-History; change the logic in
the $driveHistory foreach (the block that builds $prunedHistory and assigns
$History.drives[$letter].history) to defensively parse timestamps using a safe
parse (e.g., [DateTime]::TryParse or a try/catch around [DateTime]::Parse), skip
entries with unparseable timestamps, and call Write-VerboseLog (or a warning
log) when skipping a malformed $entry.timestamp so the loop continues rather
than throwing; apply the same defensive approach for other similar occurrences
around Update-History where $entry.timestamp is parsed.
- Around line 220-237: Load-History currently renames the corrupted primary file
and returns New-EmptyHistory without attempting recovery from the backup created
by Save-History; modify Load-History to, upon failing to parse the primary after
maxRetries, attempt to read/convert the backup at $Script:BACKUP_FILE (or call a
dedicated JSON→hashtable helper used by Load-History) before giving up, logging
whether the backup was used or also corrupt; ensure you avoid infinite recursion
by using a boolean parameter like $tryBackup or by calling the conversion helper
directly, and if the backup is valid, restore/move it into place (or return its
parsed history) otherwise proceed with the existing corrupted-file rename and
return New-EmptyHistory.
🧹 Nitpick comments (6)
rmm-ninja/ServerGrowthTracking (6)

172-224: Retrying on JSON parse errors introduces unnecessary 10-second delay.

The retry loop (3 attempts × 5 s delay) is useful for transient file-lock contention, but if the failure is a parse error (corrupted content), the file won't self-heal between retries. Consider distinguishing I/O errors from parse/validation errors so that only transient failures trigger retries.

♻️ Sketch — separate I/O read from JSON validation
```diff
     for ($attempt = 1; $attempt -le $maxRetries; $attempt++) {
         try {
             $content = Get-Content -Path $Script:HISTORY_FILE -Raw -Encoding UTF8 -ErrorAction Stop
+        }
+        catch {
+            # I/O error — retry may help
+            if ($attempt -lt $maxRetries) {
+                Write-Log "WARNING: Failed to read history file (attempt $attempt/$maxRetries), retrying in ${retryDelay}s..."
+                Start-Sleep -Seconds $retryDelay
+                continue
+            }
+            throw  # re-throw to outer handler
+        }
+
+        try {
             $data = $content | ConvertFrom-Json -ErrorAction Stop
 
             # Validate JSON structure
             if (-not $data.version -or -not $data.drives) {
                 throw "Invalid JSON structure - missing version or drives"
             }
             ...
             return $history
         }
         catch {
-            if ($attempt -lt $maxRetries) {
-                Write-Log "WARNING: Failed to load history (attempt $attempt/$maxRetries), retrying in ${retryDelay}s..."
-                Start-Sleep -Seconds $retryDelay
-            }
-            else {
+            # Parse/validation error — retrying won't help
                 Write-Log "WARNING: History file corrupted. Renaming and starting fresh."
                 ...
                 return New-EmptyHistory
-            }
         }
     }
```

622-637: Redundant GBPerMonth assignment on line 627.

$result.GBPerMonth is already assigned at line 615 with the same $monthlyGrowth.ToString("F3") value. The reassignment on line 627 inside the Declining branch is a no-op.

♻️ Remove the redundant line
```diff
         } else {
             $result.DaysUntilFull = "Declining"
-            $result.GBPerMonth = $monthlyGrowth.ToString("F3")
         }
```

985-988: $exitCode is assigned but never modified — always exits 0.

$exitCode is set to 0 at line 986 and used at line 1211, but nothing in between ever changes it. The early failure paths use exit 1 (line 1014) and exit 0 (line 1038) directly. Either remove the variable and use exit 0 directly, or wire it up to reflect failures (e.g., Ninja field update failures).

Also applies to: 1211-1211


240-271: Consider atomic write to prevent partial-write corruption.

Save-History backs up the old file (line 246), then directly writes to the primary file (line 257). If the process is interrupted mid-write (e.g., power loss, forced termination), the primary file is left in a corrupt state. A safer pattern is to write to a temporary file first, then rename it over the primary — on NTFS, Move-Item -Force is atomic.

♻️ Atomic write pattern
```diff
     try {
-        $History | ConvertTo-Json -Depth 10 | Set-Content -Path $Script:HISTORY_FILE -Encoding UTF8 -ErrorAction Stop
+        $tempFile = $Script:HISTORY_FILE + ".tmp"
+        $History | ConvertTo-Json -Depth 10 | Set-Content -Path $tempFile -Encoding UTF8 -ErrorAction Stop
+        Move-Item -Path $tempFile -Destination $Script:HISTORY_FILE -Force -ErrorAction Stop
```

1049-1064: Step numbering in Main comments is out of order — step 4 appears after step 7.

The section comments jump from ── 3. to ── 5-6. to ── 7. then back to ── 4.. These reference TDD section numbers, not execution order, but it makes the code flow harder to follow. Consider renumbering to match the actual execution sequence.


808-870: Ninja field names are string literals repeated across Update-NinjaFields and Write-Summary — fragile if renamed.

The 16 Ninja custom field names (e.g., "Data Drive $slotNum Status") are constructed inline. A minor typo in any one call would silently write to the wrong field. For maintainability, consider defining the field name templates as constants alongside the other $Script: constants.

Added constants for Ninja RMM fields and improved timestamp parsing for better error handling. Updated functions to utilize new constants and enhanced backup recovery logic.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@rmm-ninja/ServerGrowthTracking`:
- Around line 514-518: The code assumes each $prunedHistory[].timestamp is at
least 10 chars and calls .Substring(0, 10), which can throw
ArgumentOutOfRangeException for shorter strings; update the logic that computes
$oldest and $newest (used in the Write-VerboseLog call) to safely take at most
the available length or to format the parsed DateTime instead: e.g., check the
length and use the substring only if >=10, otherwise use the entire string (or
prefer to parse the timestamp into a DateTime and call ToString('yyyy-MM-dd') on
it); update references to $prunedHistory, the .timestamp property, and the
Write-VerboseLog message accordingly.
🧹 Nitpick comments (1)
rmm-ninja/ServerGrowthTracking (1)

240-243: Exception silently discarded — consider returning or logging the error reason.

When Import-HistoryFromFile fails, the catch block returns $null with no diagnostic output. The caller (Load-History) only knows that it failed, not why. During troubleshooting, the specific error (malformed JSON, encoding issue, type conversion) would be valuable.

♻️ Suggested: propagate the error message via Write-VerboseLog
```diff
     catch {
+        Write-VerboseLog "Import-HistoryFromFile failed for '$FilePath': $_"
         return $null
     }
```

Enhanced error handling for history file loading and parsing, including retry logic for I/O errors and improved logging messages.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@rmm-ninja/ServerGrowthTracking`:
- Around line 1138-1143: The catch block handling failures from
Get-FilteredDrives currently logs the error, preserves history via Save-History,
and calls Save-LogFile but then exits with code 0; change the final exit code to
a non-zero value (e.g., exit 2) so monitoring systems can detect the failure.
Locate the catch that calls Write-Log "ERROR: Drive enumeration failed: $_",
Save-History -History $history, and Save-LogFile, and replace the trailing exit
0 with exit 2 (or another appropriate non-zero code) while keeping the existing
logging and save calls intact.
- Around line 261-280: Load-History currently reads the history file with retry
logic but then calls Import-HistoryFromFile which re-opens the file, so the
retry protection and the initial read are wasted; refactor the parsing into a
new function (e.g., ConvertFrom-HistoryJson or add an overload
Import-HistoryFromFile -Content) that accepts the raw file string and performs
the JSON/parse logic, then in Load-History pass the already-read $content to
that parser instead of calling the file-reading version; keep or add an
Import-HistoryFromFile(FilePath) wrapper that reads the file and delegates to
the new parser so existing callers continue to work.
🧹 Nitpick comments (2)
rmm-ninja/ServerGrowthTracking (2)

324-329: Atomic write pattern is sound but consider handling partial temp-file writes.

The temp-file + move pattern is good. One edge case: if Set-Content on line 328 succeeds but Move-Item on line 329 fails, the temp file lingers and the original file is intact (since it wasn't touched). The cleanup in the catch block handles this. However, the primary file's content is now stale (the backup was already overwritten on line 315 with the previous primary). This means both the primary and backup now hold the old-old content rather than the latest.

This is low-risk since the Move-Item is a same-directory rename on NTFS, but worth noting.


115-155: Log file management reads and rewrites the entire log on every run.

Save-LogFile reads all existing lines, prunes, appends, and rewrites the whole file. Over 90 days of daily runs this file will grow but should remain manageable. If the log ever grows large (e.g., verbose mode enabled permanently), this could become slow. An append-only strategy with a separate periodic pruning pass would scale better, but this is fine for the expected workload.


**Address all CodeRabbit review feedback for Storage Growth Monitor**

This change resolves all actionable issues and nitpicks raised across three rounds of CodeRabbit review on the Storage Growth Monitor script (Ticket 1123004).

**Robustness improvements:**

- **Safe timestamp parsing:** Introduced `ConvertTo-SafeDateTime` helper using `TryParse` instead of `[DateTime]::Parse()` throughout the script, preventing script termination on malformed persisted timestamps. Applied to history pruning, offline drive removal, and linear regression calculations.
- **Substring length guard:** Added length check before `.Substring(0, 10)` on timestamps in the history pruning log output, preventing `ArgumentOutOfRangeException` on unexpectedly short values.
- **Backup recovery:** `Load-History` now checks for a backup file and attempts restoration before falling back to a fresh history when the primary file is corrupted.

**Architecture improvements:**

- **Eliminated redundant I/O in Load-History:** Extracted `ConvertFrom-HistoryJson` to accept raw JSON content. `Load-History` now reads the file with retry protection for transient I/O errors, then passes the already-read content directly to parsing (no retry for parse errors, which are not transient). Previously, the retry loop read `$content` but `Import-HistoryFromFile` independently re-opened the file, discarding the retried read entirely.
- **Import-HistoryFromFile** remains as a thin read-and-parse wrapper used by backup recovery, delegating parsing to `ConvertFrom-HistoryJson`.
- **Ninja RMM field constants:** Defined all 16 field names as `$Script:FIELD_*` constants instead of inline string literals, reducing fragility and making field name changes single-point edits.

**Reliability improvements:**

- **Atomic write fallback:** `Save-History` uses temp file + `Move-Item` for atomic writes. If `Move-Item` fails after the temp file is successfully written, it now falls back to a direct `Set-Content` write so the new data point isn't lost, then cleans up the temp file.
- **Log file performance:** `Save-LogFile` switched from read-prune-rewrite on every run to append-only (`Add-Content`). The expensive prune cycle only triggers when the log file exceeds 256KB, which is appropriate given the script runs daily and generates ~20-50 lines per execution.
- **Diagnostic logging in Import-HistoryFromFile:** The catch block now logs parse failure details via `Write-VerboseLog` instead of silently returning `$null`.

**Correctness fix:**

- **Drive enumeration exit code:** Changed from `exit 0` to `exit 2` when `Get-FilteredDrives` throws an exception, so Ninja RMM can detect non-transient failures instead of treating them as successful runs.

@AlrightLad (Author)

@coderabbitai review


coderabbitai bot commented Feb 13, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Robustness improvements from code review
All 14 CodeRabbit review findings (3 rounds) have been addressed:

- Safe timestamp parsing — ConvertTo-SafeDateTime helper using TryParse replaces all [DateTime]::Parse() calls, preventing script termination on malformed persisted timestamps.
- Substring length guard — Length check before .Substring(0, 10) on timestamps prevents ArgumentOutOfRangeException on short values.
- Backup recovery — Load-History checks for a backup file and attempts restoration before falling back to a fresh history when the primary file is corrupted.
- Eliminated redundant I/O — Extracted ConvertFrom-HistoryJson to accept raw JSON content. Load-History reads the file with retry protection for transient I/O errors, then passes content directly to parsing (no retry for parse errors, which are not transient).
- Ninja RMM field constants — All 16 field names defined as $Script:FIELD_* constants instead of inline string literals.
- Atomic write with fallback — Save-History uses temp file + Move-Item. If the rename fails, it falls back to a direct Set-Content write so the data point isn't lost.
- Log file performance — Save-LogFile uses append-only (Add-Content). The expensive prune cycle only triggers when the log exceeds 256 KB.
- Diagnostic logging — Import-HistoryFromFile catch block logs parse failure details via Write-VerboseLog instead of silently returning $null.
- Correct exit code — Drive enumeration failure exits with code 2 instead of 0, so Ninja RMM can detect non-transient failures.
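The fail-soft parse pattern behind the first item can be sketched like this. The real helper is PowerShell's ConvertTo-SafeDateTime built on TryParse; the Python names below are purely illustrative:

```python
# Fail-soft timestamp parsing: return None on malformed input instead of
# throwing, so a pruning loop can skip bad entries and continue.
from datetime import datetime

def parse_timestamp(value, fmt="%Y-%m-%dT%H:%M:%S"):
    """Parse a persisted timestamp; None signals 'skip this entry'."""
    try:
        return datetime.strptime(value, fmt)
    except (TypeError, ValueError):
        return None

entries = ["2026-02-13T04:05:00", "not-a-date", None]
kept = [p for p in (parse_timestamp(e) for e in entries) if p is not None]
# Only the well-formed entry survives; the loop never aborts.
```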
…shboard

Summary
Introduces a PowerShell script that runs as a daily Ninja RMM scheduled task on servers (physical Hyper-V hosts and VMs). It collects disk usage metrics, persists them to a shared SQLite database at %PROGRAMDATA%\dtc-inc\rmm\dtc-rmm.db, calculates storage growth trends via OLS linear regression, and writes a single HTML dashboard — complete with an SVG line chart — to one Ninja RMM WYSIWYG custom field (storageReport).

The SQLite database is designed as a shared data store that other scripts and tools can query directly. The schema uses PostgreSQL-compatible conventions (snake_case columns, standard SQL types, parameterized queries) to enable a future migration to a centralized API service backed by PostgreSQL.

What the script does
1. Drive Discovery
Enumerates fixed drives via Win32_LogicalDisk and Win32_Volume, auto-detects the OS drive, and filters out recovery/EFI/system partitions, excluded filesystems (FAT/RAW), and drives under 1 GB.

2. SQLite Persistence
Stores all data in %PROGRAMDATA%\dtc-inc\rmm\dtc-rmm.db using the PSSQLite module (auto-installed from PSGallery if not present). WAL journal mode is enabled for safe concurrent reads by other tools.

Schema (4 tables):

| Table | Purpose |
| --- | --- |
| device | One row per monitored machine (hostname, OS version, timestamps) |
| drive | One row per drive per device (letter, label, size, type OS/Data, online/offline status, alert state) |
| metric | Time-series data points — used GB, free GB, usage % — the core trend data |
| alert_state | Fire-once alert tracking per device (scaffolding for future centralized use) |
Indexes on (drive_id, timestamp) and (device_id) for efficient range queries. All SQL is parameterized — zero string concatenation.
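As a rough sketch of that shape (column lists abbreviated and the exact DDL an assumption; the script itself uses the PSSQLite module, not Python's sqlite3):

```python
# Abbreviated sketch of the 4-table schema and parameterized access described
# above. Exact column lists are assumptions; the real script uses PSSQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device (device_id TEXT PRIMARY KEY, hostname TEXT, os_version TEXT);
CREATE TABLE drive  (drive_id INTEGER PRIMARY KEY AUTOINCREMENT,
                     device_id TEXT, letter TEXT, total_gb REAL);
CREATE TABLE metric (metric_id INTEGER PRIMARY KEY AUTOINCREMENT,
                     drive_id INTEGER, timestamp TEXT,
                     used_gb REAL, free_gb REAL, usage_pct REAL);
CREATE TABLE alert_state (device_id TEXT PRIMARY KEY, alert_sent INTEGER);
CREATE INDEX idx_metric_drive_ts ON metric (drive_id, timestamp);
""")
# Every statement is parameterized -- no string concatenation.
conn.execute("INSERT INTO device VALUES (?, ?, ?)", ("HV01", "HV01", "2022"))
conn.execute(
    "INSERT INTO metric (drive_id, timestamp, used_gb, free_gb, usage_pct) "
    "VALUES (?, ?, ?, ?, ?)",
    (1, "2026-02-13T04:05:00", 106.0, 394.0, 21.2),
)
rows = conn.execute("SELECT used_gb FROM metric WHERE drive_id = ?", (1,)).fetchall()
```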

3. Data Lifecycle
- 65-day rolling retention window with automatic metric pruning
- Drives not visible for 30+ days are marked Offline, then removed with their metrics
- Disk resize detection with logging
4. Trend Analysis
Runs ordinary least squares (OLS) linear regression on each drive's metric history to calculate:

- GB/day growth rate and GB/month for display
- Days-until-full projections (capped at 5 years)
- R² confidence values
- Minimum 7 data points required; full confidence at 30+
5. Status Classification
Classifies each drive using a priority-based system (first match wins):

| Priority | Status | Condition |
| --- | --- | --- |
| 1 | Critical | Usage > 95% OR < 30 days until full |
| 2 | Attention | 30–90 days until full |
| 3 | Growing | > 0.1 GB/day growth, > 90 days remaining |
| 4 | Stable | < 0.1 GB/day growth |
| 5 | Declining | Negative growth rate |
| 6 | Insufficient Data | < 7 data points |
| 7 | Offline | Drive not currently visible |
Appends (Limited) when fewer than 30 data points are available.
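A first-match-wins classifier following that table might look like the sketch below. Thresholds come from the table; the Offline and Insufficient Data guards necessarily run first even though they carry the lowest priority numbers. Names are illustrative, and the real implementation is PowerShell:

```python
# Priority-based classification sketch matching the table above
# (illustrative names; the real implementation is PowerShell).
def classify(usage_pct, days_to_full, gb_per_day, points, online=True):
    if not online:
        return "Offline"
    if points < 7:
        return "Insufficient Data"
    if usage_pct > 95 or (days_to_full is not None and days_to_full < 30):
        status = "Critical"
    elif days_to_full is not None and days_to_full <= 90:
        status = "Attention"
    elif gb_per_day > 0.1:
        status = "Growing"
    elif gb_per_day >= 0:
        status = "Stable"
    else:
        status = "Declining"
    # Fewer than 30 data points -> reduced-confidence suffix.
    return status + " (Limited)" if points < 30 else status
```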

6. Visual HTML Dashboard (single Ninja field)
Instead of 16 separate text custom fields, the script writes one WYSIWYG field (storageReport) containing:

- Status header — Color-coded gradient banner (red/orange/amber/green/blue/gray) showing hostname, overall status, and timestamp
- SVG line chart — Usage % trends for all drives over the retention window. Color-coded lines from a colorblind-friendly palette, Y-axis gridlines at 0/25/50/75/100%, up to 7 X-axis date labels, and an inline legend
- Summary table — Per-drive row with color swatch matching the chart, drive letter + type tag + volume label, total/used/free sizes, GB/month growth, human-friendly time-to-full (e.g. 45d, 2.1mo, 5yr+), and a pill-style status badge
- Footer — Script version, drive count, data point count, engine info
No more MAX_DATA_DRIVES = 3 limitation — all drives are shown.

7. Alerting
Writes Windows Event Log entry 5001 (Warning) when a drive transitions to Critical status. Uses fire-once logic so alerts only fire on state transitions, not every run. Resets when the drive exits Critical.
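The fire-once transition logic can be sketched as a tiny state machine mirroring the per-drive alert flag described above (illustrative names; the real implementation is PowerShell):

```python
# Fire-once alerting: an alert fires only when a drive *enters* Critical,
# and the flag resets when it recovers so a later relapse can fire again.
def update_alert(prev_sent, status):
    """Return (should_fire, new_sent_flag) for one run."""
    if status == "Critical" and not prev_sent:
        return True, True      # transition into Critical -> fire once
    if status != "Critical":
        return False, False    # recovered -> reset the flag
    return False, True         # still Critical -> stay silent

sent = False
fires = 0
for status in ["Stable", "Critical", "Critical", "Stable", "Critical"]:
    fire, sent = update_alert(sent, status)
    fires += fire
# Two transitions into Critical -> exactly two alerts across five runs.
```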

8. Legacy JSON Migration
On first run, auto-detects existing JSON history at C:\ProgramData\NinjaRMM\StorageMetrics\storage_history.json. If the SQLite database is empty, migrates all drives and metric data points, preserving alert-sent flags. Renames the JSON files to .migrated afterward. Zero data loss.

Robustness
All 14 CodeRabbit findings (3 review rounds) plus additional self-review findings have been addressed:

- Safe timestamp parsing — ConvertTo-SafeDateTime using TryParse throughout; null guard in Remove-StaleDrives prevents crash on malformed last_seen
- Null-safe query wrappers — All Invoke-SqliteQuery result sets filtered through Where-Object { $null -ne $_ } to prevent @($null) single-element arrays reaching downstream functions
- Fixed continue in expression — Legacy migration guard clause rewritten from unreliable if/else expression assignment to standard pattern
- Schema consistency — All timestamp columns including last_seen have DEFAULT values
- Robust final stats — SELECT COUNT(*) wrapped in try/catch so a locked database doesn't crash script exit
- Correct exit codes — Drive enumeration failure exits with code 2; PSSQLite/database failures exit with code 1
- Log file performance — Append-only with prune cycle only when log exceeds 256 KB
Exit codes
| Code | Meaning |
| --- | --- |
| 0 | Success |
| 1 | Cannot create storage folder, PSSQLite unavailable, or database init failure |
| 2 | Drive enumeration failure |
Ninja RMM Setup
Create one custom field:

- Name: storageReport
- Type: WYSIWYG
- Scope: Device (Role: Server)
Database path
%PROGRAMDATA%\dtc-inc\rmm\dtc-rmm.db

Other scripts can query this directly. Schema is documented in the Initialize-Database function.

Future: Centralized API + PostgreSQL
The schema is designed for a clean migration:

- INTEGER PRIMARY KEY AUTOINCREMENT → SERIAL PRIMARY KEY
- strftime(...) defaults → NOW()
- TEXT timestamps → TIMESTAMPTZ
- All queries are standard SQL with named parameters — swap the connection layer and adjust date functions

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 6

🧹 Nitpick comments (3)
rmm-ninja/ServerGrowthTracking (3)

275-284: alert_state table is created but never used — dead schema.

All fire-once alert tracking is handled via the alert_sent column on the drive table (lines 1497–1508). The alert_state table is never queried, inserted into, or updated anywhere in the script. Either remove it or document it as a planned future extension.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rmm-ninja/ServerGrowthTracking` around lines 275 - 284, The new alert_state
table is unused dead schema; remove the CREATE TABLE block for alert_state (or
if intended as future work, add a clear TODO comment) and consolidate fire-once
alert logic to the existing alert_sent column on the drive table; locate the
CREATE TABLE for alert_state and either delete it or replace it with a
documented comment referencing drive.alert_sent and the existing alert flow so
there is no unused schema left in the script.

1124-1124: Redundant conditional — both branches produce the same value.

if ($a.GBPerMonth -match '^\-?\d') { "$($a.GBPerMonth)" } else { $a.GBPerMonth } returns $a.GBPerMonth in both branches. The wrapping $() in the true-branch is a no-op.

♻️ Proposed simplification
```diff
-        $growthStr = if ($a.GBPerMonth -match '^\-?\d') { "$($a.GBPerMonth)" } else { $a.GBPerMonth }
+        $growthStr = $a.GBPerMonth
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rmm-ninja/ServerGrowthTracking` at line 1124, The conditional that sets
$growthStr is redundant because both branches yield $a.GBPerMonth; replace the
whole if-expression with a direct assignment to $growthStr using $a.GBPerMonth
(remove the if/else and the unnecessary "$($a.GBPerMonth)" wrapping) to simplify
the code and preserve behavior for the $growthStr variable.

1350-1351: $env:COMPUTERNAME as device_id breaks history continuity on hostname change.

A hostname rename creates a new device row and orphans all historical drive/metric rows under the old ID, silently resetting 65 days of trend data. A hardware-stable identifier (e.g., (Get-CimInstance Win32_BIOS).SerialNumber or Win32_ComputerSystemProduct.UUID) would survive renames. At minimum, consider appending a note in the header comment documenting this limitation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rmm-ninja/ServerGrowthTracking` around lines 1350 - 1351, The script
currently sets $hostname and $deviceId from $env:COMPUTERNAME which causes
device_id churn when hostnames change; replace $deviceId assignment to use a
stable hardware identifier (for example retrieve BIOS serial via
(Get-CimInstance Win32_BIOS).SerialNumber or UUID via (Get-CimInstance
Win32_ComputerSystemProduct).UUID) so device rows persist across renames, and
update the header comment near the $hostname/$deviceId usage to document the
previous behavior and the change (or, if you choose not to change code, document
the limitation clearly). Ensure you update references that write/read device IDs
to use the new $deviceId symbol consistently (e.g., where device rows and
drive/metric rows are recorded).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@rmm-ninja/ServerGrowthTracking`:
- Line 201: The current line overwrites the entire SecurityProtocol mask and can
disable TLS 1.3; change the assignment so you add TLS 1.2 to the existing mask
using a bitwise OR rather than replacing it. Locate the
[Net.ServicePointManager]::SecurityProtocol assignment and update it to combine
the current SecurityProtocol value with the Tls12 flag (use the PowerShell
bitwise OR operation) so existing protocols (e.g., TLS 1.3 on .NET 5+/PowerShell
7+) remain enabled.
- Around line 1127-1133: Replace the hardcoded sentinel "1825.00" with a value
derived from the DAYS_CAP constant so the branch stays valid if the cap changes:
compute a formatted sentinel string from $Script:DAYS_CAP (e.g. $capStr =
"{0:N2}" -f $Script:DAYS_CAP) and use that in the if check instead of the
literal; keep the rest of the logic using $daysStr/$daysVal unchanged so capped
drives still map to the "Xyr+" label when matched.
- Around line 582-589: The loop over $driveData.history is casting legacy fields
directly which converts $null to 0.0; update the foreach block that reads $entry
(and the call to Add-MetricRecord) to first verify that $entry.usedGB,
$entry.freeGB and $entry.usagePercent are present and non-null (e.g., test with
-ne $null or -and conditions) and skip/continue any $entry missing any of those
fields so you do not cast nulls into zeros before calling Add-MetricRecord (keep
using $driveId and increment $migratedPoints only for successfully accepted
entries).
- Around line 1429-1435: The loop inserts metrics every run causing multiple
rows per drive per day; before calling Add-MetricRecord for each $drive (in the
foreach over $currentDrives using $deviceId and $now) check whether a metric
already exists for that drive for the same calendar day and skip or update
instead of inserting; locate the Add-MetricRecord usage and add a guard that
queries existing metrics (e.g., via a Get-MetricRecordByDriveAndDate or a DB
lookup using driveId and date-truncated $now) and only call Add-MetricRecord
when no same-day record exists (or call an Update-MetricRecord when you want to
replace the day's values).
- Around line 1271-1272: The Write-Log calls using expressions like
([char]0x2550 * 63) are doing numeric multiplication instead of producing
repeated characters; replace each occurrence (the Write-Log lines that use
[char]0x2550 and [char]0x2500) with a string-repeat expression such as: -join
(1..63 | ForEach-Object {[char]0x2550}) (and similarly use [char]0x2500 for the
other lines) so Write-Log receives a string of 63 repeated decorative characters
instead of an integer.

---

Nitpick comments:
In `@rmm-ninja/ServerGrowthTracking`:
- Around line 275-284: The new alert_state table is unused dead schema; remove
the CREATE TABLE block for alert_state (or if intended as future work, add a
clear TODO comment) and consolidate fire-once alert logic to the existing
alert_sent column on the drive table; locate the CREATE TABLE for alert_state
and either delete it or replace it with a documented comment referencing
drive.alert_sent and the existing alert flow so there is no unused schema left
in the script.
- Line 1124: The conditional that sets $growthStr is redundant because both
branches yield $a.GBPerMonth; replace the whole if-expression with a direct
assignment to $growthStr using $a.GBPerMonth (remove the if/else and the
unnecessary "$($a.GBPerMonth)" wrapping) to simplify the code and preserve
behavior for the $growthStr variable.
- Around line 1350-1351: The script currently sets $hostname and $deviceId from
$env:COMPUTERNAME which causes device_id churn when hostnames change; replace
$deviceId assignment to use a stable hardware identifier (for example retrieve
BIOS serial via (Get-CimInstance Win32_BIOS).SerialNumber or UUID via
(Get-CimInstance Win32_ComputerSystemProduct).UUID) so device rows persist
across renames, and update the header comment near the $hostname/$deviceId usage
to document the previous behavior and the change (or, if you choose not to
change code, document the limitation clearly). Ensure you update references that
write/read device IDs to use the new $deviceId symbol consistently (e.g., where
device rows and drive/metric rows are recorded).

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 68013a9 and cc1d179.

📒 Files selected for processing (1)
  • rmm-ninja/ServerGrowthTracking

# Install from PSGallery
Write-Log "PSSQLite module not found - installing from PSGallery..."
try {
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

⚠️ Potential issue | 🟠 Major

SecurityProtocol assignment strips TLS 1.3 and any currently-enabled protocols.

= [Net.SecurityProtocolType]::Tls12 replaces the entire protocol mask. On .NET 5+/PowerShell 7+ environments where TLS 1.3 is enabled by default, this silently disables it for the duration of the process and may break PSGallery calls or future HTTPS requests.

Use a bitwise OR to add TLS 1.2 without removing anything already enabled:

🐛 Proposed fix
-[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rmm-ninja/ServerGrowthTracking` at line 201, The current line overwrites the
entire SecurityProtocol mask and can disable TLS 1.3; change the assignment so
you add TLS 1.2 to the existing mask using a bitwise OR rather than replacing
it. Locate the [Net.ServicePointManager]::SecurityProtocol assignment and update
it to combine the current SecurityProtocol value with the Tls12 flag (use the
PowerShell bitwise OR operation) so existing protocols (e.g., TLS 1.3 on .NET
5+/PowerShell 7+) remain enabled.

Comment on lines +582 to +589
if ($driveData.history) {
    foreach ($entry in $driveData.history) {
        if (-not $entry.timestamp) { continue }
        $ts = $entry.timestamp
        Add-MetricRecord -DriveId $driveId -Timestamp $ts `
            -UsedGB ([double]$entry.usedGB) -FreeGB ([double]$entry.freeGB) `
            -UsagePercent ([double]$entry.usagePercent)
        $migratedPoints++

⚠️ Potential issue | 🟡 Minor

Missing null/existence check before casting legacy metric fields — silent zero insertion.

If a legacy JSON entry is missing usedGB, freeGB, or usagePercent (e.g., a partially-written record), [double]$null silently yields 0.0. A zero usedGB point will pull the regression slope negative and corrupt DaysUntilFull.

🛡️ Proposed fix — skip entries missing required numeric fields
             foreach ($entry in $driveData.history) {
                 if (-not $entry.timestamp) { continue }
+                if ($null -eq $entry.usedGB -or $null -eq $entry.freeGB -or $null -eq $entry.usagePercent) {
+                    Write-VerboseLog "Migration: Skipping entry with missing fields (ts=$($entry.timestamp))"
+                    continue
+                }
                 $ts = $entry.timestamp
                 Add-MetricRecord -DriveId $driveId -Timestamp $ts `
                     -UsedGB ([double]$entry.usedGB) -FreeGB ([double]$entry.freeGB) `
                     -UsagePercent ([double]$entry.usagePercent)
📝 Committable suggestion


Suggested change
 if ($driveData.history) {
     foreach ($entry in $driveData.history) {
         if (-not $entry.timestamp) { continue }
+        if ($null -eq $entry.usedGB -or $null -eq $entry.freeGB -or $null -eq $entry.usagePercent) {
+            Write-VerboseLog "Migration: Skipping entry with missing fields (ts=$($entry.timestamp))"
+            continue
+        }
         $ts = $entry.timestamp
         Add-MetricRecord -DriveId $driveId -Timestamp $ts `
             -UsedGB ([double]$entry.usedGB) -FreeGB ([double]$entry.freeGB) `
             -UsagePercent ([double]$entry.usagePercent)
         $migratedPoints++
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rmm-ninja/ServerGrowthTracking` around lines 582 - 589, The loop over
$driveData.history is casting legacy fields directly which converts $null to
0.0; update the foreach block that reads $entry (and the call to
Add-MetricRecord) to first verify that $entry.usedGB, $entry.freeGB and
$entry.usagePercent are present and non-null (e.g., test with -ne $null or -and
conditions) and skip/continue any $entry missing any of those fields so you do
not cast nulls into zeros before calling Add-MetricRecord (keep using $driveId
and increment $migratedPoints only for successfully accepted entries).

Comment on lines +1039 to +1053
# Legend row at bottom
$legendY = $mTop + $plotH + 30
$legendX = $mLeft
foreach ($item in $legendItems) {
    $label = $item.Label -replace ':$', ''
    if ($item.Type -eq 'OS') { $label += " (OS)" }

    [void]$sb.Append("<line x1=`"$legendX`" y1=`"$legendY`" x2=`"$($legendX + 16)`" y2=`"$legendY`" stroke=`"$($item.Color)`" stroke-width=`"2.5`" stroke-linecap=`"round`"/>")
    [void]$sb.Append("<text x=`"$($legendX + 20)`" y=`"$($legendY + 3)`" fill=`"#475569`" font-size=`"10`">$label</text>")

    $legendX += 65 + ($label.Length * 2)
    if ($legendX -gt ($chartWidth - 80)) {
        $legendX = $mLeft
        $legendY += 16
    }

⚠️ Potential issue | 🟡 Minor

Legend rows that wrap overflow the fixed SVG viewBox height and are clipped.

$legendY increments by 16 px per wrapped row but $chartHeight = 240 is never adjusted. With 5+ drives the second legend row falls outside the viewBox.

Consider either computing the needed legend height before emitting the SVG, or increasing $mBottom dynamically based on legend item count to provide enough padding for all legend rows.
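One way to size the viewBox up front is to simulate the same advance/wrap rule before emitting any SVG. This is a sketch that reuses the variable names from the snippet above (`$legendItems`, `$mLeft`, `$chartWidth`); the base height of 240 and the 16 px row step mirror the values in the review:

```powershell
# Pre-compute how many legend rows will wrap, then grow the SVG height to fit.
# Mirrors the 65 px + 2 px/char advance used when the legend is actually drawn.
$legendRows = 1
$x = $mLeft
foreach ($item in $legendItems) {
    $label = $item.Label -replace ':$', ''
    if ($item.Type -eq 'OS') { $label += " (OS)" }
    $x += 65 + ($label.Length * 2)
    if ($x -gt ($chartWidth - 80)) { $x = $mLeft; $legendRows++ }
}
# Base height covers one legend row; add 16 px per extra wrapped row.
# (May over-allocate one row if the final item triggers the wrap check.)
$chartHeight = 240 + (($legendRows - 1) * 16)
```

The drawing loop then stays unchanged; only the `viewBox`/`height` attributes consume the computed `$chartHeight`.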

Comment on lines +1127 to +1133
if ($daysStr -eq "1825.00") { $daysStr = "5yr+" }
elseif ($daysStr -match '^\d+\.\d+$') {
    $daysVal = [double]$daysStr
    if ($daysVal -ge 365) { $daysStr = "$([math]::Round($daysVal / 365, 1))yr" }
    elseif ($daysVal -ge 30) { $daysStr = "$([math]::Round($daysVal / 30, 1))mo" }
    else { $daysStr = "$([math]::Round($daysVal, 0))d" }
}

⚠️ Potential issue | 🟡 Minor

Magic string "1825.00" is a hardcoded copy of $Script:DAYS_CAP — silent breakage on constant change.

If $Script:DAYS_CAP is ever changed (e.g., to 730 for a 2-year cap), the display branch will never match and all capped drives will show a raw decimal instead of "5yr+".

🐛 Proposed fix — derive the sentinel string from the constant
-        if ($daysStr -eq "1825.00") { $daysStr = "5yr+" }
+        $capStr = ([double]$Script:DAYS_CAP).ToString("F2")
+        $capYears = [math]::Round($Script:DAYS_CAP / 365, 0)
+        if ($daysStr -eq $capStr) { $daysStr = "${capYears}yr+" }
📝 Committable suggestion


Suggested change
-        if ($daysStr -eq "1825.00") { $daysStr = "5yr+" }
+        $capStr = ([double]$Script:DAYS_CAP).ToString("F2")
+        $capYears = [math]::Round($Script:DAYS_CAP / 365, 0)
+        if ($daysStr -eq $capStr) { $daysStr = "${capYears}yr+" }
         elseif ($daysStr -match '^\d+\.\d+$') {
             $daysVal = [double]$daysStr
             if ($daysVal -ge 365) { $daysStr = "$([math]::Round($daysVal / 365, 1))yr" }
             elseif ($daysVal -ge 30) { $daysStr = "$([math]::Round($daysVal / 30, 1))mo" }
             else { $daysStr = "$([math]::Round($daysVal, 0))d" }
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rmm-ninja/ServerGrowthTracking` around lines 1127 - 1133, Replace the
hardcoded sentinel "1825.00" with a value derived from the DAYS_CAP constant so
the branch stays valid if the cap changes: compute a formatted sentinel string
from $Script:DAYS_CAP (e.g. $capStr = "{0:N2}" -f $Script:DAYS_CAP) and use that
in the if check instead of the literal; keep the rest of the logic using
$daysStr/$daysVal unchanged so capped drives still map to the "Xyr+" label when
matched.

Comment on lines +1271 to +1272
Write-Log ([char]0x2550 * 63)
Write-Log ""

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

PowerShell [char] multiplication operator behavior [char]0x2550 * 63 integer or string

💡 Result:

In PowerShell, * is polymorphic: it does numeric multiplication, or it repeats strings/arrays. Which behavior you get is primarily determined by the left operand’s type (so * is not commutative for mixed types). [1]

  • [char]0x2550 is a System.Char, whose value is its 16-bit numeric (ordinal / code-point) value. [2]
  • Therefore, [char]0x2550 * 63 is treated as numeric multiplication, not string repetition:
    • 0x2550 = 9552
    • 9552 * 63 = 601776 (type: numeric / Int32)

If you want a string of that character repeated 63 times, force the left operand to be a string:

([string][char]0x2550) * 63
# or
([char]0x2550).ToString() * 63

Sources: [1] [3] [2]


🏁 Script executed:

# Find the file and verify lines exist
find . -type f -name "*ServerGrowthTracking*" 2>/dev/null | head -20

Repository: DTC-Inc/msp-script-library



🏁 Script executed:

# Check file length first
wc -l ./rmm-ninja/ServerGrowthTracking

# Then read the specific lines
sed -n '1269,1275p' ./rmm-ninja/ServerGrowthTracking
sed -n '1297,1302p' ./rmm-ninja/ServerGrowthTracking



🏁 Script executed:

# Search for all instances of [char] followed by * in the file
rg -n '\[char\][^*]*\*\s*\d+' ./rmm-ninja/ServerGrowthTracking



🏁 Script executed:

# Check line 1340 context
sed -n '1338,1344p' ./rmm-ninja/ServerGrowthTracking



[char]0x2550 * 63 and [char]0x2500 * 63 evaluate to integers (601776 and 596736), not decorative separator lines.

In PowerShell, [char] is a numeric type; when used with the * operator, it triggers numeric multiplication, not string repetition. [char]0x2550 is 9552; 9552 × 63 = 601776. The same defect occurs on line 1299 ([char]0x2500 * 63 = 596736) and line 1340.

To produce the intended decorative lines, convert the char to a string first:

🐛 Proposed fix
-    Write-Log ([char]0x2550 * 63)
+    Write-Log ("$([char]0x2550)" * 63)
-    Write-Log ([char]0x2500 * 63)
+    Write-Log ("$([char]0x2500)" * 63)

Apply both fixes at lines 1271, 1299, and 1340.
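The difference is easy to confirm at a PowerShell prompt — the left operand's type selects numeric multiplication versus string repetition:

```powershell
$c = [char]0x2550
$c * 63              # left operand is System.Char -> numeric: 9552 * 63 = 601776
([string]$c) * 63    # left operand is String -> 63 repeated box-drawing chars
("$c" * 63).Length   # interpolation also yields a String first -> 63
-join (1..63 | ForEach-Object { $c })   # equivalent repeat via -join
```

Any of the string-first forms works; the fix only needs to guarantee the `*` sees a `[string]` on its left.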

📝 Committable suggestion


Suggested change
-    Write-Log ([char]0x2550 * 63)
+    Write-Log ("$([char]0x2550)" * 63)
     Write-Log ""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rmm-ninja/ServerGrowthTracking` around lines 1271 - 1272, The Write-Log calls
using expressions like ([char]0x2550 * 63) are doing numeric multiplication
instead of producing repeated characters; replace each occurrence (the Write-Log
lines that use [char]0x2550 and [char]0x2500) with a string-repeat expression
such as: -join (1..63 | ForEach-Object {[char]0x2550}) (and similarly use
[char]0x2500 for the other lines) so Write-Log receives a string of 63 repeated
decorative characters instead of an integer.

Comment on lines +1429 to +1435
foreach ($drive in $currentDrives) {
    $driveId = Save-DriveRecord -DeviceId $deviceId -DriveLetter $drive.Letter `
        -VolumeLabel $drive.VolumeLabel -TotalSizeGB $drive.TotalSizeGB `
        -DriveType $drive.DriveType -Status "Online"

    Add-MetricRecord -DriveId $driveId -Timestamp $now `
        -UsedGB $drive.UsedGB -FreeGB $drive.FreeGB -UsagePercent $drive.UsagePercent

⚠️ Potential issue | 🟠 Major

No daily deduplication — multiple runs per day corrupt the regression.

Add-MetricRecord inserts unconditionally on every execution. If the script is re-run (manual test, schedule overlap, re-deploy), multiple rows for the same day are inserted. OLS regression treats each row as an independent sample, over-weighting that day and producing a biased slope and DaysUntilFull.

Add a same-day guard before inserting:

🐛 Proposed fix
+    # Only insert one metric per drive per calendar day
+    $todayPrefix = $now.Substring(0, 10)   # "yyyy-MM-dd"
+    $existing_today = Invoke-SqliteQuery -DataSource $Script:DB_PATH `
+        -Query "SELECT id FROM metric WHERE drive_id = `@driveId` AND timestamp LIKE `@prefix`" `
+        -SqlParameters @{ driveId = $driveId; prefix = "$todayPrefix%" }
+    if ($existing_today) {
+        Write-VerboseLog "Drive $($drive.Letter): Metric already recorded today — skipping duplicate insert"
+    } else {
         Add-MetricRecord -DriveId $driveId -Timestamp $now `
             -UsedGB $drive.UsedGB -FreeGB $drive.FreeGB -UsagePercent $drive.UsagePercent
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rmm-ninja/ServerGrowthTracking` around lines 1429 - 1435, The loop inserts
metrics every run causing multiple rows per drive per day; before calling
Add-MetricRecord for each $drive (in the foreach over $currentDrives using
$deviceId and $now) check whether a metric already exists for that drive for the
same calendar day and skip or update instead of inserting; locate the
Add-MetricRecord usage and add a guard that queries existing metrics (e.g., via
a Get-MetricRecordByDriveAndDate or a DB lookup using driveId and date-truncated
$now) and only call Add-MetricRecord when no same-day record exists (or call an
Update-MetricRecord when you want to replace the day's values).

Resolves all 9 unresolved findings from CodeRabbit review round 5 (review ID 3852355772) plus 7 additional issues caught during self-review. No behavioral changes to the monitoring logic — all fixes target correctness, safety, and robustness.

Critical bug fix:

- `[char]0x2550 * 63` performed numeric multiplication (produced 601776) instead of string repetition — fixed to `"$([char]0x2550)" * 63` in all three Write-Summary separator lines

Data integrity (3 fixes):

- Added a `Set-DailyMetric` function that checks for an existing same-day metric before inserting, preventing duplicate rows when the script runs more than once per day (e.g. during testing or if the scheduled task fires twice)
- Guarded against `[double]$null` silently casting to 0.0 in legacy JSON migration — incomplete entries are now skipped with verbose logging instead of corrupting regression data
- Changed the migration file rename from `-ErrorAction SilentlyContinue` to `-ErrorAction Stop` so a failed rename surfaces in the try/catch rather than silently allowing re-migration on the next run

Security (2 fixes):

- `SecurityProtocol = Tls12` overwrote the entire protocol mask, disabling TLS 1.3 on .NET 5+ — now uses `-bor` to add TLS 1.2 without stripping existing protocols
- Volume labels, drive letters, and the hostname are now HTML-encoded via `[System.Net.WebUtility]::HtmlEncode()` before interpolation into the WYSIWYG report to prevent XSS injection

Pipeline safety:

- Added `| Out-Null` to all 19 DML `Invoke-SqliteQuery` calls (INSERT, UPDATE, DELETE, CREATE TABLE, CREATE INDEX) to prevent implicit output from leaking into function return values and corrupting callers

Correctness (2 fixes):

- Reverted `SELECT last_insert_rowid()` back to a re-query by UNIQUE constraint — `last_insert_rowid()` is connection-scoped and PSSQLite opens a new connection per `-DataSource` call, making the value unreliable
- Fixed the `Set-DailyMetric` day boundary from `< T23:59:59` to `< next-day T00:00:00` to eliminate an off-by-one at the day boundary

Maintainability (3 fixes):

- The hardcoded `"1825.00"` sentinel in the days-until-full display is now derived from the `$Script:DAYS_CAP` constant
- The SVG chart viewBox height is now computed dynamically from the legend row count, preventing legend clipping with 5+ drives
- `$removedCount` from `Remove-StaleDrives` is now logged instead of silently discarded

Documentation/style:

- Added a TODO comment to the `alert_state` table explaining it is scaffolding for future centralized alerting
- Simplified the redundant conditional in `$growthStr` (both branches returned identical values)
- Documented the `$env:COMPUTERNAME` device_id hostname-change limitation in the script header
- The legacy JSON path now uses `$env:ProgramData` instead of a hardcoded `C:\ProgramData`
- Moved `$Script:LOG_PRUNE_THRESHOLD_KB` from inside the `Save-LogFile` function body to the constants section
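The `Set-DailyMetric` behavior described above — one row per drive per calendar day, with a next-midnight upper bound and the batched DELETE+INSERT shape — can be sketched as follows. `Invoke-SqliteQuery` is PSSQLite's query cmdlet; the `metric` table and column names are assumptions inferred from the review excerpts:

```powershell
# Sketch: one metric row per drive per calendar day, replaced on re-run.
function Set-DailyMetric {
    param(
        [Parameter(Mandatory)][int]$DriveId,
        [Parameter(Mandatory)][datetime]$Timestamp,
        [double]$UsedGB, [double]$FreeGB, [double]$UsagePercent
    )

    # Day window is [midnight, next midnight) — avoids the < T23:59:59 off-by-one.
    $dayStart = $Timestamp.Date.ToString('yyyy-MM-ddTHH:mm:ss')
    $nextDay  = $Timestamp.Date.AddDays(1).ToString('yyyy-MM-ddTHH:mm:ss')

    # Delete any same-day row, then insert, in a single batch so a second run
    # on the same day updates the day's values instead of duplicating them.
    Invoke-SqliteQuery -DataSource $Script:DB_PATH -Query @"
DELETE FROM metric
 WHERE drive_id = @driveId AND timestamp >= @dayStart AND timestamp < @nextDay;
INSERT INTO metric (drive_id, timestamp, used_gb, free_gb, usage_percent)
VALUES (@driveId, @ts, @used, @free, @pct);
"@ -SqlParameters @{
        driveId = $DriveId; dayStart = $dayStart; nextDay = $nextDay
        ts   = $Timestamp.ToString('yyyy-MM-ddTHH:mm:ss')
        used = $UsedGB; free = $FreeGB; pct = $UsagePercent
    } | Out-Null
}
```

Keeping exactly one sample per day also gives the OLS regression evenly weighted points, which is what the dedup finding was protecting.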
Refactor drive metric insertion logic to use atomic DELETE and INSERT in a single batch, improving efficiency and preventing duplicate entries. Update HTML encoding for user-sourced values to enhance security.