
Fix README and update manifests for C6 builds #105

Merged
harpua555 merged 10 commits into main from dev
Mar 1, 2026
Conversation

harpua555 (Owner) commented Mar 1, 2026

Summary by CodeRabbit

  • New Features

    • Added support for three experimental ESP32-C6 board variants.
  • Documentation

    • Added Stress mode docs and expanded contributor guidance for simulator/testing.
  • Improvements

    • Response caching and deduplication to reduce server load.
    • Improved thread-safety and logging robustness.
    • Enabled additional build configuration flags for diagnostics.
  • Tests

    • Added comprehensive thread-safety and long-running soak test suites with sanitizer support.
  • Chores

    • Broadened CI/manual build environment options and release asset descriptions.

harpua555 and others added 9 commits February 6, 2026 11:47
Logger fixes:
- clearLogs(): move index resets inside critical section (TOCTOU race)
- getLogsAsText(): snapshot indices under mutex (torn read race)
- logInternal(): move generateUUID() inside critical section (duplicate UUID race)

WebServer fixes:
- Replace String caches + cacheMutex with lock-free double-buffer pattern
  (CachedResponse struct eliminates malloc under spinlock)
- Cache discovery JSON on main loop instead of copying vector from async handler
- Cache version JSON at startup to avoid LittleFS reads from async handler

Testing:
- Add test/test_thread_safety.cpp with 6 POSIX-thread stress tests
- Add --sanitize (ASan) and --tsan (TSan) flags to build_and_run_all_tests.sh

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
6 soak tests simulating extended firmware runtime:
- Full pipeline soak (2M iterations, ~139h simulated, 4032 print cycles)
- Logger circular buffer soak (1M entries with periodic clears)
- CachedResponse double-buffer soak (1M publishes)
- millis() rollover handling (ULONG_MAX wraparound)
- Rapid print start/stop cycling (100K cycles)
- Extrusion accumulation precision over long prints

All tests pass with ASan (no memory errors detected).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
coderabbitai bot (Contributor) commented Mar 1, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9c85549 and b7b096c.

📒 Files selected for processing (2)
  • README.md
  • platformio.ini

Walkthrough

Adds three ESP32‑C6 board entries and expands CI build envs; replaces several heap-allocated Strings with fixed-size C buffers; tightens Logger synchronization; implements cached, thread-safe WebServer responses and pending command queueing; enables additional build_flags and core‑dump config; and adds extensive thread-safety and soak test suites plus sanitizer support.

Changes

  • CI & Release Workflow (.github/workflows/manual-build-artifacts.yml):
    Expanded board_envs metadata for workflow_dispatch to include many additional PlatformIO envs (no artifact or control-flow logic changes).
  • Board Registry & Distributor Docs (distributor/boards.json, distributor/README.md):
    Appended three new experimental board entries (esp32c6, esp32c6m, seeed_esp32c6; v0.5.0) and updated the distributor README to include the new envs and note optional bootloader/partitions artifacts.
  • Build Configuration (platformio.ini):
    Enabled the FILAMENT_RUNOUT_PIN and MOVEMENT_SENSOR_PIN build flags (uncommented) and restored the ESP core-dump CONFIG_ flags; minor comment typo fix.
  • String Handling Refactor (src/ElegooCC.h, src/ElegooCC.cpp, test/test_elegoo_cc.cpp):
    Replaced String members in printer_info_t with fixed-size char arrays (mainboardID[64], taskId[64], filename[128]) and updated code/tests to use bounded strlcpy/strcmp.
  • Logger Synchronization (src/Logger.cpp):
    Moved UUID generation inside the mutex to ensure atomic increments; snapshot and clear operations now perform index/clear actions within critical sections; streamLogs copies entries under per-entry locking.
  • WebServer Caching & Command Queueing (src/WebServer.h, src/WebServer.cpp):
    Added a CRC32 helper, cached JSON responses (double-buffered), pending command flags/queue for settings/pause/resume/discovery, an SSE client cap and deduplication, processPendingCommands/refreshCachedResponses/cleanupSSEClients methods, and switched the status JSON build to a preallocated StaticJsonDocument.
  • Main Loop / Stress Mode (src/main.cpp, README.md):
    Introduced kMainLoopDelayMs (0 under STRESS_MODE, otherwise 1) and conditional STRESS_MODE logging; documented Stress mode usage and contributor guidance on the pulse simulator.
  • Test Harness & Stress Tests (test/build_and_run_all_tests.sh, test/test_soak.cpp, test/test_thread_safety.cpp, test/test_elegoo_cc.cpp, test/tsan_output.txt):
    Added sanitizer flags and options to the test runner; added the test_thread_safety and test_soak stress suites (POSIX-thread mocks, logger/CachedResponse stress tests, soak scenarios, rollover and precision tests); updated test_elegoo_cc to match the refactor; added a ThreadSanitizer output artifact.
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Title check — ⚠️ Warning. The title focuses narrowly on README and manifest updates for C6 builds, but the changeset also includes substantial refactoring across WebServer, Logger, and ElegooCC plus major test-infrastructure additions, which represent the most complex and important changes. Resolution: consider a more comprehensive title, such as "Add thread-safe caching, refactor logging synchronization, and expand C6 support" or "Implement SSE deduplication, cache layers, and thread-safety improvements with C6 board support".
  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 25.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (1 passed)
  • Description Check — ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
platformio.ini (2)

200-200: ⚠️ Potential issue | 🟡 Minor

Typo: "synchrnous" should be "synchronous".

📝 Proposed fix
 lib_ignore = 
-    WebServer ; arduino 3.x has its own synchrnous WebServer implementation that must be removed
+    WebServer ; arduino 3.x has its own synchronous WebServer implementation that must be removed
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@platformio.ini` at line 200, Fix the typo in the platformio.ini comment:
change "synchrnous" to "synchronous" in the line containing "WebServer ; arduino
3.x has its own synchrnous WebServer implementation that must be removed" so the
comment reads "WebServer ; arduino 3.x has its own synchronous WebServer
implementation that must be removed".

39-41: ⚠️ Potential issue | 🔴 Critical

Missing indentation on build_flags continuation lines causes coredump flags to be excluded from the build.

Lines 39-41 lack proper indentation and are not recognized as part of the build_flags value. In PlatformIO's INI format, multi-line values require continuation lines to be indented. Without indentation, these coredump configuration flags are silently dropped from the build, making the configuration ineffective.

🐛 Proposed fix
     ; Coredump configuration - saves crash info to flash partition for analysis
     ; -D CONFIG_APP_REPRODUCIBLE_BUILD=y  ; commented out because 
     ;   Firmware ThumbprintFilesystemThumbprint,Project Status
     ;   and Version displayed in the WebUI about tab might depend on them
--D CONFIG_ESP_COREDUMP_ENABLE_TO_FLASH=1
--D CONFIG_ESP_COREDUMP_DATA_FORMAT_ELF=1
--D CONFIG_ESP_COREDUMP_CHECKSUM_CRC32=1
+    -D CONFIG_ESP_COREDUMP_ENABLE_TO_FLASH=1
+    -D CONFIG_ESP_COREDUMP_DATA_FORMAT_ELF=1
+    -D CONFIG_ESP_COREDUMP_CHECKSUM_CRC32=1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@platformio.ini` around lines 39 - 41, The three coredump preprocessor flags
(-D CONFIG_ESP_COREDUMP_ENABLE_TO_FLASH=1, -D
CONFIG_ESP_COREDUMP_DATA_FORMAT_ELF=1, -D CONFIG_ESP_COREDUMP_CHECKSUM_CRC32=1)
are not being included because their lines are not indented as continuations of
the build_flags value; edit the platformio.ini so these three -D entries are
indented under the existing build_flags key (so they are treated as continuation
lines and included in the build), ensuring the build_flags stanza contains those
flags for the build system to pick up.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Around line 117-124: The README's Stress mode example uses a top-level
"build_flags = -DSTRESS_MODE", which can overwrite existing flags; update the
documentation to instruct users to append the flag to an environment's
build_flags while preserving existing flags via the common expansion
${common.build_flags}, so the guidance shows -DSTRESS_MODE added alongside the
existing flags rather than replacing them.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 05fe365 and 9c85549.

⛔ Files ignored due to path filters (1)
  • test/tsan_full.log is excluded by !**/*.log
📒 Files selected for processing (22)
  • .github/workflows/manual-build-artifacts.yml
  • README.md
  • distributor/README.md
  • distributor/boards.json
  • platformio.ini
  • src/ElegooCC.cpp
  • src/ElegooCC.h
  • src/Logger.cpp
  • src/WebServer.cpp
  • src/WebServer.h
  • src/main.cpp
  • test/build_and_run_all_tests.sh
  • test/test_elegoo_cc.cpp
  • test/test_soak
  • test/test_soak.cpp
  • test/test_thread_safety
  • test/test_thread_safety.cpp
  • test/test_thread_safety_asan
  • test/test_thread_safety_tsan
  • test/test_thread_safety_tsan2
  • test/test_thread_safety_tsan3
  • test/tsan_output.txt

Comment on lines +146 to +166

portENTER_CRITICAL(&pendingMutex);
bool alreadyPending = pendingSettingsUpdate;
portEXIT_CRITICAL(&pendingMutex);

if (alreadyPending)
{
    request->send(429, "application/json",
                  "{\"error\":\"Settings update already pending\"}");
    return;
}

portENTER_CRITICAL(&pendingMutex);
pendingSettingsDoc.clear();
JsonObject src = json.as<JsonObject>();
JsonObject dst = pendingSettingsDoc.to<JsonObject>();
for (JsonPair kv : src)
{
    dst[kv.key()] = kv.value();
}
pendingSettingsUpdate = true;
portEXIT_CRITICAL(&pendingMutex);

⚠️ Potential issue | 🟠 Major

Make pending-settings admission atomic.

Lines 146-148 and 157-166 create a check-then-set gap: two concurrent /update_settings requests can both pass the check, and one silently overwrites the other.

💡 Suggested fix (single critical section for check+enqueue)
-            portENTER_CRITICAL(&pendingMutex);
-            bool alreadyPending = pendingSettingsUpdate;
-            portEXIT_CRITICAL(&pendingMutex);
-
-            if (alreadyPending)
-            {
-                request->send(429, "application/json",
-                              "{\"error\":\"Settings update already pending\"}");
-                return;
-            }
-
-            portENTER_CRITICAL(&pendingMutex);
-            pendingSettingsDoc.clear();
-            JsonObject src = json.as<JsonObject>();
-            JsonObject dst = pendingSettingsDoc.to<JsonObject>();
-            for (JsonPair kv : src)
-            {
-                dst[kv.key()] = kv.value();
-            }
-            pendingSettingsUpdate = true;
-            portEXIT_CRITICAL(&pendingMutex);
+            bool accepted = false;
+            portENTER_CRITICAL(&pendingMutex);
+            if (!pendingSettingsUpdate)
+            {
+                pendingSettingsDoc.clear();
+                JsonObject src = json.as<JsonObject>();
+                JsonObject dst = pendingSettingsDoc.to<JsonObject>();
+                for (JsonPair kv : src)
+                {
+                    dst[kv.key()] = kv.value();
+                }
+                pendingSettingsUpdate = true;
+                accepted = true;
+            }
+            portEXIT_CRITICAL(&pendingMutex);
+
+            if (!accepted)
+            {
+                request->send(429, "application/json",
+                              "{\"error\":\"Settings update already pending\"}");
+                return;
+            }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/WebServer.cpp` around lines 146 - 166, The current check-then-set uses
two separate critical sections allowing a race; acquire pendingMutex once to
perform the entire admission (read pendingSettingsUpdate, and if false, clear
pendingSettingsDoc, copy src into dst/pendingSettingsDoc and set
pendingSettingsUpdate = true) before releasing the mutex; i.e., move the initial
check (the read of pendingSettingsUpdate and the request->send 429 early return)
inside the same portENTER_CRITICAL/portEXIT_CRITICAL block that performs the
pendingSettingsDoc.clear(), JsonObject src = json.as<JsonObject>(), JsonObject
dst = pendingSettingsDoc.to<JsonObject>(), the copy loop over JsonPair, and
pendingSettingsUpdate = true so the check-and-enqueue is atomic.

Comment on lines +236 to +244
statusEvents.onConnect([this](AsyncEventSourceClient *client) {
if (statusEvents.count() > kMaxSSEClients)
{
// Over limit - the library already added the client, so we
// just won't send data and it will be cleaned up on next sweep.
logger.logf("SSE client rejected (count=%d, max=%d)",
statusEvents.count(), kMaxSSEClients);
return;
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the actual code in src/WebServer.cpp around lines 619-634
wc -l src/WebServer.cpp

Repository: harpua555/OpenFilamentSensor

Length of output: 93


🏁 Script executed:

# Read the relevant lines mentioned in the review
sed -n '619,634p' src/WebServer.cpp

Repository: harpua555/OpenFilamentSensor

Length of output: 498


🏁 Script executed:

# Also check the initial snippet mentioned (lines 236-244)
sed -n '236,244p' src/WebServer.cpp

Repository: harpua555/OpenFilamentSensor

Length of output: 508


🏁 Script executed:

# Search for statusEvents.close() calls in the file
rg 'statusEvents\.close\(' src/WebServer.cpp -n -A 5 -B 5

Repository: harpua555/OpenFilamentSensor

Length of output: 401


🏁 Script executed:

# Search for AsyncEventSource includes and usage in the codebase
rg 'AsyncEventSource' src/ -n

Repository: harpua555/OpenFilamentSensor

Length of output: 263


🏁 Script executed:

# Check if ESPAsyncWebServer is in the project
find . -name "*.h" -o -name "*.cpp" | xargs grep -l "AsyncEventSource" | head -20

Repository: harpua555/OpenFilamentSensor

Length of output: 109


🏁 Script executed:

# Look for the library directory or dependency info
cat platformio.ini 2>/dev/null || cat package.json 2>/dev/null || cat CMakeLists.txt 2>/dev/null | head -50

Repository: harpua555/OpenFilamentSensor

Length of output: 7716


🌐 Web query:

ESPAsyncWebServer v3.7.3 AsyncEventSource close() method behavior documentation

💡 Result:

In ESP32Async/ESPAsyncWebServer v3.7.3, the “close” behavior for Server‑Sent Events (SSE) is defined in the library source (there isn’t separate narrative documentation).

  • AsyncEventSourceClient::close(): if the client is still present, it simply calls the underlying TCP client’s AsyncClient::close() (no “force” flag, no explicit queue flush, no direct removal from the server’s client list). [1]
  • AsyncEventSource::close() (the SSE endpoint): iterates over all tracked SSE clients and calls c->close() on each that reports connected(). It does not clear _clients itself; client objects get removed later via the disconnect path (_handleDisconnect) when the TCP disconnect occurs. [1]
  • Related: on timeout, the code uses _client->close(true) (force close) in _onTimeout(), which is different from close()’s default behavior. [1]
  • The header documents intent at a high level: AsyncEventSource::close() is “close all connected clients”, and AsyncEventSourceClient::close() is “close client’s connection”. [2]

Sources: AsyncEventSource.cpp (v3.7.3) [1], AsyncEventSource.h (v3.7.3) [2].


Client-cap cleanup disconnects all SSE clients, not just the excess connection.

Line 633 calls statusEvents.close() when count exceeds the cap, which closes every connected client. This causes unnecessary churn—healthy clients get disconnected and must reconnect repeatedly. The cleanup runs every 30 seconds (line 622), exacerbating the problem.

AsyncEventSource::close() iterates over all clients and disconnects each one. Instead, disconnect only the newly connected over-limit client by calling client->close() directly on the individual client pointer available in the onConnect callback (line 236).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/WebServer.cpp` around lines 236 - 244, The onConnect handler for
statusEvents currently calls statusEvents.close(), which disconnects all SSE
clients; instead, modify the lambda registered in statusEvents.onConnect (the
callback with signature [this](AsyncEventSourceClient *client)) to call
client->close() on the individual AsyncEventSourceClient pointer when
statusEvents.count() > kMaxSSEClients so only the new excess connection is
disconnected; update the log and return path accordingly to avoid using
statusEvents.close().

Comment on lines +525 to +527
if (jsonObj.containsKey("detection_ratio_threshold"))
settingsManager.setDetectionRatioThreshold(jsonObj["detection_ratio_threshold"].as<int>());
if (jsonObj.containsKey("detection_hard_jam_mm"))

⚠️ Potential issue | 🟠 Major

Parse detection_ratio_threshold as float, not int.

Line 526 truncates fractional values (0.70 becomes 0), which can invalidate persisted detection tuning.

💡 Suggested fix
         if (jsonObj.containsKey("detection_ratio_threshold"))
-            settingsManager.setDetectionRatioThreshold(jsonObj["detection_ratio_threshold"].as<int>());
+            settingsManager.setDetectionRatioThreshold(jsonObj["detection_ratio_threshold"].as<float>());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/WebServer.cpp` around lines 525 - 527, The value for
"detection_ratio_threshold" is being parsed as an int causing truncation; change
the parsing call in the JSON handling to use a floating-point type (e.g.,
jsonObj["detection_ratio_threshold"].as<float>() or as<double>()) and pass that
float/double into settingsManager.setDetectionRatioThreshold, updating or
overloading setDetectionRatioThreshold if it currently only accepts an integer
so the stored/persisted threshold retains fractional precision.

@@ -0,0 +1 @@
FATAL: ThreadSanitizer: unexpected memory mapping 0x640c7f664000-0x640c7f667000

⚠️ Potential issue | 🟡 Minor

Consider excluding this test artifact from version control.

This file contains a ThreadSanitizer fatal error (unexpected memory mapping) which is typically an environment/ASLR configuration issue rather than a code defect. Committing test output artifacts to the repository may cause confusion and isn't a common practice.

If this is intentional (e.g., documenting a known issue), consider adding context in a comment or README. Otherwise, consider adding test/*.txt or test/tsan_output.txt to .gitignore.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/tsan_output.txt` at line 1, The committed test artifact
test/tsan_output.txt contains ThreadSanitizer output and should be removed from
version control; delete the file from the repo and add an ignore rule (e.g., add
test/*.txt or specifically test/tsan_output.txt) to .gitignore so future runs
don’t get committed, or alternatively, if you intend to keep this as
documentation, move it to a docs/ folder and add a brief note in README
explaining the intentional retention.

@harpua555 harpua555 merged commit 2df4d7e into main Mar 1, 2026
2 of 3 checks passed
