Logger fixes:
- clearLogs(): move index resets inside the critical section (TOCTOU race)
- getLogsAsText(): snapshot indices under the mutex (torn-read race)
- logInternal(): move generateUUID() inside the critical section (duplicate-UUID race)

WebServer fixes:
- Replace String caches + cacheMutex with a lock-free double-buffer pattern (CachedResponse struct eliminates malloc under a spinlock)
- Cache discovery JSON on the main loop instead of copying the vector from the async handler
- Cache version JSON at startup to avoid LittleFS reads from the async handler

Testing:
- Add test/test_thread_safety.cpp with 6 POSIX-thread stress tests
- Add --sanitize (ASan) and --tsan (TSan) flags to build_and_run_all_tests.sh

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
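The lock-free double-buffer pattern named above (writer fills the inactive slot, then atomically flips the slot index so readers never see a half-written buffer and no allocation happens under a spinlock) can be sketched roughly as follows. The struct and field names are illustrative, not the project's actual CachedResponse:

```cpp
#include <atomic>
#include <cstring>
#include <string>

// Illustrative sketch of a double-buffered response cache:
// one writer (main loop) publishes, readers (async handlers) copy out.
struct CachedResponse {
    static constexpr size_t kBufSize = 256;
    char buffers[2][kBufSize];   // two fixed-size slots, no heap allocation
    std::atomic<int> active{0};  // index of the slot readers may use

    // Writer: fill the inactive slot, then flip the index. The release
    // store makes the filled bytes visible before the new index is seen.
    void publish(const char* json) {
        int inactive = 1 - active.load(std::memory_order_relaxed);
        std::strncpy(buffers[inactive], json, kBufSize - 1);
        buffers[inactive][kBufSize - 1] = '\0';
        active.store(inactive, std::memory_order_release);
    }

    // Reader: acquire-load the index and copy out the current snapshot.
    std::string read() const {
        int idx = active.load(std::memory_order_acquire);
        return std::string(buffers[idx]);
    }
};
```

Note the classic double-buffer caveat: this is safe for a single writer and readers that copy promptly; if the writer can republish while a reader is still mid-copy of the old slot, a sequence counter (seqlock) is needed on top.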
6 soak tests simulating extended firmware runtime:
- Full pipeline soak (2M iterations, ~139h simulated, 4032 print cycles)
- Logger circular buffer soak (1M entries with periodic clears)
- CachedResponse double-buffer soak (1M publishes)
- millis() rollover handling (ULONG_MAX wraparound)
- Rapid print start/stop cycling (100K cycles)
- Extrusion accumulation precision over long prints

All tests pass under ASan (no memory errors detected).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
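The millis() rollover soak exercises the standard unsigned-subtraction idiom: as long as elapsed time is computed as `now - start` in unsigned arithmetic, wraparound past ULONG_MAX is handled automatically. A minimal sketch, using uint32_t in place of Arduino's unsigned long (the function name is illustrative):

```cpp
#include <cstdint>

// Rollover-safe elapsed-time check: unsigned subtraction wraps modulo 2^32,
// so (now - start) is correct even after `now` has rolled over past zero.
bool intervalElapsed(uint32_t now, uint32_t start, uint32_t interval_ms) {
    return (now - start) >= interval_ms;
}
```

The anti-pattern this avoids is comparing absolute timestamps (`now >= start + interval_ms`), which breaks at the wrap boundary.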
⚠️ Caution: Review failed — the pull request is closed.

ℹ️ Recent review info
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
Walkthrough

Adds three ESP32-C6 board entries and expands CI build envs; replaces several heap-allocated Strings with fixed-size C buffers; tightens Logger synchronization; implements cached, thread-safe WebServer responses and pending-command queueing; enables additional build_flags and core-dump config; and adds extensive thread-safety and soak test suites plus sanitizer support.

Changes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
platformio.ini (2)
200-200: ⚠️ Potential issue | 🟡 Minor — Typo: "synchrnous" should be "synchronous".
📝 Proposed fix
 lib_ignore =
-	WebServer ; arduino 3.x has its own synchrnous WebServer implementation that must be removed
+	WebServer ; arduino 3.x has its own synchronous WebServer implementation that must be removed

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platformio.ini` at line 200, Fix the typo in the platformio.ini comment: change "synchrnous" to "synchronous" in the line containing "WebServer ; arduino 3.x has its own synchrnous WebServer implementation that must be removed" so the comment reads "WebServer ; arduino 3.x has its own synchronous WebServer implementation that must be removed".
39-41: ⚠️ Potential issue | 🔴 Critical — Missing indentation on build_flags continuation lines causes coredump flags to be excluded from the build.
Lines 39-41 lack proper indentation and are not recognized as part of the build_flags value. In PlatformIO's INI format, multi-line values require continuation lines to be indented. Without indentation, these coredump configuration flags are silently dropped from the build, making the configuration ineffective.

🐛 Proposed fix
 ; Coredump configuration - saves crash info to flash partition for analysis
 ; -D CONFIG_APP_REPRODUCIBLE_BUILD=y ; commented out because
 ;   Firmware Thumbprint, Filesystem Thumbprint, Project Status
 ;   and Version displayed in the WebUI about tab might depend on them
--D CONFIG_ESP_COREDUMP_ENABLE_TO_FLASH=1
--D CONFIG_ESP_COREDUMP_DATA_FORMAT_ELF=1
--D CONFIG_ESP_COREDUMP_CHECKSUM_CRC32=1
+    -D CONFIG_ESP_COREDUMP_ENABLE_TO_FLASH=1
+    -D CONFIG_ESP_COREDUMP_DATA_FORMAT_ELF=1
+    -D CONFIG_ESP_COREDUMP_CHECKSUM_CRC32=1

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@platformio.ini` around lines 39 - 41, The three coredump preprocessor flags (-D CONFIG_ESP_COREDUMP_ENABLE_TO_FLASH=1, -D CONFIG_ESP_COREDUMP_DATA_FORMAT_ELF=1, -D CONFIG_ESP_COREDUMP_CHECKSUM_CRC32=1) are not being included because their lines are not indented as continuations of the build_flags value; edit the platformio.ini so these three -D entries are indented under the existing build_flags key (so they are treated as continuation lines and included in the build), ensuring the build_flags stanza contains those flags for the build system to pick up.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 117-124: The README's Stress mode example uses a top-level
"build_flags = -DSTRESS_MODE" which can overwrite existing flags; update the
documentation to instruct users to append the flag in an environment's
build_flags and preserve existing flags (e.g., reference "build_flags",
"-DSTRESS_MODE" and the common expansion "${common.build_flags}") so the
guidance shows adding -DSTRESS_MODE to an environment's build_flags while
including existing flags via ${common.build_flags}.
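Concretely, the README guidance above amounts to something like the following platformio.ini fragment. The environment names are illustrative; only the use of `build_flags`, `-DSTRESS_MODE`, and the `${common.build_flags}` expansion comes from the review comment:

```ini
; Stress mode: append the flag in an environment's build_flags,
; preserving the existing common flags instead of overwriting them.
[env:esp32c6_stress]
extends = env:esp32c6
build_flags =
    ${common.build_flags}
    -DSTRESS_MODE
```

A bare top-level `build_flags = -DSTRESS_MODE` would replace the inherited flag list; the `${common.build_flags}` interpolation keeps it.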
In `@src/WebServer.cpp`:
- Around line 525-527: The value for "detection_ratio_threshold" is being parsed
as an int causing truncation; change the parsing call in the JSON handling to
use a floating-point type (e.g.,
jsonObj["detection_ratio_threshold"].as<float>() or as<double>()) and pass that
float/double into settingsManager.setDetectionRatioThreshold, updating or
overloading setDetectionRatioThreshold if it currently only accepts an integer
so the stored/persisted threshold retains fractional precision.
- Around line 146-166: The current check-then-set uses two separate critical
sections allowing a race; acquire pendingMutex once to perform the entire
admission (read pendingSettingsUpdate, and if false, clear pendingSettingsDoc,
copy src into dst/pendingSettingsDoc and set pendingSettingsUpdate = true)
before releasing the mutex; i.e., move the initial check (the read of
pendingSettingsUpdate and the request->send 429 early return) inside the same
portENTER_CRITICAL/portEXIT_CRITICAL block that performs the
pendingSettingsDoc.clear(), JsonObject src = json.as<JsonObject>(), JsonObject
dst = pendingSettingsDoc.to<JsonObject>(), the copy loop over JsonPair, and
pendingSettingsUpdate = true so the check-and-enqueue is atomic.
- Around line 236-244: The onConnect handler for statusEvents currently calls
statusEvents.close(), which disconnects all SSE clients; instead, modify the
lambda registered in statusEvents.onConnect (the callback with signature
[this](AsyncEventSourceClient *client)) to call client->close() on the
individual AsyncEventSourceClient pointer when statusEvents.count() >
kMaxSSEClients so only the new excess connection is disconnected; update the log
and return path accordingly to avoid using statusEvents.close().
In `@test/tsan_output.txt`:
- Line 1: The committed test artifact test/tsan_output.txt contains
ThreadSanitizer output and should be removed from version control; delete the
file from the repo and add an ignore rule (e.g., add test/*.txt or specifically
test/tsan_output.txt) to .gitignore so future runs don’t get committed, or
alternatively, if you intend to keep this as documentation, move it to a docs/
folder and add a brief note in README explaining the intentional retention.
---
Outside diff comments:
In `@platformio.ini`:
- Line 200: Fix the typo in the platformio.ini comment: change "synchrnous" to
"synchronous" in the line containing "WebServer ; arduino 3.x has its own
synchrnous WebServer implementation that must be removed" so the comment reads
"WebServer ; arduino 3.x has its own synchronous WebServer implementation that
must be removed".
- Around line 39-41: The three coredump preprocessor flags (-D
CONFIG_ESP_COREDUMP_ENABLE_TO_FLASH=1, -D CONFIG_ESP_COREDUMP_DATA_FORMAT_ELF=1,
-D CONFIG_ESP_COREDUMP_CHECKSUM_CRC32=1) are not being included because their
lines are not indented as continuations of the build_flags value; edit the
platformio.ini so these three -D entries are indented under the existing
build_flags key (so they are treated as continuation lines and included in the
build), ensuring the build_flags stanza contains those flags for the build
system to pick up.
ℹ️ Review info
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
test/tsan_full.log is excluded by !**/*.log
📒 Files selected for processing (22)
.github/workflows/manual-build-artifacts.yml
README.md
distributor/README.md
distributor/boards.json
platformio.ini
src/ElegooCC.cpp
src/ElegooCC.h
src/Logger.cpp
src/WebServer.cpp
src/WebServer.h
src/main.cpp
test/build_and_run_all_tests.sh
test/test_elegoo_cc.cpp
test/test_soak
test/test_soak.cpp
test/test_thread_safety
test/test_thread_safety.cpp
test/test_thread_safety_asan
test/test_thread_safety_tsan
test/test_thread_safety_tsan2
test/test_thread_safety_tsan3
test/tsan_output.txt
+    portENTER_CRITICAL(&pendingMutex);
+    bool alreadyPending = pendingSettingsUpdate;
+    portEXIT_CRITICAL(&pendingMutex);

     // Track if IP address changed to trigger reconnect
     String oldIp = settingsManager.getElegooIP();
     bool ipChanged = false;

     // Only update fields that are present in the request
     if (jsonObj.containsKey("elegooip"))
     {
         String newIp = jsonObj["elegooip"].as<String>();
         ipChanged = (oldIp != newIp) && newIp.length() > 0;
         settingsManager.setElegooIP(newIp);
     }
     if (jsonObj.containsKey("ssid"))
         settingsManager.setSSID(jsonObj["ssid"].as<String>());
     if (jsonObj.containsKey("passwd") && jsonObj["passwd"].as<String>().length() > 0)
         settingsManager.setPassword(jsonObj["passwd"].as<String>());
     if (jsonObj.containsKey("ap_mode"))
         settingsManager.setAPMode(jsonObj["ap_mode"].as<bool>());
     if (jsonObj.containsKey("pause_on_runout"))
         settingsManager.setPauseOnRunout(jsonObj["pause_on_runout"].as<bool>());
     if (jsonObj.containsKey("enabled"))
         settingsManager.setEnabled(jsonObj["enabled"].as<bool>());
     if (jsonObj.containsKey("detection_length_mm"))
         settingsManager.setDetectionHardJamMm(jsonObj["detection_length_mm"].as<float>());
     if (jsonObj.containsKey("detection_grace_period_ms"))
         settingsManager.setDetectionGracePeriodMs(jsonObj["detection_grace_period_ms"].as<int>());
     if (jsonObj.containsKey("detection_ratio_threshold"))
         settingsManager.setDetectionRatioThreshold(jsonObj["detection_ratio_threshold"].as<int>());
     if (jsonObj.containsKey("detection_hard_jam_mm"))
         settingsManager.setDetectionHardJamMm(jsonObj["detection_hard_jam_mm"].as<float>());
     if (jsonObj.containsKey("detection_soft_jam_time_ms"))
         settingsManager.setDetectionSoftJamTimeMs(jsonObj["detection_soft_jam_time_ms"].as<int>());
     if (jsonObj.containsKey("detection_hard_jam_time_ms"))
         settingsManager.setDetectionHardJamTimeMs(jsonObj["detection_hard_jam_time_ms"].as<int>());
     if (jsonObj.containsKey("detection_mode"))
         settingsManager.setDetectionMode(jsonObj["detection_mode"].as<int>());
     if (jsonObj.containsKey("sdcp_loss_behavior"))
         settingsManager.setSdcpLossBehavior(jsonObj["sdcp_loss_behavior"].as<int>());
     if (jsonObj.containsKey("flow_telemetry_stale_ms"))
         settingsManager.setFlowTelemetryStaleMs(jsonObj["flow_telemetry_stale_ms"].as<int>());
     if (jsonObj.containsKey("ui_refresh_interval_ms"))
         settingsManager.setUiRefreshIntervalMs(jsonObj["ui_refresh_interval_ms"].as<int>());
     if (jsonObj.containsKey("suppress_pause_commands"))
         settingsManager.setSuppressPauseCommands(jsonObj["suppress_pause_commands"].as<bool>());
     if (jsonObj.containsKey("log_level"))
         settingsManager.setLogLevel(jsonObj["log_level"].as<int>());
     if (jsonObj.containsKey("movement_mm_per_pulse"))
         settingsManager.setMovementMmPerPulse(jsonObj["movement_mm_per_pulse"].as<float>());
     if (jsonObj.containsKey("auto_calibrate_sensor"))
         settingsManager.setAutoCalibrateSensor(jsonObj["auto_calibrate_sensor"].as<bool>());
     if (jsonObj.containsKey("pulse_reduction_percent"))
         settingsManager.setPulseReductionPercent(jsonObj["pulse_reduction_percent"].as<float>());
     if (jsonObj.containsKey("test_recording_mode"))
         settingsManager.setTestRecordingMode(jsonObj["test_recording_mode"].as<bool>());
     if (jsonObj.containsKey("show_debug_page"))
         settingsManager.setShowDebugPage(jsonObj["show_debug_page"].as<bool>());
     if (jsonObj.containsKey("timezone_offset_minutes"))
         settingsManager.setTimezoneOffsetMinutes(jsonObj["timezone_offset_minutes"].as<int>());

-    bool saved = settingsManager.save();
-    if (saved)
+    if (alreadyPending)
     {
-        elegooCC.refreshCaches();
-        if (ipChanged)
-        {
-            elegooCC.reconnect();
-        }
-        request->send(200, "application/json", "{\"status\":\"ok\"}");
+        request->send(429, "application/json",
+                      "{\"error\":\"Settings update already pending\"}");
+        return;
     }
-    else

+    portENTER_CRITICAL(&pendingMutex);
+    pendingSettingsDoc.clear();
+    JsonObject src = json.as<JsonObject>();
+    JsonObject dst = pendingSettingsDoc.to<JsonObject>();
+    for (JsonPair kv : src)
     {
-        request->send(500, "application/json",
-                      "{\"error\":\"Failed to save settings to flash\"}");
+        dst[kv.key()] = kv.value();
     }
+    pendingSettingsUpdate = true;
+    portEXIT_CRITICAL(&pendingMutex);
Make pending-settings admission atomic.
Lines 146-148 and 157-166 create a check-then-set gap. Two concurrent /update_settings requests can both pass the check, and one silently overwrites the other.
💡 Suggested fix (single critical section for check+enqueue)
- portENTER_CRITICAL(&pendingMutex);
- bool alreadyPending = pendingSettingsUpdate;
- portEXIT_CRITICAL(&pendingMutex);
-
- if (alreadyPending)
- {
- request->send(429, "application/json",
- "{\"error\":\"Settings update already pending\"}");
- return;
- }
-
- portENTER_CRITICAL(&pendingMutex);
- pendingSettingsDoc.clear();
- JsonObject src = json.as<JsonObject>();
- JsonObject dst = pendingSettingsDoc.to<JsonObject>();
- for (JsonPair kv : src)
- {
- dst[kv.key()] = kv.value();
- }
- pendingSettingsUpdate = true;
- portEXIT_CRITICAL(&pendingMutex);
+ bool accepted = false;
+ portENTER_CRITICAL(&pendingMutex);
+ if (!pendingSettingsUpdate)
+ {
+ pendingSettingsDoc.clear();
+ JsonObject src = json.as<JsonObject>();
+ JsonObject dst = pendingSettingsDoc.to<JsonObject>();
+ for (JsonPair kv : src)
+ {
+ dst[kv.key()] = kv.value();
+ }
+ pendingSettingsUpdate = true;
+ accepted = true;
+ }
+ portEXIT_CRITICAL(&pendingMutex);
+
+ if (!accepted)
+ {
+ request->send(429, "application/json",
+ "{\"error\":\"Settings update already pending\"}");
+ return;
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/WebServer.cpp` around lines 146 - 166, The current check-then-set uses
two separate critical sections allowing a race; acquire pendingMutex once to
perform the entire admission (read pendingSettingsUpdate, and if false, clear
pendingSettingsDoc, copy src into dst/pendingSettingsDoc and set
pendingSettingsUpdate = true) before releasing the mutex; i.e., move the initial
check (the read of pendingSettingsUpdate and the request->send 429 early return)
inside the same portENTER_CRITICAL/portEXIT_CRITICAL block that performs the
pendingSettingsDoc.clear(), JsonObject src = json.as<JsonObject>(), JsonObject
dst = pendingSettingsDoc.to<JsonObject>(), the copy loop over JsonPair, and
pendingSettingsUpdate = true so the check-and-enqueue is atomic.
statusEvents.onConnect([this](AsyncEventSourceClient *client) {
    if (statusEvents.count() > kMaxSSEClients)
    {
        // Over limit - the library already added the client, so we
        // just won't send data and it will be cleaned up on next sweep.
        logger.logf("SSE client rejected (count=%d, max=%d)",
                    statusEvents.count(), kMaxSSEClients);
        return;
    }
🧩 Analysis chain
Repository: harpua555/OpenFilamentSensor

🏁 Script executed:
# First, let's look at the actual code in src/WebServer.cpp around lines 619-634
wc -l src/WebServer.cpp

🏁 Script executed:
# Read the relevant lines mentioned in the review
sed -n '619,634p' src/WebServer.cpp

🏁 Script executed:
# Also check the initial snippet mentioned (lines 236-244)
sed -n '236,244p' src/WebServer.cpp

🏁 Script executed:
# Search for statusEvents.close() calls in the file
rg 'statusEvents\.close\(' src/WebServer.cpp -n -A 5 -B 5

🏁 Script executed:
# Search for AsyncEventSource includes and usage in the codebase
rg 'AsyncEventSource' src/ -n

🏁 Script executed:
# Check if ESPAsyncWebServer is in the project
find . -name "*.h" -o -name "*.cpp" | xargs grep -l "AsyncEventSource" | head -20

🏁 Script executed:
# Look for the library directory or dependency info
cat platformio.ini 2>/dev/null || cat package.json 2>/dev/null || cat CMakeLists.txt 2>/dev/null | head -50
🌐 Web query:
ESPAsyncWebServer v3.7.3 AsyncEventSource close() method behavior documentation
💡 Result:
In ESP32Async/ESPAsyncWebServer v3.7.3, the "close" behavior for Server-Sent Events (SSE) is defined in the library source (there isn't separate narrative documentation).

- AsyncEventSourceClient::close(): if the client is still present, it simply calls the underlying TCP client's AsyncClient::close() (no "force" flag, no explicit queue flush, no direct removal from the server's client list). [1]
- AsyncEventSource::close() (the SSE endpoint): iterates over all tracked SSE clients and calls c->close() on each that reports connected(). It does not clear _clients itself; client objects get removed later via the disconnect path (_handleDisconnect) when the TCP disconnect occurs. [1]
- Related: on timeout, the code uses _client->close(true) (force close) in _onTimeout(), which is different from close()'s default behavior. [1]
- The header documents intent at a high level: AsyncEventSource::close() is "close all connected clients", and AsyncEventSourceClient::close() is "close client's connection". [2]

Sources: AsyncEventSource.cpp (v3.7.3) [1], AsyncEventSource.h (v3.7.3) [2].
Client-cap cleanup disconnects all SSE clients, not just the excess connection.
Line 633 calls statusEvents.close() when count exceeds the cap, which closes every connected client. This causes unnecessary churn—healthy clients get disconnected and must reconnect repeatedly. The cleanup runs every 30 seconds (line 622), exacerbating the problem.
AsyncEventSource::close() iterates over all clients and disconnects each one. Instead, disconnect only the newly connected over-limit client by calling client->close() directly on the individual client pointer available in the onConnect callback (line 236).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/WebServer.cpp` around lines 236 - 244, The onConnect handler for
statusEvents currently calls statusEvents.close(), which disconnects all SSE
clients; instead, modify the lambda registered in statusEvents.onConnect (the
callback with signature [this](AsyncEventSourceClient *client)) to call
client->close() on the individual AsyncEventSourceClient pointer when
statusEvents.count() > kMaxSSEClients so only the new excess connection is
disconnected; update the log and return path accordingly to avoid using
statusEvents.close().
if (jsonObj.containsKey("detection_ratio_threshold"))
    settingsManager.setDetectionRatioThreshold(jsonObj["detection_ratio_threshold"].as<int>());
if (jsonObj.containsKey("detection_hard_jam_mm"))
Parse detection_ratio_threshold as float, not int.
Line 526 truncates fractional values (0.70 becomes 0), which can invalidate persisted detection tuning.
💡 Suggested fix
  if (jsonObj.containsKey("detection_ratio_threshold"))
-     settingsManager.setDetectionRatioThreshold(jsonObj["detection_ratio_threshold"].as<int>());
+     settingsManager.setDetectionRatioThreshold(jsonObj["detection_ratio_threshold"].as<float>());

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (jsonObj.containsKey("detection_ratio_threshold"))
    settingsManager.setDetectionRatioThreshold(jsonObj["detection_ratio_threshold"].as<float>());
if (jsonObj.containsKey("detection_hard_jam_mm"))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/WebServer.cpp` around lines 525 - 527, The value for
"detection_ratio_threshold" is being parsed as an int causing truncation; change
the parsing call in the JSON handling to use a floating-point type (e.g.,
jsonObj["detection_ratio_threshold"].as<float>() or as<double>()) and pass that
float/double into settingsManager.setDetectionRatioThreshold, updating or
overloading setDetectionRatioThreshold if it currently only accepts an integer
so the stored/persisted threshold retains fractional precision.
@@ -0,0 +1 @@
+FATAL: ThreadSanitizer: unexpected memory mapping 0x640c7f664000-0x640c7f667000
Consider excluding this test artifact from version control.
This file contains a ThreadSanitizer fatal error (unexpected memory mapping) which is typically an environment/ASLR configuration issue rather than a code defect. Committing test output artifacts to the repository may cause confusion and isn't a common practice.
If this is intentional (e.g., documenting a known issue), consider adding context in a comment or README. Otherwise, consider adding test/*.txt or test/tsan_output.txt to .gitignore.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/tsan_output.txt` at line 1, The committed test artifact
test/tsan_output.txt contains ThreadSanitizer output and should be removed from
version control; delete the file from the repo and add an ignore rule (e.g., add
test/*.txt or specifically test/tsan_output.txt) to .gitignore so future runs
don’t get committed, or alternatively, if you intend to keep this as
documentation, move it to a docs/ folder and add a brief note in README
explaining the intentional retention.
Summary by CodeRabbit

- New Features
- Documentation
- Improvements
- Tests
- Chores
Chores