
test: add unit tests for config and utils modules#424

Closed
nikolasdehor wants to merge 4 commits into SynkraAI:main from nikolasdehor:test/consolidated-config-utils

Conversation

Contributor

@nikolasdehor nikolasdehor commented Feb 19, 2026

Add test coverage for config and utils modules. All tests pass locally. Closes #423

Summary by CodeRabbit

  • Tests
    • Added comprehensive unit test suites covering configuration caching/loading, environment interpolation, and config merging.
    • Added tests for personalized output formatting and extensive security utilities (path/sanitization/JSON/depth checks, rate limiting, safe strings).
    • Added a large YAML validation suite (validation, auto-fix, reporting, deep-structure edge cases) and expanded coverage for TTL/expiration, caching metrics, and serialization behaviors.

Copilot AI review requested due to automatic review settings February 19, 2026 19:16

vercel bot commented Feb 19, 2026

@nikolasdehor is attempting to deploy a commit to Pedro Valério Lopez's projects team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Feb 19, 2026

Walkthrough

Adds comprehensive unit tests for core config and utils modules (ConfigCache, config-loader, env-interpolator, merge-utils, PersonalizedOutputFormatter, security-utils, YAMLValidator). All changes are test files only; no production API signatures were modified.

Changes

Cohort / File(s): Summary

  • Config tests (tests/core/config/config-cache.test.js, tests/core/config/config-loader.test.js, tests/core/config/env-interpolator.test.js, tests/core/config/merge-utils.test.js): New unit tests covering ConfigCache TTL/expiry, stats, serialization, and singleton; config-loader YAML parsing, caching, section/agent loading, metrics, and error paths; env-interpolator string/recursive env resolution and linting; merge-utils deepMerge/mergeAll behaviors. Pay attention to time-based fake timers and fs/js-yaml mocking.
  • Utils tests (tests/core/utils/output-formatter.test.js, tests/core/utils/security-utils.test.js, tests/core/utils/yaml-validator.test.js): New unit tests for PersonalizedOutputFormatter persona loading/formatting and tones; security-utils path/JSON/sanitization/RateLimiter/safety checks; and YAMLValidator validation, auto-fix, indentation/quote fixes, and reporting. Note extensive fixture usage and timer/depth edge-case tests.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

tests, type: test, area: core

Suggested reviewers

  • Pedrovaleriolopez
  • oalanicolas
🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'test: add unit tests for config and utils modules' clearly and concisely describes the main change: adding comprehensive unit tests for the specified modules.
  • Linked Issues Check: ✅ Passed. The pull request fully addresses issue #423 by adding comprehensive unit tests for all requested modules: config (cache, loader, merge-utils, env-interpolator) and utils (security-utils, yaml-validator, output-formatter).
  • Out of Scope Changes Check: ✅ Passed. All changes are test files exclusively; no modifications to production code or out-of-scope alterations were introduced.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; check skipped.




Copilot AI left a comment


Pull request overview

This PR adds comprehensive unit test coverage for the config and utils modules in the AIOS core, addressing issue #423. The tests cover critical functionality including YAML validation, security utilities, output formatting, configuration merging, environment variable interpolation, config loading, and config caching.

Changes:

  • Added 6 new test files with over 2,000 lines of test code covering previously untested modules
  • Implemented thorough test coverage for security-critical functions (path validation, input sanitization, rate limiting)
  • Added tests for configuration system components (merge utilities, environment interpolation, caching, and lazy loading)

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 1 comment.

Show a summary per file
  • tests/core/utils/yaml-validator.test.js: Comprehensive tests for YAML validation including structure validation, auto-fixing, and depth checking (818 lines)
  • tests/core/utils/security-utils.test.js: Tests for security utilities including path validation, input sanitization, JSON validation, rate limiting, and safe path operations (347 lines)
  • tests/core/utils/output-formatter.test.js: Tests for personalized output formatting including persona loading, tone-based messages, and verb conjugation (272 lines)
  • tests/core/config/merge-utils.test.js: Tests for deep merge functionality including scalar merging, array handling, +append modifier, and null deletion (126 lines)
  • tests/core/config/env-interpolator.test.js: Tests for environment variable interpolation including ${VAR} patterns, defaults, and recursive object walking (122 lines)
  • tests/core/config/config-loader.test.js: Tests for lazy config loading including caching, agent-specific sections, performance metrics, and validation (337 lines)
  • tests/core/config/config-cache.test.js: Tests for config caching including TTL expiration, statistics tracking, and entry management (251 lines)


* validateFieldTypes(), getMaxDepth(), autoFix(), fixIndentation(),
* fixQuotes(), generateReport(), and validateYAML().
*
* Refs #52

Copilot AI Feb 19, 2026


The comment "Refs #52" references an issue that doesn't appear to exist or be related to this PR. According to the PR description, this PR closes issue #423, not #52. This reference should either be removed or corrected to reference the appropriate issue.

Suggested change
* Refs #52
* Refs #423


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (4)
tests/core/utils/yaml-validator.test.js (1)

582-591: Strengthen the changed flag assertion from type-check to value-check.

The input ' key: value with: colon' contains a colon in the value, which fixQuotes should quote. The outcome is deterministic — changed should be true. Asserting only typeof result.changed === 'boolean' adds no real regression protection.

-  expect(typeof result.changed).toBe('boolean');
+  expect(result.changed).toBe(true);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/yaml-validator.test.js` around lines 582 - 591, The test
currently only asserts typeof result.changed is 'boolean' but the input '  key:
value with: colon' deterministically triggers fixQuotes, so change the assertion
to expect(result.changed).toBe(true); update the test in yaml-validator.test.js
by calling validator.autoFix(badContent) (already present) and replacing the
loose type assertion with a strict value check for true so the test fails if
autoFix/ fixQuotes no longer quote the value.
tests/core/config/config-loader.test.js (1)

35-41: Missing afterEach for console spy restoration — inconsistent with output-formatter.test.js.

Every beforeEach call stacks a new jest.spyOn(console, 'log') wrapper on top of the existing one (since jest.resetAllMocks() resets mock state but does not call mockRestore()). While the tests still produce correct assertions because resetAllMocks() clears call counts on the most-recent spy, the stacking is a code smell and diverges from the pattern in output-formatter.test.js.

♻️ Proposed fix
+  afterEach(() => {
+    console.log.mockRestore();
+    console.error.mockRestore();
+  });
+
   beforeEach(() => {
     jest.resetAllMocks();
     jest.spyOn(console, 'log').mockImplementation();
     jest.spyOn(console, 'error').mockImplementation();
     // Clear cache between tests
     clearCache();
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/config/config-loader.test.js` around lines 35 - 41, The test setup
uses jest.spyOn(console, 'log') and jest.spyOn(console, 'error') in the
beforeEach of config-loader.test.js but never restores them, causing stacked
spies; add an afterEach that calls mockRestore() on the spies (restore
console.log and console.error) to match the pattern used in
output-formatter.test.js and keep clearCache() in beforeEach unchanged so each
test resets mocks and restores original console methods.
tests/core/config/config-cache.test.js (1)

8-8: Consider adding afterAll(() => jest.useRealTimers()).

jest.useFakeTimers() at module scope remains in effect for the lifetime of this test worker. While Jest isolates files in separate workers under the default --maxWorkers config, running with --runInBand (common in CI debugging) would leave fake timers active for subsequently required modules. An explicit teardown is a low-cost safeguard.

 jest.useFakeTimers();
+afterAll(() => jest.useRealTimers());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/config/config-cache.test.js` at line 8, Add a teardown to restore
real timers after this test file: after calling jest.useFakeTimers() at module
scope, add an afterAll hook that calls jest.useRealTimers(); this ensures fake
timers set by jest.useFakeTimers() are reverted when tests complete (reference:
jest.useFakeTimers() and afterAll(() => jest.useRealTimers())).
tests/core/config/env-interpolator.test.js (1)

95-99: Strengthen the arrays linting assertion — verify the variable name, not just the count.

 test('detects patterns in arrays', () => {
   const config = { hosts: ['${HOST_1}', 'static'] };
   const findings = lintEnvPatterns(config, 'test.yaml');
   expect(findings).toHaveLength(1);
+  expect(findings[0]).toContain('HOST_1');
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/config/env-interpolator.test.js` around lines 95 - 99, Replace the
weak length-only assertion in the test for arrays by asserting the actual
detected variable name: after calling lintEnvPatterns(config, 'test.yaml')
verify that findings[0] identifies "HOST_1" (e.g. expect the
findings[0].variable to equal 'HOST_1' or, if the result shape is a message
string, assert findings[0].message contains 'HOST_1') so the test confirms the
correct variable was reported rather than only counting findings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/core/utils/security-utils.test.js`:
- Around line 287-299: The test is flaky because windowMs: 1 is too narrow for
two back-to-back RateLimiter.check('user1') calls; replace the nondeterministic
timing with deterministic fake timers or increase the window size and advance
time explicitly: in the test that constructs new RateLimiter({ maxRequests: 1,
windowMs: 1 }), either (a) use Jest fake timers (jest.useFakeTimers();
jest.advanceTimersByTime(...)) around the check calls so both checks occur
within the intended window and the post-cleanup wait is simulated, or (b) set
windowMs to a larger value (e.g., 50 or 100) so the two consecutive
check('user1') calls reliably fall within the same window, then keep the
explicit await/setTimeout + limiter.cleanup() and assert
limiter.check('user1').allowed is true; reference RateLimiter, check, cleanup,
and windowMs when making the change.

In `@tests/core/utils/yaml-validator.test.js`:
- Line 9: Update the stale issue reference in the test file header by replacing
the string "Refs `#52`" with "Refs `#423`" so the PR correctly links to and closes
issue `#423`; locate the header comment containing "Refs `#52`" (in
tests/core/utils/yaml-validator.test.js) and change it to "Refs `#423`".
- Around line 491-502: The test "does not warn on nesting at exactly 10 levels"
builds only 9 levels — update the construction of `data` so it produces depth 10
(e.g., change the loop to run 10 iterations or otherwise wrap the base object
ten times) before calling `validator.validateGeneral(data, results)`; keep the
rest of the assertions (filtering `results.warnings` for `type ===
'deep_nesting'` and expecting length 0) unchanged so the boundary case for depth
== 10 is actually exercised.

---

Nitpick comments:
In `@tests/core/config/config-cache.test.js`:
- Line 8: Add a teardown to restore real timers after this test file: after
calling jest.useFakeTimers() at module scope, add an afterAll hook that calls
jest.useRealTimers(); this ensures fake timers set by jest.useFakeTimers() are
reverted when tests complete (reference: jest.useFakeTimers() and afterAll(() =>
jest.useRealTimers())).

In `@tests/core/config/config-loader.test.js`:
- Around line 35-41: The test setup uses jest.spyOn(console, 'log') and
jest.spyOn(console, 'error') in the beforeEach of config-loader.test.js but
never restores them, causing stacked spies; add an afterEach that calls
mockRestore() on the spies (restore console.log and console.error) to match the
pattern used in output-formatter.test.js and keep clearCache() in beforeEach
unchanged so each test resets mocks and restores original console methods.

In `@tests/core/config/env-interpolator.test.js`:
- Around line 95-99: Replace the weak length-only assertion in the test for
arrays by asserting the actual detected variable name: after calling
lintEnvPatterns(config, 'test.yaml') verify that findings[0] identifies "HOST_1"
(e.g. expect the findings[0].variable to equal 'HOST_1' or, if the result shape
is a message string, assert findings[0].message contains 'HOST_1') so the test
confirms the correct variable was reported rather than only counting findings.

In `@tests/core/utils/yaml-validator.test.js`:
- Around line 582-591: The test currently only asserts typeof result.changed is
'boolean' but the input '  key: value with: colon' deterministically triggers
fixQuotes, so change the assertion to expect(result.changed).toBe(true); update
the test in yaml-validator.test.js by calling validator.autoFix(badContent)
(already present) and replacing the loose type assertion with a strict value
check for true so the test fails if autoFix/ fixQuotes no longer quote the
value.

- Fix stale issue reference (SynkraAI#52 -> SynkraAI#423) in yaml-validator header
- Fix off-by-one in nesting depth test (9 iterations -> 10 for depth 10)
- Fix flaky rate limiter test: replace 1ms window with fake timers

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/core/utils/yaml-validator.test.js (1)

582-591: Strengthen the changed flag assertion to verify actual transformation behavior.

The test only checks that result.changed is a boolean, which will never fail meaningfully. For the given input ' key: value with: colon', fixIndentation removes the leading spaces, so changed should be true. Consider asserting the concrete expected value:

♻️ Example: Assert concrete value
    test('sets changed flag when content is modified', async () => {
      const badContent = '  key: value with: colon';
      const result = await validator.autoFix(badContent);

      expect(result.changed).toBe(true);
      expect(result.content).toBeDefined();
      expect(result.validation).toBeDefined();
    });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/yaml-validator.test.js` around lines 582 - 591, The test's
assertion for result.changed is too weak; call validator.autoFix with the input
and assert the concrete expected transformation: since fixIndentation will
remove the leading spaces from '  key: value with: colon', change the assertion
to expect(result.changed).toBe(true) (and optionally assert the exact
transformed content or that result.content !== badContent) so the test verifies
the actual change; locate this in the test that calls validator.autoFix and
mentions fixIndentation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/core/utils/yaml-validator.test.js`:
- Around line 213-219: The test "parse error includes line and column when
available" currently only asserts the error message is truthy; update the test
for validator.validate in tests/core/utils/yaml-validator.test.js to explicitly
assert that result.errors[0] contains numeric line and column properties (e.g.,
typeof result.errors[0].line === 'number' and typeof result.errors[0].column ===
'number' or that they are greater than or equal to 0) in addition to existing
assertions so the test actually verifies presence of line and column information
from the parse_error.

---

Duplicate comments:
In `@tests/core/utils/yaml-validator.test.js`:
- Around line 491-502: Test "does not warn on nesting at exactly 10 levels" was
off-by-one and has been fixed by looping 10 times to produce depth 10; ensure
the test continues to build exactly 10 nested objects and asserts no
'deep_nesting' warnings by calling validator.validateGeneral(data, results) and
filtering results.warnings for type 'deep_nesting' as done in this test; verify
the test name, the loop using for (let i = 0; i < 10; i++), the constructed data
object using the 'nested' key, and the expect(...).toHaveLength(0) assertion
referencing validateGeneral so the boundary (>10) is exercised correctly.

---

Nitpick comments:
In `@tests/core/utils/yaml-validator.test.js`:
- Around line 582-591: The test's assertion for result.changed is too weak; call
validator.autoFix with the input and assert the concrete expected
transformation: since fixIndentation will remove the leading spaces from '  key:
value with: colon', change the assertion to expect(result.changed).toBe(true)
(and optionally assert the exact transformed content or that result.content !==
badContent) so the test verifies the actual change; locate this in the test that
calls validator.autoFix and mentions fixIndentation.

Comment on lines +213 to +219
test('parse error includes line and column when available', async () => {
const badYaml = 'key: [unclosed';
const result = await validator.validate(badYaml);
expect(result.valid).toBe(false);
expect(result.errors[0].type).toBe('parse_error');
expect(result.errors[0].message).toBeTruthy();
});


⚠️ Potential issue | 🟡 Minor

Test name claims to verify line/column presence, but assertions don't check for them.

The test is named "parse error includes line and column when available" but only asserts result.errors[0].message is truthy. Neither line nor column fields are verified, so this test would pass even if those fields were missing entirely.

🔧 Suggested fix
     test('parse error includes line and column when available', async () => {
       const badYaml = 'key: [unclosed';
       const result = await validator.validate(badYaml);
       expect(result.valid).toBe(false);
       expect(result.errors[0].type).toBe('parse_error');
       expect(result.errors[0].message).toBeTruthy();
+      expect(result.errors[0].line).toBeDefined();
+      expect(result.errors[0].column).toBeDefined();
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/yaml-validator.test.js` around lines 213 - 219, The test
"parse error includes line and column when available" currently only asserts the
error message is truthy; update the test for validator.validate in
tests/core/utils/yaml-validator.test.js to explicitly assert that
result.errors[0] contains numeric line and column properties (e.g., typeof
result.errors[0].line === 'number' and typeof result.errors[0].column ===
'number' or that they are greater than or equal to 0) in addition to existing
assertions so the test actually verifies presence of line and column information
from the parse_error.

@Pedrovaleriolopez
Contributor

Hi @nikolasdehor — Please address CodeRabbit's feedback when you get a chance. Most issues are minor nitpicks (assertion specificity, edge cases). Once addressed, we'll approve. Thanks for the great test coverage work! 🎯

- security-utils: increase windowMs from 1ms to 60000ms in the
  RateLimiter test to avoid flakiness in CI
- yaml-validator: document the boundary case (10 wraps = depth 10,
  threshold > 10), strengthen the changed assertion to toBe(true)
- config-cache: add afterAll(jest.useRealTimers) to clean up
  fake timers
- config-loader: add afterEach(jest.restoreAllMocks) to avoid
  stacking console spies, use mockImplementation(() => {})
- env-interpolator: strengthen the array lint assertion with the
  variable name

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (3)
tests/core/utils/security-utils.test.js (1)

254-264: Assumption about remaining calculation order is documented but fragile.

The comment on line 257 notes that remaining is calculated before the request is recorded, making first.remaining equal to maxRequests (5). This is an implementation detail — if the underlying RateLimiter ever changes to compute remaining after recording, these assertions will break without an obvious reason.

This is fine as-is since the comment documents the assumption, but consider adding a brief note in the assertion message for future maintainers:

-      expect(first.remaining).toBe(5);
+      expect(first.remaining).toBe(5); // remaining computed before recording
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/security-utils.test.js` around lines 254 - 264, The test
assumes RateLimiter.check calculates remaining before recording a request, which
is fragile; update the assertions in tests/core/utils/security-utils.test.js
(the it block using RateLimiter and the check(...) calls that set first and
third) to include explicit assertion messages stating this assumption (e.g.,
that remaining reflects pre-recording count) so future maintainers see why
first.remaining === maxRequests and third.remaining === 3; keep the logic
unchanged, only add clear assertion message strings to the expect(...) calls
that reference first.remaining and third.remaining.
tests/core/config/config-loader.test.js (1)

280-319: validateAgentConfig — consider adding a test for a known agent with missing non-ALWAYS_LOADED sections.

The current tests cover:

  1. A config with all ALWAYS_LOADED sections present (line 281–295) — valid.
  2. An empty config for 'dev' (line 297–304) — invalid due to missing sections.
  3. An unknown agent with ALWAYS_LOADED present (line 306–318) — valid.

A missing scenario: a known agent (e.g., 'dev') where ALWAYS_LOADED sections are present but agent-specific sections (e.g., pvMindContext, hybridOpsConfig) are absent. This would verify that the validation catches partial configs for known agents — a meaningful edge case given that dev has specialized requirements (as tested in lines 74-77).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/config/config-loader.test.js` around lines 280 - 319, Add a new
test for validateAgentConfig that uses a known agent like 'dev' where you mock
fs.readFile and yaml.load to return an object containing only ALWAYS_LOADED
sections (e.g., frameworkDocsLocation, projectDocsLocation, devLoadAlwaysFiles,
lazyLoading) but omitting agent-specific sections such as pvMindContext and
hybridOpsConfig; call validateAgentConfig('dev') and assert result.valid is
false and result.missingSections contains the expected agent-specific keys,
referencing the validateAgentConfig function and the ALWAYS_LOADED constant to
locate the implementation.
tests/core/config/config-cache.test.js (1)

108-119: clear test verifies both entries and stats are reset — good.

One minor note: this test doesn't verify that clear returns any value or that subsequent get calls return null. Since size === 0 is checked, it's implicitly covered, but an explicit expect(cache.get('a')).toBeNull() post-clear would make intent crystal clear. Not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/config/config-cache.test.js` around lines 108 - 119, Add an
explicit assertion after calling cache.clear() to verify subsequent reads return
null and optionally that clear itself returns undefined; specifically, in the
'clear' test call const ret = cache.clear() (or just cache.clear()) and add
expect(cache.get('a')).toBeNull() and optionally expect(ret).toBeUndefined() to
make the intent explicit; reference the cache.clear and cache.get calls and the
existing assertions on cache.size, cache.hits, and cache.misses when updating
the test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@tests/core/config/config-cache.test.js`:
- Around line 108-119: Add an explicit assertion after calling cache.clear() to
verify subsequent reads return null and optionally that clear itself returns
undefined; specifically, in the 'clear' test call const ret = cache.clear() (or
just cache.clear()) and add expect(cache.get('a')).toBeNull() and optionally
expect(ret).toBeUndefined() to make the intent explicit; reference the
cache.clear and cache.get calls and the existing assertions on cache.size,
cache.hits, and cache.misses when updating the test.

In `@tests/core/config/config-loader.test.js`:
- Around line 280-319: Add a new test for validateAgentConfig that uses a known
agent like 'dev' where you mock fs.readFile and yaml.load to return an object
containing only ALWAYS_LOADED sections (e.g., frameworkDocsLocation,
projectDocsLocation, devLoadAlwaysFiles, lazyLoading) but omitting
agent-specific sections such as pvMindContext and hybridOpsConfig; call
validateAgentConfig('dev') and assert result.valid is false and
result.missingSections contains the expected agent-specific keys, referencing
the validateAgentConfig function and the ALWAYS_LOADED constant to locate the
implementation.

In `@tests/core/utils/security-utils.test.js`:
- Around line 254-264: The test assumes RateLimiter.check calculates remaining
before recording a request, which is fragile; update the assertions in
tests/core/utils/security-utils.test.js (the it block using RateLimiter and the
check(...) calls that set first and third) to include explicit assertion
messages stating this assumption (e.g., that remaining reflects pre-recording
count) so future maintainers see why first.remaining === maxRequests and
third.remaining === 3; keep the logic unchanged, only add clear assertion
message strings to the expect(...) calls that reference first.remaining and
third.remaining.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 63ab2f5 and e16f5f1.

📒 Files selected for processing (5)
  • tests/core/config/config-cache.test.js
  • tests/core/config/config-loader.test.js
  • tests/core/config/env-interpolator.test.js
  • tests/core/utils/security-utils.test.js
  • tests/core/utils/yaml-validator.test.js
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/core/config/env-interpolator.test.js
  • tests/core/utils/yaml-validator.test.js

- Add line/column assertion on the parse error when available
- Fix off-by-one comment in the depth-10 test

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
tests/core/utils/yaml-validator.test.js (1)

213-224: ⚠️ Potential issue | 🟡 Minor

The test still doesn’t verify column despite the test intent.

At Line 221, the test conditionally validates line, but column is never asserted. A regression dropping column would still pass this test.

🔧 Proposed assertion tightening
     test('parse error includes line and column when available', async () => {
       const badYaml = 'key: [unclosed';
       const result = await validator.validate(badYaml);
       expect(result.valid).toBe(false);
       expect(result.errors[0].type).toBe('parse_error');
       expect(result.errors[0].message).toBeTruthy();
       expect(typeof result.errors[0].message).toBe('string');
-      // js-yaml includes position in the message when available
-      if (result.errors[0].line !== undefined) {
-        expect(typeof result.errors[0].line).toBe('number');
+      if (result.errors[0].line !== undefined || result.errors[0].column !== undefined) {
+        expect(typeof result.errors[0].line).toBe("number");
+        expect(typeof result.errors[0].column).toBe("number");
       }
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/yaml-validator.test.js` around lines 213 - 224, The test
"parse error includes line and column when available" currently only asserts the
presence/type of result.errors[0].line; update it to also assert
result.errors[0].column when present by adding a conditional check similar to
the line check (e.g., if result.errors[0].column !== undefined then
expect(typeof result.errors[0].column).toBe('number')). Locate the test using
the name and the call to validator.validate(badYaml) and add the column
assertion next to the existing line assertion to ensure regressions that drop
column will fail.
🧹 Nitpick comments (1)
tests/core/utils/yaml-validator.test.js (1)

801-823: Make the warning assertion specific to the expected warning type.

The current check (warnings.length > 0) is broad and can pass for unrelated warnings. Since this test targets unknown top-level field behavior, assert that an unknown_field warning exists (optionally for unknownTopLevel).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/utils/yaml-validator.test.js` around lines 801 - 823, The test
should assert a specific unknown_field warning instead of a generic length
check: after calling validateYAML(yamlWithNull, 'agent') inspect result.warnings
(from the yamlWithNull input) and assert that it contains an entry with type
'unknown_field' (and optionally field === 'unknownTopLevel') so the spec targets
the unknown top-level field behavior rather than any warning; update the
assertion that currently checks result.warnings.length to check
result.warnings.some(w => w.type === 'unknown_field' && (w.field ===
'unknownTopLevel' || true if field not provided)).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@tests/core/utils/yaml-validator.test.js`:
- Around line 213-224: The test "parse error includes line and column when
available" currently only asserts the presence/type of result.errors[0].line;
update it to also assert result.errors[0].column when present by adding a
conditional check similar to the line check (e.g., if result.errors[0].column
!== undefined then expect(typeof result.errors[0].column).toBe('number')).
Locate the test using the name and the call to validator.validate(badYaml) and
add the column assertion next to the existing line assertion to ensure
regressions that drop column will fail.

---

Nitpick comments:
In `@tests/core/utils/yaml-validator.test.js`:
- Around line 801-823: The test should assert a specific unknown_field warning
instead of a generic length check: after calling validateYAML(yamlWithNull,
'agent') inspect result.warnings (from the yamlWithNull input) and assert that
it contains an entry with type 'unknown_field' (and optionally field ===
'unknownTopLevel') so the spec targets the unknown top-level field behavior
rather than any warning; update the assertion that currently checks
result.warnings.length to check result.warnings.some(w => w.type ===
'unknown_field' && (w.field === 'unknownTopLevel' || true if field not
provided)).

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e16f5f1 and 2cd79cf.

📒 Files selected for processing (1)
  • tests/core/utils/yaml-validator.test.js

@nikolasdehor
Contributor Author

Closing this batched config/utils PR. Part of it is already covered by PRs #498 (config-cache) and #500 (config-loader).



Development

Successfully merging this pull request may close these issues.

Add unit tests for config and utils modules

3 participants