Scanner evasion test corpus #3

@Haserjian

Description

Context

The scanner uses AST analysis to find LLM call sites. There are known patterns it cannot detect, and these should be explicitly tested and documented as known limitations.
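As a rough illustration of what attribute-chain matching over the AST looks like, here is a minimal sketch. The function name and the matching rule are assumptions for illustration, not the real scanner's API:

```python
import ast

def find_llm_call_sites(source: str) -> list[int]:
    """Return line numbers of calls whose attribute chain ends in
    chat.completions.create — a simplified stand-in for the scanner's
    AST pass (hypothetical; the actual matcher is more sophisticated)."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        # Walk the attribute chain from the call outward: create -> completions -> chat
        parts, attr = [], node.func
        while isinstance(attr, ast.Attribute):
            parts.append(attr.attr)
            attr = attr.value
        if parts[:3] == ["create", "completions", "chat"]:
            hits.append(node.lineno)
    return hits

sample = "resp = client.chat.completions.create(model='gpt-4o')\n"
print(find_llm_call_sites(sample))  # → [1]
```

A matcher like this only sees literal `ast.Attribute` chains, which is exactly why the patterns listed below slip past it.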

Known gaps (not currently tested)

  • Aliased imports: `import openai as ai; ai.OpenAI().chat.completions.create(...)`
  • Variable assignment chains: `Chat = client.chat; Chat.completions.create(...)`
  • Wrapper functions: `def my_llm(prompt): return client.chat.completions.create(...)`
  • Dynamic dispatch: `getattr(client, "chat").completions.create(...)`
  • Dynamic import patterns: `provider = __import__("openai")`
  • Async wrappers: Custom async decorators wrapping SDK calls
  • Cross-file instrumentation: patch() in one file, SDK call in another
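The dynamic-dispatch gap can be seen directly in the AST: `getattr(client, "chat")` puts an `ast.Call` at the base of the chain, so the `chat` segment never appears as an `ast.Attribute` for a chain matcher to see. A small sketch (assuming the scanner matches attribute chains; this is illustrative, not its actual logic):

```python
import ast

# Parse a direct call and a getattr-based call to compare their shapes.
direct = ast.parse('client.chat.completions.create(x)').body[0].value
dynamic = ast.parse('getattr(client, "chat").completions.create(x)').body[0].value

def chain(node):
    """Collect the attribute chain and report what node type sits at its base."""
    parts = []
    while isinstance(node, ast.Attribute):
        parts.append(node.attr)
        node = node.value
    return parts[::-1], type(node).__name__

print(chain(direct.func))   # → (['chat', 'completions', 'create'], 'Name')
print(chain(dynamic.func))  # → (['completions', 'create'], 'Call')
```

The `'chat'` segment is simply absent from the dynamic variant's chain, so any fixture for this pattern should expect a known-miss rather than a detection.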

What to build

  • Create a fixture directory with one Python file per evasion pattern
  • For each pattern: test that the scanner returns the expected result (detect or known-miss)
  • Document known-miss patterns in scanner output/report so users know what to instrument manually
  • Consider a `--strict` scanner flag that reports known-miss patterns as warnings
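One way the `--strict` flag could behave, sketched with `argparse` — the flag name comes from this issue, but `build_parser`, `report`, and the message format are hypothetical:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI surface; the real scanner's options may differ.
    p = argparse.ArgumentParser(prog="scanner")
    p.add_argument("path")
    p.add_argument(
        "--strict",
        action="store_true",
        help="report known-miss evasion patterns as warnings",
    )
    return p

def report(known_misses: list[str], strict: bool) -> list[str]:
    # In strict mode, escalate known-miss patterns from notes to warnings
    # so they surface in CI logs; names here are illustrative.
    level = "WARNING" if strict else "NOTE"
    return [f"{level}: cannot detect pattern: {m}" for m in known_misses]

args = build_parser().parse_args(["repo/", "--strict"])
print(report(["assignment chain"], args.strict))
# → ['WARNING: cannot detect pattern: assignment chain']
```

Keeping known-miss reporting on by default (as notes) and only escalating severity under `--strict` lets users learn about manual-instrumentation gaps without breaking existing runs.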

Priority

Post-launch. The scanner already catches 202 high-confidence and 903 total findings across real repos. Evasion patterns are edge cases that matter for completeness, not for the initial value proposition.

Labels

enhancement (New feature or request)
