Upload a .jar, get a risk verdict with explainable indicators before you install.
The name Jarspect is a portmanteau of JAR (Java Archive) and Inspect, reflecting its mission to provide deep, automated inspection of game mods for hidden threats.
Jarspect is an AI-first security scanner for Minecraft mods. It runs a 3-layer pipeline (MalwareBazaar threat intel, bytecode capability extraction, and an Azure OpenAI verdict) to classify .jar files as CLEAN, SUSPICIOUS, or MALICIOUS, with full explanations. The bytecode layer parses compiled .class files at the constant-pool and instruction level, reconstructs obfuscated strings, resolves method invocations, runs YARA rules per archive entry, and feeds 8 capability detectors into a structured profile that the AI analyzes to produce its verdict.
In June 2023, fractureiser compromised dozens of Minecraft mods on CurseForge and Bukkit. Stage 0 hid a URL inside new String(new byte[]{...}) to evade string-based scanners, then loaded a remote class via URLClassLoader reflection chain. Stages 1-3 added persistence (Windows Run keys, Linux systemd units), credential theft (MSA tokens, Discord tokens, browser cookies, crypto wallets), and self-replication into other installed mods.
BleedingPipe exploited unsafe ObjectInputStream.readObject() calls in popular server-side mods, allowing remote code execution on Minecraft servers. PussyRAT used reflection to hijack Minecraft session tokens. The Stargazers Ghost Network distributed trojanized mods through fake GitHub stars, delivering multi-stage Java-to-.NET info-stealers.
Traditional antivirus and text-based scanners score these threats 0/100 because real malware lives in compiled bytecode; API references exist as structured constant-pool entries, not grep-able plain text. A scanner that doesn't parse .class files is blind to all of it. And rule-based scoring alone can't distinguish a rendering mod that calls Runtime.exec() for GPU probing from a RAT that calls it to run shell commands.
Jarspect was built to fix both problems. Every detection technique in the bytecode layer traces back to a real-world malware sample or documented attack vector. The AI verdict layer understands context; it knows that sodium calling glxinfo is legitimate GPU detection, not process execution abuse.
Jarspect uses a 3-layer pipeline where each layer can short-circuit to a final verdict:
```text
POST /upload (multipart .jar)
|
v
upload_id stored at .local-data/uploads/{upload_id}.jar
|
POST /scan (upload_id)
|
Layer 1: MalwareBazaar Threat Intel
| SHA-256 hash lookup against abuse.ch database
| Match? -> MALICIOUS (confidence 1.0, method: malwarebazaar_hash)
| Optional: set JARSPECT_MB_MATCH_CONTINUE_ANALYSIS=1 to still run
| archive traversal + static analysis for reporting artifacts (verdict unchanged)
|
Layer 2: Bytecode Capability Extraction
+-- Archive traversal       recursive jar-in-jar extraction with budget gates
+-- Bytecode evidence       cafebabe class parsing -> constant pool + invoke resolution
+-- Byte-array strings      reconstruct new String(new byte[]{...}) hidden values
+-- YARA per-entry          inflate each entry, scan individually, severity from metadata
+-- Metadata checks         fabric.mod.json / mods.toml / META-INF/neoforge.mods.toml / plugin.yml / MANIFEST.MF
+-- Capability detectors    8 detectors (exec, network, dynamic load, fs/jar modify,
|                           persistence, deserialization, native/JNI, credential theft)
+-- Profile builder         structured capability profile with extracted artifacts
|
Layer 3: AI Verdict (Azure OpenAI)
| Receives capability profile + extracted URLs, domains, commands, file paths
| Returns: CLEAN / SUSPICIOUS / MALICIOUS with confidence, explanation,
| and per-capability rationale
|
v
scan_id persisted at .local-data/scans/{scan_id}.json
|
GET /scans/{scan_id} -> fetch full result at any time
```
Key properties:
- AI-first -- Azure OpenAI (gpt-4o) analyzes the full capability profile and decides the verdict; there is no rule-based scoring fallback
- Static override layer -- high-confidence static signals (production YARA hits and malware-specific compound detectors) override the AI verdict to MALICIOUS via `static_override(ai_verdict)`, preventing the AI from downgrading obvious malware
- Known-malware guarantee -- a MalwareBazaar hash match short-circuits to MALICIOUS before any other analysis (the verdict always uses method `malwarebazaar_hash`)
- Reporting-friendly artifacts -- scan JSON includes a top-level `sha256` and (when available) `static_findings` for extracted URLs/domains/paths/commands
- Bytecode-native -- parses `.class` constant pools and resolves `invoke*` instructions instead of running regex over lossy UTF-8
- Hidden-string reconstruction -- recovers `new String(new byte[]{...})` values that fractureiser Stage 0 used to hide URLs and class names
- Recursive archive scanning -- follows jar-in-jar nesting with `!/` path provenance and budget-gated inflation
- Per-entry YARA -- scans each inflated archive entry individually (not the compressed jar blob), with severity taken from rule metadata
- 8 capability detectors -- each uses an evidence index with class-scoped correlation gates for severity escalation
- Artifact extraction -- URLs, domains, shell commands, and file paths extracted from bytecode evidence and fed to the AI
- Fully explainable -- the AI provides per-capability rationale explaining what it found and why it matters (or doesn't)
- Single binary -- `cargo run` starts the HTTP server and the web UI on the same port
Every `.class` entry (identified by `0xCAFEBABE` magic) is parsed using the `cafebabe` crate:
- Constant-pool strings -- all `CONSTANT_Utf8` and `CONSTANT_String` entries are extracted and deduplicated. These contain class names (`java/lang/Runtime`), method names (`exec`), descriptors (`(Ljava/lang/String;)Ljava/lang/Process;`), and string literals.
- Invoke resolution -- every `invokevirtual`, `invokestatic`, `invokespecial`, `invokeinterface`, and `invokedynamic` instruction is resolved through the constant-pool reference chain into `(owner, name, descriptor)` tuples with method name and program-counter location metadata.
- Byte-array string reconstruction -- a narrow opcode state machine walks method bytecode looking for the `newarray T_BYTE` + `dup`/`bipush`/`sipush`/`bastore` + `String.<init>([B)V` pattern. When found, the byte values are assembled into the hidden string. This is how fractureiser Stage 0 concealed `java.net.URLClassLoader` and the remote class name: these strings never appear in the constant pool.
Eight detectors run against an EvidenceIndex built from the extracted bytecode evidence. Each detector uses class-scoped correlation gates: a method call alone may be low severity, but the same call in a class that also references suspicious strings or complementary APIs escalates to high.
| ID | Capability | What it catches |
|---|---|---|
| DETC-01 | Process execution | Runtime.exec(), ProcessBuilder.start(), shell command strings |
| DETC-02 | Network I/O | URL, HttpURLConnection, HttpClient, socket APIs, extracted URLs |
| DETC-03 | Dynamic class loading | URLClassLoader, ClassLoader.defineClass, Class.forName, reflection chains |
| DETC-04 | Filesystem/JAR modification | ZipOutputStream, JarFile, Files.walk, directory traversal + .jar markers |
| DETC-05 | Persistence | Windows Run keys, systemd unit paths, startup folders, schtasks/crontab |
| DETC-06 | Unsafe deserialization | ObjectInputStream.readObject() (BleedingPipe-style vulnerability risk) |
| DETC-07 | Native/JNI loading | System.load/loadLibrary, embedded .dll/.so/.dylib entries |
| DETC-08 | Credential theft | Discord token paths, browser cookie/login databases, .minecraft session files |
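The correlation-gate idea behind these detectors can be sketched in a few lines. This is an illustrative model, not the project's actual detector code, and the signal names are hypothetical:

```rust
#[derive(Debug, PartialEq)]
enum Severity {
    Low,
    Medium,
    High,
}

/// Hypothetical class-scoped correlation gate for DETC-01: a lone
/// Runtime.exec reference scores Low, but corroborating evidence in the
/// *same class* (shell-command strings, ProcessBuilder usage) escalates.
fn gate_exec_severity(
    has_exec_call: bool,
    shell_string_count: usize,
    has_process_builder: bool,
) -> Option<Severity> {
    if !has_exec_call {
        return None; // no base evidence, no finding
    }
    // Count independent corroborating signals within the class.
    let corroborating = (shell_string_count > 0) as u8 + has_process_builder as u8;
    Some(match corroborating {
        0 => Severity::Low,
        1 => Severity::Medium,
        _ => Severity::High,
    })
}

fn main() {
    // A rendering mod probing the GPU: exec alone stays Low.
    assert_eq!(gate_exec_severity(true, 0, false), Some(Severity::Low));
    // Exec plus shell strings plus ProcessBuilder in one class: High.
    assert_eq!(gate_exec_severity(true, 3, true), Some(Severity::High));
}
```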
YARA rules run on each inflated archive entry individually -- not the compressed jar blob. This is critical because YARA over the whole JAR won't reliably match inside deflated entries.
Severities come from rule metadata (`meta.severity`), with a fallback chain to `meta.threat_level`, tags, then the pack default. Rule IDs are prefixed by pack provenance (`YARA-DEMO-*` for demo rules, `YARA-PROD-*` for production rules).
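A minimal sketch of that fallback chain (illustrative only; the real implementation reads these values from compiled YARA rule metadata):

```rust
/// Resolve a rule's severity via the documented fallback chain:
/// meta.severity -> meta.threat_level -> a recognized tag -> pack default.
fn resolve_severity(
    meta_severity: Option<&str>,
    meta_threat_level: Option<&str>,
    tags: &[&str],
    pack_default: &str,
) -> String {
    meta_severity
        .or(meta_threat_level)
        .or_else(|| {
            // Fall back to the first tag that looks like a severity level.
            tags.iter()
                .find(|t| ["low", "medium", "high", "critical"].contains(*t))
                .copied()
        })
        .unwrap_or(pack_default)
        .to_string()
}

fn main() {
    assert_eq!(resolve_severity(Some("high"), None, &[], "medium"), "high");
    assert_eq!(
        resolve_severity(None, None, &["stealer", "critical"], "medium"),
        "critical"
    );
    assert_eq!(resolve_severity(None, None, &[], "medium"), "medium");
}
```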
Demo and production rulepacks are separated via the JARSPECT_RULEPACKS environment variable.
The prod rulepack (data/signatures/prod/rules.yar) contains 6 high-precision rules targeting known Minecraft mod malware families:
| Rule | Family / Campaign | What it matches |
|---|---|---|
| `minecraft_makslibraries_mcmod_info` | Maks Libraries | Forge `mcmod.info` with `makslibraries` mod ID |
| `minecraft_pussylib_pussygo_class` | PussyRAT | `pussylib/pussygo` class marker |
| `minecraft_loaderclient_staging_helper` | Loader/Stager | `StagingHelper` + JAR staging + HTTP client |
| `minecraft_krypton_loader_stub` | Krypton stealer | Obfuscated Fabric stub with `URLClassLoader` + `a/a/a/Config` + `UTF_16BE` + `Error in hash` |
| `minecraft_maxcoffe_socket_loader_stub` | MaxCoffe / MaksRAT | Socket I/O + JAR staging + `defineClass` + `nothing_to_see_here` marker |
| `minecraft_eth_rpc_endpoint_list` | Fractureiser-tagged | `RPCHelper` class with 6+ hardcoded Ethereum JSON-RPC endpoints |
All production rules use `severity = "high"` and require multiple corroborating strings (no single-string rules) to maintain high precision. A production YARA hit at high/critical severity triggers the `static_override` layer, guaranteeing a MALICIOUS verdict regardless of the AI's assessment.
Jarspect parses mod metadata files and cross-references them against archive contents:
- Fabric/Quilt -- `fabric.mod.json`: validates entrypoint classes exist in the JAR
- Forge -- `META-INF/mods.toml`: checks mod ID, version, loader constraints
- NeoForge -- `META-INF/neoforge.mods.toml`: checks mod ID, version, NeoForge-specific fields
- Spigot/Bukkit -- `plugin.yml`: validates the main class exists
- MANIFEST.MF -- flags high-risk Java agent attributes (`Premain-Class`, `Agent-Class`, `Can-Redefine-Classes`, etc.)
When multiple metadata files are present, Jarspect picks the shallowest (most likely to be the main mod) to avoid noise from bundled dependencies.
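As an illustration of the entrypoint cross-reference (a sketch with hypothetical names, not the project's actual code): a dotted entrypoint class resolves to a `.class` path that must appear among the archive entries:

```rust
/// Return the declared entrypoints whose `.class` files are missing
/// from the jar -- a declared-but-absent entrypoint is a red flag.
fn missing_entrypoints<'a>(entrypoints: &[&'a str], jar_entries: &[&str]) -> Vec<&'a str> {
    entrypoints
        .iter()
        .filter(|ep| {
            // "com.example.MyMod" -> "com/example/MyMod.class"
            let class_path = format!("{}.class", ep.replace('.', "/"));
            !jar_entries.contains(&class_path.as_str())
        })
        .copied()
        .collect()
}

fn main() {
    let entries = ["com/example/MyMod.class", "fabric.mod.json"];
    assert!(missing_entrypoints(&["com.example.MyMod"], &entries).is_empty());
    assert_eq!(
        missing_entrypoints(&["com.evil.Stager"], &entries),
        vec!["com.evil.Stager"]
    );
}
```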
Jars can contain jars (Fabric nested jars under `META-INF/jars/`, or malware embedding payload archives). Jarspect recursively extracts nested archives with:
- `!/`-delimited path provenance (e.g. `outer.jar!/META-INF/jars/inner.jar!/com/Evil.class`)
- Budget gates: per-entry size limit, compression-ratio limit, total inflated bytes cap
- Configurable depth limit
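A sketch of how such budget gates can be checked before inflating an entry (the limits and names here are illustrative, not Jarspect's actual defaults):

```rust
/// Illustrative inflation budget: per-entry cap, compression-ratio cap
/// (zip-bomb guard), and a global remaining-bytes budget.
struct Budget {
    per_entry_max: u64,
    max_ratio: u64,
    total_remaining: u64,
}

fn allow_inflation(b: &Budget, compressed: u64, uncompressed: u64) -> bool {
    // Guard divide-by-zero for stored (uncompressed) entries.
    let ratio = uncompressed / compressed.max(1);
    uncompressed <= b.per_entry_max
        && uncompressed <= b.total_remaining
        && ratio <= b.max_ratio
}

fn main() {
    let b = Budget {
        per_entry_max: 10 << 20,   // 10 MiB per entry
        max_ratio: 100,            // 100x compression ratio cap
        total_remaining: 64 << 20, // 64 MiB global budget left
    };
    assert!(allow_inflation(&b, 1024, 64 * 1024)); // 64x ratio: fine
    assert!(!allow_inflation(&b, 1024, 20 << 20)); // exceeds per-entry cap
    assert!(!allow_inflation(&b, 10, 10_000));     // 1000x ratio: zip bomb
}
```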
Jarspect uses a 3-layer verdict pipeline. Each layer can produce a final verdict:
Before any static analysis, the jar's SHA-256 hash is checked against MalwareBazaar (abuse.ch). If the hash matches a known malware sample, the scan immediately returns MALICIOUS with confidence: 1.0 and method: malwarebazaar_hash.
By default, MalwareBazaar matches short-circuit (no further analysis). If you need static-analysis artifacts for reporting/graphs even on known-malware samples, set JARSPECT_MB_MATCH_CONTINUE_ANALYSIS=1. The final verdict still stays MalwareBazaar-based.
If no threat intel match is found, the full bytecode analysis runs: archive traversal, class parsing, YARA scanning, and 8 capability detectors. The results are assembled into a CapabilityProfile containing:
- Which capabilities are present (network, execution, persistence, etc.) with evidence
- YARA rule matches with severity
- Mod metadata (loader, mod ID, version, authors, entrypoints)
- Reconstructed hidden strings
- Suspicious manifest entries
- Extracted artifacts: URLs, domains, shell commands, file paths found in bytecode evidence
The capability profile is sent to Azure OpenAI (gpt-4o) with a specialized system prompt. The AI analyzes the profile in context -- understanding that Runtime.exec() in a rendering mod calling glxinfo is legitimate GPU probing, not malicious process execution. It returns:
- Verdict: `CLEAN`, `SUSPICIOUS`, or `MALICIOUS`
- Confidence: 0.0 to 1.0
- Risk score: 0 to 100
- Explanation: prose description of findings and reasoning
- Capabilities assessment: per-capability rationale explaining what was found and why it matters (or doesn't)
The AI is instructed to cite concrete evidence (class paths, extracted URLs) and to explain what would upgrade/downgrade a SUSPICIOUS verdict.
After the AI (or heuristic fallback) produces its verdict, a final guard runs: if any high-confidence static signal is present, the verdict is overridden to MALICIOUS regardless of what the AI said. This prevents the AI from downgrading obvious malware.
Signals that trigger the override:
- Any production YARA rule match (`YARA-PROD-*`) at high/critical severity
- `DETC-03.BASE64_STAGER` at high/critical
- `DETC-02.DISCORD_WEBHOOK` at high/critical
- `NET-DISCORD-WEBHOOK` signature match at high/critical

When triggered, the verdict method becomes `static_override(ai_verdict)` (or `static_override(heuristic_fallback)`).
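The override semantics can be sketched as a pure function (hypothetical names; the real guard inspects structured findings rather than strings):

```rust
/// Final guard: any high-confidence static signal forces MALICIOUS and
/// records the method as static_override(<inner method>).
fn apply_static_override(
    ai_verdict: &str,
    ai_method: &str,
    override_signals: &[&str],
) -> (String, String) {
    if override_signals.is_empty() {
        // No override signals: the AI's verdict stands as-is.
        (ai_verdict.to_string(), ai_method.to_string())
    } else {
        ("MALICIOUS".to_string(), format!("static_override({})", ai_method))
    }
}

fn main() {
    // AI said SUSPICIOUS, but a production YARA hit locks in MALICIOUS.
    let (verdict, method) = apply_static_override(
        "SUSPICIOUS",
        "ai_verdict",
        &["YARA-PROD-minecraft_krypton_loader_stub"],
    );
    assert_eq!(verdict, "MALICIOUS");
    assert_eq!(method, "static_override(ai_verdict)");
}
```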
| Verdict | Meaning |
|---|---|
| `CLEAN` | No malicious indicators; capabilities are consistent with legitimate mod behavior |
| `SUSPICIOUS` | Some concerning signals but insufficient evidence for a definitive malicious classification |
| `MALICIOUS` | Strong evidence of malicious intent -- known malware hash, or AI-confirmed coordinated malicious behavior |
We tested Jarspect against 120 real-world samples -- 70 confirmed malware and 50 popular benign mods -- with MalwareBazaar hash matching disabled so every verdict had to be earned through bytecode analysis, YARA rules, and AI reasoning rather than a trivial hash lookup.
Datasets. The malware corpus is 70 jars from MalwareBazaar, filtered to samples that actually contain Minecraft mod metadata (fractureiser, mavenrat, maksstealer, maksrat tags). The benign corpus is the top 50 most-downloaded mods from Modrinth. Both are documented in docs/corpus-calibration.md. Wilson 95% confidence intervals are reported because 120 samples is honest but not large -- we don't want to overclaim.
| Dataset | n | CLEAN | SUSPICIOUS | MALICIOUS | Rate (Wilson 95% CI) |
|---|---|---|---|---|---|
| MalwareBazaar strict-modlike | 70 | 0 | 0 | 70 | Detection: 100% (94.8–100.0%) |
| Modrinth top-50 | 50 | 50 | 0 | 0 | Clean: 100% (92.9–100.0%) |
Perfect separation on this corpus. No false positives, no missed malware.
In our baseline malware corpus, many verdicts were guaranteed by the static override layer (static_override(ai_verdict)) -- meaning production YARA rules or other high-confidence signals locked in MALICIOUS before the AI even had a say. All 50 benign mods were classified by ai_verdict.
The ablation study strips layers away to show what each one is doing:
| Configuration | Malware (n=70) | Benign (n=50) |
|---|---|---|
| Full pipeline (prod YARA + AI + static override) | 70 MALICIOUS | 50 CLEAN |
| AI alone (derived: no static override) | 7 MALICIOUS | 50 CLEAN |
| AI disabled (prod YARA + heuristic + static override) | 70 MALICIOUS | 43 CLEAN, 7 SUSPICIOUS |
| AI + prod YARA disabled (demo YARA + heuristic + static override) | 66 MALICIOUS, 3 CLEAN, 1 SUSPICIOUS | 39 CLEAN, 10 SUSPICIOUS, 1 MALICIOUS |
The takeaway: the static override layer carries much of the malware detection weight in the baseline run, but the AI is critical for benign accuracy -- without it, benign mods get flagged SUSPICIOUS. The production YARA rules are the difference between catching nearly-all and all malware samples in this corpus.
Malware and benign mods look very different at the bytecode level. Capabilities are counted only when detectors fire at medium or high severity -- low-signal noise is excluded.
| Capability | Malware (n=70) | Benign (n=50) |
|---|---|---|
| `dynamic_loading` | 94.3% | 2.0% |
| `network` | 78.6% | 18.0% |
| `filesystem` | 75.7% | 6.0% |
| `deserialization` | 0.0% | 8.0% |
| `native_loading` | 0.0% | 8.0% |
| `execution` | 0.0% | 0.0% |
| `persistence` | 0.0% | 0.0% |
| `credential_theft` | 0.0% | 0.0% |
The signature pattern is clear: malware clusters heavily on dynamic_loading + network + filesystem (the URLClassLoader → remote fetch → write-to-disk pipeline). Benign mods are mostly capability-free at medium/high severity, with small amounts of network (mod update checkers) and deserialization/native_loading (legitimate use cases).
Full reproduction steps, corpus selection criteria, and aggregation scripts are documented in docs/benchmarking.md and docs/corpus-calibration.md. The SVG figures are generated with:
```bash
bun scripts/render-benchmark-figures.ts \
  --baseline-malware-aggregate <malware-run>/aggregate.csv \
  --baseline-benign-aggregate <benign-run>/aggregate.csv \
  --out-dir docs/benchmarks
```

Prerequisites: Rust stable toolchain (rustup.rs)
```bash
git clone https://github.com/Microck/jarspect.git
cd jarspect
cargo run
```

The server starts on http://127.0.0.1:18000 by default.
- Web UI -- http://localhost:18000/
- Health check -- http://localhost:18000/health
To run the end-to-end demo (builds a synthetic suspicious .jar and exercises the full API):
```bash
bash scripts/demo_run.sh
```

Jarspect ships as a single Rust binary. No external runtime or database required.
```bash
# 1. Install Rust (if you don't have it)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"

# 2. Clone and build
git clone https://github.com/Microck/jarspect.git
cd jarspect
cargo build --release

# 3. Run
./target/release/jarspect
```

All persistent data lands in `.local-data/` relative to the working directory (auto-created on first run).
The full flow is three HTTP calls:
Step 1: Upload the .jar
```bash
curl -X POST http://localhost:18000/upload \
  -F "file=@/path/to/yourmod.jar"
```

```json
{
  "upload_id": "a3f9c1d2e4b56789...",
  "filename": "yourmod.jar",
  "size_bytes": 204800,
  "storage_url": ".local-data/uploads/a3f9c1d2e4b56789....jar"
}
```

Step 2: Run the scan
```bash
curl -X POST http://localhost:18000/scan \
  -H "Content-Type: application/json" \
  -d '{"upload_id": "a3f9c1d2e4b56789..."}'
```

The response contains the full ScanRunResponse (see Data Model), including the `scan_id`.
Step 3: Fetch results
```bash
curl http://localhost:18000/scans/<scan_id>
```

| Variable | Default | Description |
|---|---|---|
| `JARSPECT_BIND` | `127.0.0.1:18000` | Host and port the HTTP server binds to |
| `JARSPECT_RULEPACKS` | `demo` | Which YARA/signature rulepacks to load: `demo`, `prod`, or `demo,prod` |
| `JARSPECT_AI_ENABLED` | `1` | Enable/disable the AI verdict even if Azure OpenAI env vars are set (`0`/`false` to disable) |
| `JARSPECT_UPLOAD_MAX_BYTES` | `52428800` | Maximum accepted upload size in bytes (default 50 MiB) |
| `JARSPECT_MB_HASH_MATCH_ENABLED` | `1` | Enable/disable MalwareBazaar hash matching (`0`/`false` to disable; useful for benchmarking static/AI detectors) |
| `RUST_LOG` | `jarspect=info,tower_http=info` | Log verbosity (uses tracing-subscriber env-filter syntax) |
| Variable | Description |
|---|---|
| `AZURE_OPENAI_ENDPOINT` | Azure OpenAI endpoint URL (e.g. `https://your-resource.openai.azure.com`) |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key |
| `AZURE_OPENAI_DEPLOYMENT` | Deployment name (e.g. `gpt-4o`) |
| `AZURE_OPENAI_API_VERSION` | API version (default: `2024-10-21`) |
| Variable | Description |
|---|---|
| `MALWAREBAZAAR_API_KEY` | MalwareBazaar API key for hash lookups (optional but recommended) |
Example: full production configuration:
```bash
JARSPECT_BIND=0.0.0.0:18000 \
JARSPECT_RULEPACKS=prod \
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com \
AZURE_OPENAI_API_KEY=your-key \
AZURE_OPENAI_DEPLOYMENT=gpt-4o \
MALWAREBAZAAR_API_KEY=your-mb-key \
RUST_LOG=jarspect=info \
cargo run --release
```

All responses are JSON. Error responses use `{"detail": "<message>"}`.
Upload a .jar file for later scanning. Accepts multipart/form-data with a field named file.
Constraints: .jar extension required; max 50 MB.
Response 200:
```json
{
  "upload_id": "a3f9c1d2e4b56789abcdef0123456789",
  "filename": "suspicious-mod.jar",
  "size_bytes": 18432,
  "storage_url": ".local-data/uploads/a3f9c1d2e4b56789abcdef0123456789.jar"
}
```

`upload_id` is a 32-character lowercase hex string (UUID v4, simple form).
Run the 3-layer scan pipeline (MalwareBazaar -> bytecode extraction -> AI verdict) on a previously uploaded .jar.
Request body:
```json
{
  "upload_id": "a3f9c1d2e4b56789abcdef0123456789"
}
```

Response 200: Full ScanRunResponse (see Data Model).
Retrieve a previously persisted scan result. scan_id must be a 32-character hex string.
Response 404: {"detail": "Scan not found"}
Liveness check. Reports AI status, loaded rulepacks, signature/YARA rule counts, and feature flags.
Response 200:
```json
{
  "status": "ok",
  "service": "jarspect",
  "version": "0.1.0",
  "ai_enabled": true,
  "rulepacks": "prod",
  "signature_count": 12,
  "yara_rule_count": 6,
  "mb_hash_match_enabled": true,
  "upload_max_bytes": 52428800
}
```

Open http://localhost:18000/ after starting the server.
The single-page console lets you:
- Drop a `.jar` file onto the upload zone or use the file picker
- Click "Run scan" -- the UI calls `/upload` then `/scan` and streams status messages
- Inspect the verdict -- displays the AI verdict (CLEAN / SUSPICIOUS / MALICIOUS), confidence score, detection method, and the AI's full explanation
- Review AI reasoning -- per-capability rationale from the AI explaining why each detected capability is or isn't concerning
- Browse indicators -- filterable list with severity badges, evidence text, and rationale from both bytecode detectors and threat intelligence
```mermaid
flowchart TB
    subgraph Client
        UI["Browser UI"]
        CLI["CLI / scripts"]
    end
    subgraph Server["Jarspect (Axum server)"]
        H["GET /health"]
        UP["POST /upload\n(multipart .jar)"]
        SC["POST /scan\n(upload_id)"]
        GET["GET /scans/{scan_id}"]
    end
    subgraph Storage["Local storage (.local-data)"]
        UPL[(".local-data/uploads/{upload_id}.jar")]
        SCS[(".local-data/scans/{scan_id}.json")]
    end
    subgraph Pipeline["Scan pipeline (src/scan.rs)"]
        SHA["SHA-256"]
        MB["Layer 1: MalwareBazaar hash lookup"]
        STATIC["Layer 2: Bytecode analysis\n(archive traversal + evidence + YARA + detectors)"]
        PROF["Capability profile"]
        AI["Layer 3: AI verdict (Azure OpenAI)"]
        OVR["Static override (high-confidence)\n(prod YARA + malware-specific detectors)"]
    end
    subgraph External["External services"]
        MBAPI["MalwareBazaar (abuse.ch)"]
        AZ["Azure OpenAI"]
    end
    subgraph Signatures["Signature data"]
        SIGS["data/signatures/{demo,prod}/"]
    end
    UI --> UP
    CLI --> UP
    UI --> SC
    CLI --> SC
    UI --> GET
    CLI --> GET
    UP -->|write jar| UPL
    SC -->|read jar| UPL
    SC --> SHA --> MB --> MBAPI
    MB -->|no match| STATIC
    STATIC --> SIGS
    STATIC --> PROF --> AI --> AZ
    AI --> OVR -->|persist result| SCS
    GET -->|read result| SCS
```
The scan pipeline lives in src/scan.rs as an orchestrator that runs the 3-layer pipeline. src/main.rs is the Axum transport layer.
```text
POST /scan
|
+- SHA-256 hash          sha2::Sha256::digest()
+- MalwareBazaar lookup  malwarebazaar::check_hash() - match? -> MALICIOUS
|
+- Archive traversal     analysis::read_archive_entries_recursive()
+- Bytecode extraction   analysis::extract_bytecode_evidence()
+- YARA per-entry        analysis::run_yara_scan()
+- Capability detectors  detectors::run_detectors() - 8 detectors against EvidenceIndex
+- Profile builder       profile::build_profile() - structured capability summary
|
+- AI verdict            verdict::ai_verdict() - Azure OpenAI gpt-4o analysis
                         returns CLEAN / SUSPICIOUS / MALICIOUS + explanation
```
Signature loading happens once at startup:
- `data/signatures/{demo,prod}/signatures.json` -- loaded per rulepack
- `data/signatures/{demo,prod}/rules.yar` -- compiled via `yara_x::Compiler`, stored as `Arc<Rules>`
Scan results are persisted as pretty-printed JSON at:
.local-data/scans/{scan_id}.json
Top-level shape (ScanRunResponse):
| Field | Type | Description |
|---|---|---|
| `scan_id` | string | 32-character hex UUID for this scan run |
| `verdict` | object | AI verdict: `result`, `confidence`, `risk_score`, `method`, `explanation`, `capabilities_assessment` |
| `malwarebazaar` | object or null | MalwareBazaar match details (if Layer 1 matched): `sha256_hash`, `family`, `tags`, `first_seen` |
| `capabilities` | object or null | Capability signals per detector: `{ name: { present, evidence[] } }` |
| `yara_hits` | array or null | YARA rule matches: `[{ id, severity, file_path, evidence }]` |
| `metadata` | object or null | Mod metadata from `fabric.mod.json` / `mods.toml` / `plugin.yml` |
| `profile` | object or null | Full CapabilityProfile sent to the AI for analysis |
| `intake` | object | Upload metadata: `upload_id`, `storage_path`, `file_count`, `class_file_count` |
Verdict object:
| Field | Type | Description |
|---|---|---|
| `result` | string | `CLEAN`, `SUSPICIOUS`, or `MALICIOUS` |
| `confidence` | f64 | 0.0 to 1.0 confidence in the verdict |
| `risk_score` | u8 | 0 to 100 risk score |
| `method` | string | How the verdict was determined: `ai_verdict`, `malwarebazaar_hash`, `static_override(ai_verdict)`, or `heuristic_fallback` |
| `explanation` | string | Full prose explanation of findings and reasoning |
| `capabilities_assessment` | map<string, string> | Per-capability rationale from the AI (e.g. `{ "execution": "Runtime.exec used for GPU probing, not malicious" }`) |
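For reference, the documented fields map onto a plain Rust struct roughly like this (a sketch only; the real type lives in `src/` and may differ in naming and serde derives):

```rust
use std::collections::HashMap;

/// Plain-Rust mirror of the documented verdict object (illustrative).
struct Verdict {
    result: String,                                 // CLEAN / SUSPICIOUS / MALICIOUS
    confidence: f64,                                // 0.0..=1.0
    risk_score: u8,                                 // 0..=100
    method: String,                                 // e.g. "ai_verdict"
    explanation: String,                            // prose reasoning
    capabilities_assessment: HashMap<String, String>, // per-capability rationale
}

fn main() {
    let v = Verdict {
        result: "CLEAN".to_string(),
        confidence: 0.92,
        risk_score: 5,
        method: "ai_verdict".to_string(),
        explanation: "No malicious indicators found.".to_string(),
        capabilities_assessment: HashMap::from([(
            "network".to_string(),
            "Update checker; single HTTPS endpoint".to_string(),
        )]),
    };
    assert_eq!(v.result, "CLEAN");
    assert!(v.confidence <= 1.0 && v.risk_score <= 100);
}
```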
- No sandbox. Jarspect does not execute or load any `.class` files. All analysis is purely static (bytecode-level constant-pool and instruction parsing).
- AI-dependent. Production verdicts require a working Azure OpenAI endpoint. Without AI configuration, scans will fail with an error. The AI model's judgment is the final authority on ambiguous cases.
- Rate limiting. Azure OpenAI endpoints may be rate-limited (429 responses). Jarspect retries with exponential backoff but will fail if rate-limited for too long.
- Synthetic demo fixtures. The bundled demo rulepack matches strings from `demo/suspicious_sample.jar`, a synthetic artifact built by `demo/build_sample.sh`. No real malware samples are included in the repository.
- Static analysis only. The bytecode layer extracts capabilities and artifacts deterministically from bytecode evidence, but does not execute code.
- 50 MB upload cap. Enforced server-side; configurable via `JARSPECT_UPLOAD_MAX_BYTES`.
- `.jar` only. Other archive types are rejected at the upload handler.
- Budget-gated extraction. Recursive archive scanning has per-entry size, compression-ratio, total-inflated-bytes, and depth limits to prevent zip-bomb denial of service.
```bash
# Check for compile errors
cargo check

# Run tests
cargo test

# Build optimized binary
cargo build --release

# Run with verbose logging
RUST_LOG=debug cargo run

# Run with production YARA rules
JARSPECT_RULEPACKS=prod cargo run
```

Project layout:
```text
src/
  main.rs                       Axum HTTP transport layer
  lib.rs                        core types, static analysis, run_scan() delegation
  scan.rs                       3-layer scan pipeline orchestrator
  verdict.rs                    AI verdict via Azure OpenAI (prompt, retry, rate-limit handling)
  profile.rs                    capability profile builder (structured AI input)
  malwarebazaar.rs              MalwareBazaar hash lookup (abuse.ch)
  analysis/
    mod.rs                      analysis module exports and shared types
    archive.rs                  recursive jar-in-jar traversal with budget gates
    classfile_evidence.rs       cafebabe class parsing, constant-pool + invoke resolution
    byte_array_strings.rs       new String(new byte[]{...}) reconstruction state machine
    evidence.rs                 EvidenceIndex for detector lookups
    metadata.rs                 fabric.mod.json / mods.toml / META-INF/neoforge.mods.toml / plugin.yml / MANIFEST.MF
    yara.rs                     per-entry YARA scanning with rulepack separation
  detectors/
    mod.rs                      detector runner and exports
    spec.rs                     detector specification types
    index.rs                    EvidenceIndex builder
    capability_exec.rs          DETC-01: process execution
    capability_network.rs       DETC-02: network I/O
    capability_dynamic_load.rs  DETC-03: dynamic class loading
    capability_fs_modify.rs     DETC-04: filesystem/JAR modification
    capability_persistence.rs   DETC-05: persistence mechanisms
    capability_deser.rs         DETC-06: unsafe deserialization
    capability_native.rs        DETC-07: native/JNI loading
    capability_cred_theft.rs    DETC-08: credential theft
    capability_base64_stager.rs compound: base64-encoded stager detection
    capability_discord_webhook.rs compound: Discord webhook exfiltration
    capability_remote_code_load.rs compound: remote code fetch + load correlation
data/signatures/
  demo/                         demo rulepack (matches synthetic fixtures)
  prod/                         production rulepack (real bytecode-aware rules)
web/
  index.html                    single-page browser UI
  app.js                        UI logic with verdict rendering
  styles.css                    UI styles (Geist)
docs/
  corpus-calibration.md         calibration report from corpus testing
  benchmarking.md               benchmark workflows and aggregation
  false-positives.md            FP case studies and fixes
  brand/                        logo assets
scripts/
  demo_run.sh                   end-to-end demo (build sample + scan)
  modrinth-top-50-scan.sh       benign benchmark: download + scan Modrinth top mods
  scan-local-dir.sh             batch scan a local directory of jars
  malwarebazaar-download.sh     download MalwareBazaar samples by tag
  select-malwarebazaar-dataset.ts filter downloaded jars to mod-like subset
  aggregate-run.ts              aggregate a run into CSV + summary JSON
.local-data/                    runtime data (gitignored)
  uploads/{upload_id}.jar       uploaded .jar files
  scans/{scan_id}.json          persisted scan results
```
To add new YARA rules, append rules to the appropriate rulepack under `data/signatures/{demo,prod}/rules.yar`. Include `meta.severity` in your rules for automatic severity mapping. The compiler runs at startup and will report any syntax errors.
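A hypothetical rule shape following those conventions (illustrative only; it matches nothing real, and uses multiple corroborating strings plus `meta.severity` as the production rules do):

```yara
rule example_suspicious_stager : stager
{
    meta:
        severity = "high"  // picked up by the automatic severity mapping
        description = "Illustrative rule shape -- not a real detection"
    strings:
        $loader = "java/net/URLClassLoader"
        $helper = "StagingHelper"
    condition:
        all of them  // require corroboration; avoid single-string rules
}
```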
Issues and pull requests are welcome at github.com/Microck/jarspect.
For bug reports, include the scan_id and the anonymized .jar that triggered the issue (or a minimal reproduction). For new detection rules, include the rationale and a safe synthetic fixture that demonstrates the match.
Apache-2.0. See LICENSE.
Built at the Microsoft AI Dev Days Hackathon 2026.