Contextrie

Links: Manifesto · YouTube Demo · @contextrie/core · @contextrie/parsers · @contextrie/cli · Discord


License: MIT

Dynamic context curation for long-running agent work.

AI agents get worse as irrelevant context piles up. Contextrie helps you select, index, judge, and compose the right context for each task so long-running agent systems stay sharp.

Contextrie is a context-engineering toolkit for agent workflows.

Use it in two ways:

  • as a library inside your own agent system with @contextrie/core and @contextrie/parsers
  • as a local CLI that indexes project files and composes task-specific context into .contextrie/context.md

Status: early development; expect breaking changes.


Start Here

CLI quickstart

Install the published CLI and use contextrie directly.

```shell
npm install -g @contextrie/cli
contextrie --index --all --openai-api-key "$OPENAI_API_KEY" --openai-model "gpt-5.4"
contextrie --task "Summarize the files most relevant to parser source path handling."
```

This writes:

  • .contextrie/sources.json: indexed source metadata
  • .contextrie/context.md: the composed task-specific context

Use --openai-base-url as well if you are targeting an OpenAI-compatible provider.
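For example, a sketch of pointing the CLI at a compatible endpoint (the base URL, environment variable, and model name below are placeholders, not recommendations):

```shell
# Point the CLI at any OpenAI-compatible endpoint.
# The URL, key variable, and model name are placeholders for your provider's values.
contextrie --index --all \
  --openai-base-url "https://api.example.com/v1" \
  --openai-api-key "$PROVIDER_API_KEY" \
  --openai-model "my-provider-model"
```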

For local development inside this repo, you can still run bun run ./cli/index.ts ....

Library quickstart

Install the packages:

npm install @contextrie/core @contextrie/parsers

A minimal end-to-end example:

```typescript
import { openai } from "@ai-sdk/openai";
import {
  ComposerAgent,
  DocumentSource,
  IndexingAgent,
  JudgeAgent,
} from "@contextrie/core";

const model = openai("gpt-5.4");
const objective = "response";
const task = "Explain which internal docs matter most when debugging why retrieval is missing indexed metadata.";
const source = new DocumentSource(
  "indexing-architecture",
  undefined,
  "Indexed sources store generated metadata separately from source content, and shallow judgment relies on that metadata for fast relevance scoring.",
);
const indexed = await new IndexingAgent(model).add(source).run();
const judgments = await new JudgeAgent(model).from(indexed).run({
  objective,
  input: task,
});
const context = await new ComposerAgent(model)
  .from(
    Object.fromEntries(
      indexed.map((item) => [item.id, { source: item, decision: judgments[item.id] }]),
    ),
  )
  .run({
    objective,
    input: task,
  });

console.log(context);
```

What This Is

Contextrie is for systems where agents should not see everything all the time.

It gives you primitives to:

  • define sources around the content you want to retrieve
  • generate metadata for those sources
  • score relevance against a task
  • compose a tighter working context for the next agent step
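The judge-then-compose handoff reduces to building a map of judged sources keyed by id, as in the library quickstart. A minimal sketch of that data shaping (the Indexed and Decision shapes here are assumptions mirroring the quickstart, not the library's actual types):

```typescript
// Assumed shapes for illustration; @contextrie/core's real types may differ.
type Indexed = { id: string; metadata: string };
type Decision = { relevant: boolean; score: number };

// Pair each indexed source with its judgment, keyed by source id --
// the shape the quickstart feeds into ComposerAgent.from(...).
function buildComposerInput(
  indexed: Indexed[],
  judgments: Record<string, Decision>,
): Record<string, { source: Indexed; decision: Decision }> {
  return Object.fromEntries(
    indexed.map((item) => [item.id, { source: item, decision: judgments[item.id] }]),
  );
}
```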

Why It Exists

Most agent systems fail gradually, not instantly. They accumulate irrelevant context, lose precision, and waste tokens. Contextrie makes context selection a first-class part of the system instead of an afterthought.


Packages

  • @contextrie/core: published now, TypeScript contracts and core agents
  • @contextrie/parsers: published now, file parsers for .csv, .md, and .txt
  • @contextrie/cli: published npm CLI for indexing local sources and composing task-specific context
  • coding-agent: planned coding agent harness with a stub README in coding-agent/README.md
  • benchmarks: benchmark definitions and protocol for evaluating agent workflows
  • docs: documentation site in progress
  • python: Python core library with mirrored source and agent contracts

Examples

Runnable examples are available in examples/.

Start with examples/demo/README.md for the smallest end-to-end flow:

  • index a few sources
  • judge them against a task
  • compose the final context output

Use examples/parsers/README.md for a standalone package example that consumes @contextrie/parsers and parses local files into Contextrie sources.

Benchmark planning lives in benchmarks/README.md, and the coding agent harness stub lives in coding-agent/README.md.


Roadmap 🚧

  • Coding agent harness and eval protocol
  • Hosted docs site
  • CLI binary distribution
  • More examples and eval coverage

Call To Action

  • Install and try @contextrie/core
  • Install and try @contextrie/parsers
  • Read the manifesto
  • Watch the demo
  • Join the Discord
  • Open an issue if you want a parser, adapter, or language target

Repo Layout

.
├─ assets/        Visuals and branding
├─ benchmarks/    Benchmark protocol and future harness
├─ cli/           Published npm CLI package
├─ coding-agent/  Coding agent harness stub and future implementation
├─ core/          TypeScript library (npm)
├─ docs/          SvelteKit documentation site
├─ examples/      Minimal examples
├─ python/        Python core library
└─ README.md      Project overview

Concepts

  • Sources always carry underlying content
  • A source without metadata is still a draft
  • IndexingAgent adds metadata to that source for retrieval and shallow judgment

State flow:

```mermaid
flowchart LR
  Source["Source (content, no metadata)"] --> IndexedSource["Source (content + metadata)"]
```
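The same transition can be sketched as plain types (shapes assumed for illustration; the actual DocumentSource API may differ):

```typescript
// Assumed shapes for illustration; not @contextrie/core's actual types.
type DraftSource = { id: string; content: string };
type IndexedSource = { id: string; content: string; metadata: string };

// Indexing attaches generated metadata while leaving the content untouched,
// standing in here for what IndexingAgent produces with a model call.
function index(source: DraftSource, generatedMetadata: string): IndexedSource {
  return { ...source, metadata: generatedMetadata };
}
```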

Getting Started

Each package maintains its own development and contribution instructions. Start in the package README for the area you are working on.

For library usage, start with core/README.md. For evaluation work, start with benchmarks/README.md.


Status

Early development; expect breaking changes.
