SII Context | Paper | GitHub
- We formalize “context” and “context engineering,” situating them within a 20+ year history (from GUI-era context-aware systems to agentic LLM systems).
- We frame the evolution across eras: CE 1.0 (primitive computing) → CE 2.0 (intelligent agents) → CE 3.0 (human-level) → CE 4.0 (superhuman).
- Core idea: context engineering can be seen as a process of entropy reduction, transforming high-entropy human/environmental signals into low-entropy, machine-interpretable representations (a schematic formulation follows below).
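One way to make the entropy-reduction idea concrete (a schematic, information-bottleneck-style reading; the symbols $S$, $Y$, $C$, $f$, and $\varepsilon$ are illustrative and not notation from the paper): choose a context-construction map $f$ that drives down the entropy of the engineered context $C = f(S)$ while preserving almost all of the information the raw signals $S$ carry about the task output $Y$:

$$
\min_{f}\; H\bigl(f(S)\bigr)
\quad \text{subject to} \quad
I\bigl(f(S);\,Y\bigr) \;\ge\; I(S;\,Y) - \varepsilon
$$

Under this reading, the "high-entropy human/environmental signals" are $S$, the "low-entropy machine-interpretable representation" is $C$, and good context engineering keeps the information loss $\varepsilon$ small while making $H(C)$ as small as possible.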
- Manus | Context Engineering for AI Agents: Lessons from Building Manus
- Anthropic | Writing effective tools for agents — with agents
- Letta | Anatomy of a Context Window: A Guide to Context Engineering
- Letta | Agent Memory: How to Build Agents that Learn and Remember
- Letta | Memory Blocks: The Key to Agentic Context Management
- GitHub | 12-Factor Agents: Principles for Building Reliable LLM Applications
- Context Rot: How Increasing Input Tokens Impacts LLM Performance
- What is Context Engineering and How It Differs from Prompt Engineering
- Context Engineering with Agents using LangGraph: A Guide for Modern AI Development
- Context Engineering: What It Is, and Techniques to Consider
- Andrej Karpathy on X: "+1 for 'context engineering' over 'prompt engineering'"
- Context Engineering with DSPy: The Fully Hands-On Basics-to-Pro Course
- Towards a Better Understanding of Context and Context-Awareness, Dey et al.
- A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications, Dey et al.
- Context-Aware Computing Applications, Schilit et al.
- The Computer for the 21st Century, Weiser
- The Active Badge Location System, Want et al.
- ContextAdapter: Dynamic Cross-System Context Translation for Heterogeneous Agents, Zhang et al.
- Towards a Better Understanding of Context and Context-Awareness, Abowd et al.
- Pervasive Computing: Vision and Challenges, Satyanarayanan
- A Survey of Mobile Phone Sensing, Lane et al.
- Sensing Meets Mobile Social Networks: The Design, Implementation and Evaluation of the CenceMe Application, Miluzzo et al.
- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing, Liu et al.
- A Survey of Context Engineering for Large Language Models, Mei et al.
- AgentFold: Long-Horizon Web Agents with Proactive Context Management, Ye et al.
- MemGPT: Towards LLMs as Operating Systems, Packer et al.
- Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory, Chhikara et al.
- MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents, Zhou et al.
- MemOS: A Memory OS for AI System, Li et al.
- EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation, Hwang et al.
- Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference, Liskavets et al.
- LLM4Tag: Automatic Tagging System for Information Retrieval via Large Language Models, Tang et al.
- LIFT: Improving Long Context Understanding of Large Language Models through Long Input Fine-Tuning, Mao et al.
- GILL: Generative Image-to-Language and Language-to-Image Pretraining for Unified Vision-Language Understanding and Generation, Shen et al.
- PromptCap: Prompt-Guided Zero-Shot Image Captioning, Wang et al.
- Kosmos-2: Grounded Language-Image-Action Models for Vision, Language, and Action, Huang et al.
- Perceiver: General Perception with Iterative Attention, Jaegle et al.
- RA-CM3: Retrieval-Augmented Contextual Multimodal Models, Wang et al.
- MemGPT-Vision: Salience-Guided Memory for Multimodal Agents, Xu et al.
- UI-TARS: Pioneering Automated GUI Interaction with Native Agents, Qin et al.
- ChatDev: Collaborative Software Development with LLM Agents, Li et al.
- MEMOS: An Operating System for Memory-Augmented Generation in Large Language Models, Han et al.
- A-Mem: Agentic Memory for LLM Agents, Chen et al.
- SharedRep: A Standardized Context Representation for Multi-Platform AI Integration, Garcia et al.
- CAIM: Development and Evaluation of a Cognitive AI Memory, Westhäuser et al.
- Pretraining Context Compressor for Large Language Models, Dai et al.
- Kosmos-3: Scaling Cross-Modal Alignment with Temporal Fusion Layers, Li et al.
- BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation, Li et al.
- Flamingo: A Visual Language Model for Few-Shot Learning, Alayrac et al.
- HMT: Hierarchical Memory Transformer for Efficient Long Context Language Processing, He et al.
- Task Memory Engine: Spatial Memory for Robust Multi-Step LLM Agents, Ye et al.
- G-Memory: Tracing Hierarchical Memory for Multi-Agent Systems, Zhang et al.
- Long-Term Memory: The Foundation of AI Self-Evolution, Jin et al.
- Large Language Models Empower Personalized Valuation in Auction, Sun et al.
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models, Yao et al.
- Flexible Brain–Computer Interfaces, Tang et al.
- A Memristor-Based Adaptive Neuromorphic Decoder for Brain–Computer Interfaces, Liu et al.
- Non-Invasive Brain–Computer Interfaces: State of the Art and Trends, Edelman et al.
- Affective Brain–Computer Interfaces (aBCIs): A Tutorial, Wu et al.
- Retrieval-Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make Your LLMs Use External Data More Wisely, Zhao et al.
- Evolution and Prospects of Foundation Models: From Large Language Models to Large Multimodal Models, Chen et al.
- Survey on Explainable AI: From Approaches, Limitations and Applications Aspects, Yang et al.
- Large Language Models and Knowledge Graphs: Opportunities and Challenges, Pan et al.
- Mamba: Linear-Time Sequence Modeling with Selective State Spaces, Gu et al.
- LongMamba: Enhancing Mamba for Long Context Tasks, Ye et al.
- LOCOST: Long Context, Sparse Transformers, Le Bronnec et al.
- The full LaTeX sources and complete paper are now public.
- You are welcome to read, share, and cite the work.
- Community feedback, issues, and PRs for corrections or improvements to the source are encouraged.

