Context Engineering 2.0: The Context of Context Engineering

arXiv Paper   |   GitHub   |   SII Context

🚀 TL;DR

  • We formalize “context” and “context engineering,” situating them within a 20+ year history (from GUI-era context-aware systems to agentic LLM systems).
  • We frame the evolution across eras:
    CE 1.0 (primitive computing) → 2.0 (intelligent agents) → 3.0 (human-level) → 4.0 (superhuman).
  • Core idea: context engineering can be seen as a process of entropy reduction, transforming high-entropy human/environmental signals into low-entropy, machine-interpretable representations (a toy numerical sketch follows below).
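
As a back-of-the-envelope illustration of the entropy-reduction view (our sketch, not from the paper), the snippet below compares the character-level Shannon entropy of a rambling natural-language request against a compact structured representation of the same intent. The request strings and the JSON fields are invented for illustration.

```python
# Toy illustration (ours, not from the paper) of context engineering as
# entropy reduction, using character-level Shannon entropy as a crude proxy.
# The request strings and the JSON schema are made-up examples.
from collections import Counter
import math

def entropy_bits_per_char(text: str) -> float:
    """Empirical Shannon entropy of a string, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A high-entropy, ambiguous human request ...
raw = ("hey so um could you maybe look through my inbox and find that "
       "thing Bob sent about the Q3 numbers sometime last week?")
# ... versus a low-entropy, machine-interpretable representation of it.
engineered = '{"action": "search_email", "sender": "Bob", "topic": "Q3 numbers", "window": "last_week"}'

for label, text in [("raw", raw), ("engineered", engineered)]:
    h = entropy_bits_per_char(text)
    # Total cost ~ bits/char * length; the engineered form is far cheaper.
    print(f"{label:>10}: {h:.2f} bits/char x {len(text):3d} chars ~ {h * len(text):.0f} bits")
```

Character-level entropy is only a rough proxy for the paper's information-theoretic argument, but it makes the direction of the transformation concrete: roughly the same intent, many fewer bits.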


🌐 Related Blogs

🎤 Talks & Discussions

📚 Papers

Era 1.0

  • Towards a Better Understanding of Context and Context-Awareness, Abowd, Dey, et al., Springer
  • A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications, Dey et al., Human-Computer Interaction
  • Context-Aware Computing Applications, Schilit et al., IEEE
  • The Computer for the 21st Century, Weiser, Scientific American
  • The active badge location system, Want et al., ACM
  • Pervasive computing: Vision and challenges, Satyanarayanan, IEEE
  • A survey of mobile phone sensing, Lane et al., IEEE
  • Sensing meets mobile social networks: The design, implementation and evaluation of the CenceMe application, Miluzzo et al., ACM

Era 2.0

  • Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing, Liu et al., ACM
  • A Survey of Context Engineering for Large Language Models, Mei et al., arXiv
  • AgentFold: Long-Horizon Web Agents with Proactive Context Management, Ye et al., arXiv
  • MemGPT: Towards LLMs as Operating Systems, Packer et al., arXiv
  • Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory, Chhikara et al., arXiv
  • MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents, Zhou et al., arXiv
  • MemOS: A Memory OS for AI System, Li et al., arXiv
  • EXIT: Context-aware extractive compression for enhancing retrieval-augmented generation, Hwang et al., arXiv
  • Prompt compression with context-aware sentence encoding for fast and improved LLM inference, Liskavets et al., arXiv
  • LLM4Tag: Automatic tagging system for information retrieval via large language models, Tang et al., arXiv
  • LIFT: Improving long context understanding of large language models through long input fine-tuning, Mao et al., arXiv
  • GILL: Generative image-to-language and language-to-image pretraining for unified vision-language understanding and generation, Shen et al., arXiv
  • PromptCap: Prompt-guided zero-shot image captioning, Wang et al., CVPR
  • Kosmos-2: Grounded language-image-action models for vision, language, and action, Huang et al., arXiv
  • Perceiver: General perception with iterative attention, Jaegle et al., ICML
  • RA-CM3: Retrieval-augmented contextual multimodal models, Wang et al., arXiv
  • MemGPT-Vision: Salience-guided memory for multimodal agents, Xu et al., arXiv
  • UI-TARS: Pioneering automated GUI interaction with native agents, Qin et al., arXiv
  • ChatDev: Collaborative software development with LLM agents, Li et al., NeurIPS
  • MEMOS: An operating system for memory-augmented generation in large language models, Han et al., arXiv
  • A-Mem: Agentic memory for LLM agents, Chen et al., arXiv
  • ContextAdapter: Dynamic cross-system context translation for heterogeneous agents, Zhang et al., IEEE
  • SharedRep: A standardized context representation for multi-platform AI integration, Garcia et al., IJCAI
  • CAIM: Development and evaluation of a cognitive AI memory, Westhäuser et al., arXiv
  • Pretraining context compressor for large language models, Dai et al., ACL
  • Kosmos-3: Scaling cross-modal alignment with temporal fusion layers, Li et al., arXiv
  • BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation, Li et al., ICML
  • Flamingo: A visual language model for few-shot learning, Alayrac et al., arXiv
  • HMT: Hierarchical Memory Transformer for Efficient Long Context Language Processing, He et al.
  • Task memory engine: Spatial memory for robust multi-step LLM agents, Ye et al., arXiv
  • G-Memory: Tracing hierarchical memory for multi-agent systems, Zhang et al., arXiv
  • Long-term memory: The foundation of AI self-evolution, Jin et al., arXiv
  • Large language models empower personalized valuation in auction, Sun et al., arXiv
  • Tree of Thoughts: Deliberate problem solving with large language models, Yao et al., NeurIPS
  • Flexible brain–computer interfaces, Tang et al., Nature Electronics
  • A memristor-based adaptive neuromorphic decoder for brain–computer interfaces, Liu et al., Nature Electronics
  • Non-invasive brain–computer interfaces: state of the art and trends, Edelman et al., IEEE
  • Affective brain–computer interfaces (aBCIs): A tutorial, Wu et al., IEEE
  • Retrieval-augmented generation (RAG) and beyond: A comprehensive survey on how to make your LLMs use external data more wisely, Zhao et al., arXiv
  • Evolution and prospects of foundation models: From large language models to large multimodal models, Chen et al., Computers
  • Survey on explainable AI: From approaches, limitations and applications aspects, Yang et al., Springer
  • Large language models and knowledge graphs: Opportunities and challenges, Pan et al., arXiv
  • Mamba: Linear-time sequence modeling with selective state spaces, Gu et al., arXiv
  • LongMamba: Enhancing Mamba for long context tasks, Ye et al., arXiv
  • LOCOST: Long context, sparse transformers, Le Bronnec et al., arXiv

🔒 Public Contents Policy

  • The full LaTeX sources and complete paper are now public.
  • You are welcome to read, share, and cite the work.
  • Community feedback, issues, and PRs for corrections or improvements to the source are encouraged.
