---
layout: default
title: "Letta Tutorial - Chapter 1: Getting Started"
nav_order: 1
has_children: false
parent: Letta Tutorial
---
Welcome to Chapter 1: Getting Started with Letta. In this part of Letta Tutorial: Stateful LLM Agents, you will build an intuitive mental model first, then move into concrete implementation details and practical production tradeoffs.
Install Letta, create your first agent, and start a conversation with persistent memory.
Letta (formerly MemGPT) enables AI agents with persistent memory. This chapter covers installation, basic setup, and your first conversation with an agent that remembers.
- Python 3.9+
- OpenAI API key or compatible LLM provider
- Basic command line knowledge
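Before installing, you can sanity-check these prerequisites with a short script. This helper is illustrative, not part of Letta, and it assumes OpenAI as your provider:

```python
import os
import sys

def check_prerequisites(min_version=(3, 9)):
    """Return a list of problems found; an empty list means you're ready."""
    problems = []
    if sys.version_info < min_version:
        problems.append(f"Python {min_version[0]}.{min_version[1]}+ is required")
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    return problems

if __name__ == "__main__":
    for problem in check_prerequisites():
        print("Problem:", problem)
```

If the script prints nothing, you are good to proceed with the installation below.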
Install Letta via pip:

```bash
pip install letta
```

Or for development:

```bash
git clone https://github.com/letta-ai/letta.git
cd letta
pip install -e .
```

Create your first agent with one command:

```bash
letta create --name sam --persona "You are Sam, a helpful AI assistant."
```

This creates an agent with default settings and starts a chat session.
Set up your API keys and configuration:

```bash
# Set OpenAI API key
export OPENAI_API_KEY="sk-your-key-here"

# Or configure via letta config
letta configure
```

Choose your LLM provider and model:

```bash
letta config set default_model gpt-4o-mini
letta config set default_embedding_model text-embedding-ada-002
```

Start chatting with your agent:
```bash
letta chat --name sam
```

In the chat interface:

```text
Human: Hi, I'm John and I work as a software developer.
Assistant: Hello John! I'm Sam, your helpful AI assistant. I see you're a software developer. I'll remember that for our future conversations.

Human: What's my name and profession?
Assistant: Your name is John and you're a software developer. I remember that from our conversation just now!
```
The agent remembers your introduction across the session!
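Why does this work? The agent keeps a small, always-in-context "core memory" that it edits as you talk. As a rough mental model only, not Letta's actual implementation, you can picture it like this:

```python
class CoreMemory:
    """Toy model of an agent's always-in-context memory. Illustrative only."""

    def __init__(self, persona):
        self.persona = persona   # who the agent is
        self.human_facts = []    # what it has learned about the user

    def remember(self, fact):
        # Store each fact once; real systems also edit and evict facts.
        if fact not in self.human_facts:
            self.human_facts.append(fact)

    def render(self):
        # This text is prepended to every prompt, so facts persist across turns.
        facts = "; ".join(self.human_facts) or "nothing yet"
        return f"Persona: {self.persona}\nKnown about user: {facts}"

memory = CoreMemory("You are Sam, a helpful AI assistant.")
memory.remember("Name is John")
memory.remember("Works as a software developer")
print(memory.render())
```

The key design idea is that memory is part of the prompt, not a separate lookup step, which is why the agent can answer "What's my name?" without any extra retrieval call.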
Check what your agent knows:

```bash
letta get-agent --name sam
```

This shows the agent's core memory, including facts about you.
Create agents with different personalities:

```bash
# A creative writing assistant
letta create --name writer --persona "You are a creative writing coach who helps with stories and characters."

# A coding assistant
letta create --name coder --persona "You are an expert software engineer who writes clean, efficient code."
```

Chat with different agents:

```bash
letta chat --name writer
letta chat --name coder
```

Each agent maintains its own memory and personality.
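Because each agent owns its memory, facts shared with one never leak into another. Here is a toy picture of that isolation, using a hypothetical in-memory schema rather than Letta's actual storage format:

```python
# Each agent gets its own persona and its own memory store.
# Hypothetical schema for illustration only.
agents = {
    "writer": {"persona": "creative writing coach", "memory": []},
    "coder": {"persona": "expert software engineer", "memory": []},
}

def tell(name, fact):
    """Record a fact in one agent's memory without touching the others."""
    agents[name]["memory"].append(fact)

tell("writer", "User is drafting a sci-fi novel")
assert agents["coder"]["memory"] == []  # isolation between agents
```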
View an agent's memory:

```bash
# See all agents
letta list-agents

# Get detailed info about an agent
letta get-agent --name sam --json
```

Common issues:

- API Key Not Set: Make sure `OPENAI_API_KEY` is exported
- Model Not Available: Check that your API key has access to the specified model
- Memory Issues: Ensure you have enough disk space for the local database
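The first and third causes can be checked programmatically. This diagnostic helper is my own sketch, not a Letta command:

```python
import os
import shutil

def diagnose(path="."):
    """Check for the common failure causes listed above."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return {
        "api_key_set": bool(os.environ.get("OPENAI_API_KEY")),
        "disk_free_gb": round(free_gb, 1),
    }

print(diagnose())
```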
Now that you have agents running, let's explore how their memory actually works in the next chapter.
Here is what a full sample session looks like end to end:

```text
$ letta create --name sam --persona "You are Sam, a helpful AI assistant."
Created agent 'sam' with ID: agent_123

$ letta chat --name sam
Starting chat with agent 'sam'...

Human: Hello! I'm Alex, a data scientist from New York.
Assistant: Hi Alex! I'm Sam. It's great to meet you. I'll remember you're a data scientist from New York for our future conversations.

Human: What do you know about me?
Assistant: I know that your name is Alex, you're a data scientist, and you're from New York. Is there anything else you'd like me to know or any questions you have?
```

The agent automatically stored and retrieved your personal information! This is the foundation of Letta's persistent memory system.
Most teams struggle here because the hard part is not writing more code, but drawing clear boundaries between agent setup, naming, and conversation handling so behavior stays predictable as complexity grows.
In practical terms, this chapter helps you avoid three common failures:
- coupling core logic too tightly to one implementation path
- missing the handoff boundaries between setup, execution, and validation
- shipping changes without clear rollback or observability strategy
After working through this chapter, you should be able to reason about Chapter 1: Getting Started with Letta as an operating subsystem inside Letta Tutorial: Stateful LLM Agents, with explicit contracts for inputs, state transitions, and outputs.
Use the implementation notes around `letta chat` and `letta create` as your checklist when adapting these patterns to your own repository.
Under the hood, Chapter 1: Getting Started with Letta usually follows a repeatable control path:

- Context bootstrap: initialize runtime config and prerequisites for `letta`.
- Input normalization: shape incoming data so the named agent receives stable contracts.
- Core execution: run the main logic branch and propagate intermediate state through the assistant.
- Policy and safety checks: enforce limits, auth scopes, and failure boundaries.
- Output composition: return canonical result payloads for downstream consumers.
- Operational telemetry: emit logs/metrics needed for debugging and performance tuning.

When debugging, walk this sequence in order and confirm each stage has explicit success/failure conditions.
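The control path above can be sketched as a sequence of explicit stages, each of which either succeeds or raises. This is a generic staged-pipeline pattern with placeholder bodies, not Letta internals:

```python
# A generic staged-pipeline sketch of the control path above.
# Stage names mirror the list; the bodies are placeholders.

def bootstrap(ctx):
    # Context bootstrap: load runtime config and prerequisites.
    ctx["config"] = {"model": "gpt-4o-mini"}  # placeholder config
    return ctx

def normalize(ctx):
    # Input normalization: give downstream stages a stable contract.
    ctx["input"] = ctx["raw_input"].strip()
    return ctx

def execute(ctx):
    # Core execution: stand-in for the actual LLM/agent call.
    ctx["output"] = f"echo: {ctx['input']}"
    return ctx

def check_policy(ctx):
    # Policy and safety checks: enforce limits and failure boundaries.
    if len(ctx["output"]) > 1000:
        raise ValueError("output exceeds policy limit")
    return ctx

def run_pipeline(raw_input):
    ctx = {"raw_input": raw_input}
    for stage in (bootstrap, normalize, execute, check_policy):
        ctx = stage(ctx)  # each stage either succeeds or raises
    return ctx["output"]

print(run_pipeline("  hello  "))  # prints "echo: hello"
```

Structuring the flow this way makes the debugging advice concrete: you can insert logging or assertions between stages without touching the stages themselves.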
Use the following upstream sources to verify implementation details while reading this chapter:

- View Repo (github.com): the authoritative reference for the Letta source code.
- Awesome Code Docs (github.com): a curated collection of code documentation and tutorials that this chapter draws on.
Suggested trace strategy:

- search upstream code for `letta` and `name` to map concrete implementation paths
- compare docs claims against actual runtime/config code before reusing patterns in production