---
layout: default
title: "Chapter 1: Getting Started with LangChain"
parent: LangChain Tutorial
nav_order: 1
---
Welcome to your first steps with LangChain! If you've ever wanted to build applications that can understand and generate human-like text, you're in the right place. In this chapter, we'll set up your development environment and create your first LangChain application.
Imagine you want to build a chatbot that can:
- Remember your previous conversations
- Search through your documents to answer questions
- Use tools like calculators or web browsers
- Work with different AI models seamlessly
Before LangChain, you'd have to write custom code for each of these features. LangChain provides pre-built components that you can "chain" together like building blocks.
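To make "chaining" concrete before we touch the library, here is a plain-Python sketch of the idea: each component's output becomes the next component's input. The `retrieve` and `build_prompt` functions are hypothetical stand-ins, not LangChain APIs.

```python
# Conceptual sketch in plain Python (no LangChain): "chaining" means each
# component's output feeds the next component's input.

def retrieve(question: str) -> dict:
    # Hypothetical document-lookup step
    return {"question": question, "context": "LangChain composes LLM components."}

def build_prompt(inputs: dict) -> str:
    # Turn retrieved context plus the question into a single prompt string
    return (
        "Answer using the context.\n"
        f"Context: {inputs['context']}\n"
        f"Question: {inputs['question']}"
    )

def chain(*steps):
    # Compose steps left to right, like snapping building blocks together
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

pipeline = chain(retrieve, build_prompt)
print(pipeline("What is LangChain?"))
```

LangChain's real components follow the same shape, with a shared interface so any compatible blocks can be connected.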
Let's start by setting up your development environment. LangChain works with Python, so you'll need Python 3.8 or higher.
```bash
# Create a virtual environment
python -m venv langchain-env
source langchain-env/bin/activate  # On Windows: langchain-env\Scripts\activate

# Install LangChain
pip install langchain

# Install OpenAI integration (you'll need an API key)
pip install langchain-openai

# Optional: Install additional integrations
pip install langchain-community langchain-core
```

Let's create a simple application that uses LangChain to interact with a language model. This will help you understand the basic concepts.
```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Initialize the language model
chat = ChatOpenAI(
    temperature=0.7,  # Controls creativity (0.0 = deterministic, 1.0 = very creative)
    model="gpt-3.5-turbo"  # You can also use gpt-4 for better results
)

# Create a simple conversation
messages = [
    SystemMessage(content="You are a helpful assistant that explains concepts clearly."),
    HumanMessage(content="What is LangChain and why should I use it?")
]

# Get the response
response = chat.invoke(messages)
print(response.content)
```

Let's break down what just happened:
The language model is the "brain" of your application. LangChain supports many different models:
- OpenAI's GPT models
- Anthropic's Claude
- Google's Gemini
- Local models via Ollama
- And many more!
LangChain uses a structured format for conversations:
- `SystemMessage`: Sets the AI's behavior and role
- `HumanMessage`: Represents user input
- `AIMessage`: Contains the AI's response
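Conceptually, each message class pairs a role with some content, much like the role-based JSON format most chat APIs accept. A minimal plain-Python sketch of the same structure (these helpers are illustrative, not the LangChain classes):

```python
# Plain-dict sketch of the three message roles; LangChain's SystemMessage,
# HumanMessage, and AIMessage wrap the same role-plus-content idea.
def system(content): return {"role": "system", "content": content}
def human(content): return {"role": "user", "content": content}
def ai(content): return {"role": "assistant", "content": content}

conversation = [
    system("You are a helpful assistant."),
    human("What does LLM stand for?"),
    ai("LLM stands for Large Language Model."),
]

for msg in conversation:
    print(f"{msg['role']}: {msg['content']}")
```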
The `.invoke()` method is LangChain's standard way to run components. You'll see this pattern throughout the framework.
When you call `chat.invoke(messages)`, LangChain:
- Formats the messages into the format expected by the OpenAI API
- Makes the API call to OpenAI's servers
- Parses the response back into LangChain's message format
- Returns the result for you to use
This abstraction layer is what makes LangChain powerful - you can swap out different models without changing your code!
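The value of that shared interface can be sketched in plain Python. `FakeOpenAI` and `FakeClaude` below are hypothetical stand-ins, but real LangChain chat models expose the same `.invoke()` contract, so application code does not need to change when the model does:

```python
# Pure-Python sketch of the "swap the model, keep the code" idea.
# FakeOpenAI and FakeClaude are hypothetical stand-ins for LangChain chat
# models, which share a common .invoke() interface.

class FakeOpenAI:
    def invoke(self, messages):
        return "response from gpt-style model"

class FakeClaude:
    def invoke(self, messages):
        return "response from claude-style model"

def ask(model, question):
    # Application code depends only on the shared .invoke() contract
    return model.invoke([{"role": "user", "content": question}])

print(ask(FakeOpenAI(), "Hi"))
print(ask(FakeClaude(), "Hi"))  # swapped in without touching ask()
```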
Let's create a simple test script to make sure everything works:
```python
# test_langchain.py
import os

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Make sure to set your OpenAI API key
# You can get one from https://platform.openai.com/api-keys
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

def test_basic_chat():
    """Test basic chat functionality"""
    chat = ChatOpenAI(temperature=0.7)
    messages = [
        HumanMessage(content="Hello! Can you tell me one fun fact about programming?")
    ]
    response = chat.invoke(messages)
    print("🤖 AI Response:")
    print(response.content)
    print("\n✅ LangChain is working correctly!")

if __name__ == "__main__":
    test_basic_chat()
```

Error: OpenAI API key not found

Solution: Set your API key as an environment variable:

```bash
export OPENAI_API_KEY="your-api-key-here"
```

Error: Connection timeout

Solution: Check your internet connection and OpenAI service status.

Error: ImportError

Solution: Make sure you're using compatible versions:

```bash
pip install --upgrade langchain langchain-openai langchain-core
```

Congratulations! 🎉 You've just:
- Set up your LangChain environment with Python and necessary packages
- Created your first LangChain application that can chat with an AI
- Learned about core components like language models and messages
- Understood the basic architecture of how LangChain works
Now that you have the basics working, you're ready to explore more advanced features. In the next chapter, we'll learn about Prompt Templates - a powerful way to create reusable prompts for consistent results.
Ready for more? Continue to Chapter 2: Prompt Templates & Chains
What would you like to build with your new LangChain setup? Try modifying the example to ask different questions or change the system message! 🚀
Most teams struggle here because the hard part is not writing more code, but setting clear boundaries around `langchain`, `messages`, and `chat` so behavior stays predictable as complexity grows.
In practical terms, this chapter helps you avoid three common failures:
- coupling core logic too tightly to one implementation path
- missing the handoff boundaries between setup, execution, and validation
- shipping changes without clear rollback or observability strategy
After working through this chapter, you should be able to reason about Chapter 1: Getting Started with LangChain as an operating subsystem inside LangChain Tutorial: Building AI Applications with Large Language Models, with explicit contracts for inputs, state transitions, and outputs.
Use the implementation notes around `content`, `response`, and `install` as your checklist when adapting these patterns to your own repository.
Under the hood, Chapter 1: Getting Started with LangChain usually follows a repeatable control path:
- Context bootstrap: initialize runtime config and prerequisites for `langchain`.
- Input normalization: shape incoming data so `messages` receives stable contracts.
- Core execution: run the main logic branch and propagate intermediate state through `chat`.
- Policy and safety checks: enforce limits, auth scopes, and failure boundaries.
- Output composition: return canonical result payloads for downstream consumers.
- Operational telemetry: emit logs/metrics needed for debugging and performance tuning.
When debugging, walk this sequence in order and confirm each stage has explicit success/failure conditions.
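One way to make "explicit success/failure conditions" concrete is to have every stage return its status alongside its state. The skeleton below is a hypothetical illustration of the control path above, not code from this tutorial:

```python
# Hypothetical skeleton of the control path above; each stage returns
# (ok, state) so success and failure are explicit and checkable in order.

def bootstrap(config):
    # Context bootstrap: set up runtime state from config
    return True, {"config": config}

def normalize(state, raw_input):
    # Input normalization: give downstream stages a stable contract
    state["messages"] = [str(raw_input)]
    return True, state

def execute(state):
    # Core execution: the main logic branch
    state["result"] = f"handled: {state['messages'][0]}"
    return True, state

def run(config, raw_input):
    for stage, args in [(bootstrap, (config,)),
                        (normalize, (raw_input,)),
                        (execute, ())]:
        if stage is bootstrap:
            ok, state = stage(*args)
        else:
            ok, state = stage(state, *args)
        if not ok:  # explicit failure boundary at every stage
            return None
    return state["result"]

print(run({}, "ping"))  # handled: ping
```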
Use the following upstream sources to verify implementation details while reading this chapter:

- View Repo (github.com). Why it matters: authoritative reference for the implementation.

Suggested trace strategy:

- search upstream code for `langchain` and `messages` to map concrete implementation paths
- compare docs claims against actual runtime/config code before reusing patterns in production
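The search step can also be scripted. A minimal sketch in plain Python that scans a directory tree for an identifier; the demo directory and file below are throwaway stand-ins for an upstream checkout:

```python
import pathlib
import tempfile

def find_usages(root: pathlib.Path, needle: str):
    """Return (filename, line_number) pairs where `needle` appears in .py files."""
    hits = []
    for path in sorted(root.rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if needle in line:
                hits.append((path.name, lineno))
    return hits

# Demo on a throwaway tree standing in for the upstream checkout
root = pathlib.Path(tempfile.mkdtemp())
(root / "app.py").write_text("from langchain_core.messages import HumanMessage\n")
print(find_usages(root, "HumanMessage"))  # [('app.py', 1)]
```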