AI Engineering Learning: LangChain Fundamentals
Table of Contents
- Introduction
- Week 1: Core Concepts
- Week 2: Chains and Advanced Concepts
- Practical Exercises
- Debugging and Best Practices
Introduction
What is LangChain?
LangChain is an open-source Python framework designed to simplify building applications powered by Large Language Models (LLMs). Instead of writing complex API calls and managing prompts manually, LangChain provides modular components that you can combine like building blocks.
Think of LangChain as a bridge between your application code and LLMs, handling:
- Connection to different LLM providers (OpenAI, Anthropic, Hugging Face, etc.)
- Prompt management and templating
- Memory and context management
- Tool integration and chains
- Debugging and monitoring
Why Learn LangChain?
- Abstraction: You don’t need to write raw API calls
- Modularity: Reusable components that work together seamlessly
- Flexibility: Swap between LLM providers with minimal code changes
- Productivity: Build complex AI applications faster
- Best Practices: Built-in patterns for production applications
Week 1: Core Concepts
1. Development Environment Setup
Step 1: Create a Virtual Environment
A virtual environment isolates your project dependencies from your system Python.
# Create virtual environment
python -m venv langchain_env
# Activate it (macOS/Linux)
source langchain_env/bin/activate
# Activate it (Windows)
langchain_env\Scripts\activate
Step 2: Install Required Packages
# Install core LangChain packages
pip install langchain==0.1.0
pip install langchain-openai==0.0.2
pip install langchain-community==0.0.12
pip install python-dotenv
pip install openai==1.7.2
# Verify installation
python -c "import langchain; print(langchain.__version__)"
Step 3: Set Up Your OpenAI API Key
- Go to platform.openai.com
- Create an account and set up billing
- Navigate to API keys section
- Create a new API key
- Create a .env file in your project root:
OPENAI_API_KEY=sk-proj-your-api-key-here
Important: Never commit your .env file to version control. Add it to .gitignore.
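For example, a minimal .gitignore for this project might contain:

.env
langchain_env/
__pycache__/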
2. Chat Models and LLMs - Theoretical Foundation
What Are Chat Models?
Chat Models are language models specifically designed for conversational interactions. Unlike traditional text generation models, chat models:
- Accept multiple messages as input
- Understand different message roles (System, Human, AI)
- Return structured message objects
- Are optimized for back-and-forth conversations
Message Types
There are three primary message types in LangChain:
- SystemMessage: Tells the model how to behave
  - Defines the model's role or persona
  - Provides context and instructions
  - Not visible to end users
  - Example: "You are a helpful coding assistant"
- HumanMessage: User's input to the model
  - Represents questions or requests from the user
  - Can include context or data
  - Marked as coming from the human/user
- AIMessage: Model's response
  - Generated by the language model
  - Can include structured data like tool calls
  - Marks previous model outputs in conversation history
Key LLM Parameter: Temperature
Temperature controls the randomness/creativity of the model’s output:
- Temperature = 0.0: Deterministic and focused
  - Always picks the most likely token
  - Use for: Code generation, data extraction, structured outputs
  - Example: SQL queries, JSON formatting
- Temperature = 0.5-0.7: Balanced (recommended for most cases)
  - Mix of consistency and creativity
  - Use for: General writing, summarization, Q&A
- Temperature = 0.9-1.0: Creative and diverse
  - Picks less likely tokens, more variety
  - Use for: Brainstorming, storytelling, creative writing
Remember: Temperature ≠ Creativity alone. It controls randomness, which increases diversity but doesn’t guarantee creative quality.
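To see the effect yourself, here is a minimal sketch (using the ChatOpenAI class introduced in the next section, and assuming your OPENAI_API_KEY is set) that runs the same prompt at two temperatures:

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.schema.messages import HumanMessage

load_dotenv()

prompt = [HumanMessage(content="Suggest one name for a robot dog.")]

# temperature=0.0: repeated calls should give (nearly) identical answers
# temperature=1.0: repeated calls should vary noticeably
for temp in (0.0, 1.0):
    model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=temp)
    print(f"--- temperature={temp} ---")
    for _ in range(2):
        print(model.invoke(prompt).content)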
3. Your First Chat Model - Practical
Code Example
Create a file first_model.py:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.schema.messages import SystemMessage, HumanMessage, AIMessage

# Load environment variables
load_dotenv()

# Step 1: Initialize the chat model
# gpt-3.5-turbo is fast and cost-effective for learning
chat_model = ChatOpenAI(
    model="gpt-3.5-turbo-0125",
    temperature=0,  # Deterministic for learning
    api_key=os.getenv("OPENAI_API_KEY")
)

# Step 2: Create messages
system_message = SystemMessage(
    content="You are a helpful Python programming tutor. Explain concepts clearly."
)
human_message = HumanMessage(
    content="What is a Python list comprehension? Give a simple example."
)

# Step 3: Invoke the model
response = chat_model.invoke([system_message, human_message])

# Step 4: Access the response
print(f"Model response: {response.content}")
print(f"Response type: {type(response)}")
What’s Happening?
- ChatOpenAI(...): Creates a connection to OpenAI's API
- invoke([messages]): Sends the messages to the model and waits for the response
- Response object: Returns an AIMessage containing the model's answer
Running the Code
python first_model.py
Expected output:
Model response: A list comprehension is a concise way to create lists in Python...
Response type: <class 'langchain_core.messages.ai.AIMessage'>
Experiment: Understanding SystemMessage Impact
Create system_message_demo.py:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.schema.messages import SystemMessage, HumanMessage

load_dotenv()

chat_model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

# Test 1: Without system message
print("=== Without System Message ===")
response1 = chat_model.invoke([
    HumanMessage(content="What is photosynthesis?")
])
print(response1.content[:150] + "...")

# Test 2: With restrictive system message
print("\n=== With Restrictive System Message ===")
response2 = chat_model.invoke([
    SystemMessage(content="You are a strict biology teacher. Only answer about animals, never plants."),
    HumanMessage(content="What is photosynthesis?")
])
print(response2.content)

# Test 3: With different system message
print("\n=== With Casual System Message ===")
response3 = chat_model.invoke([
    SystemMessage(content="You are a casual friend explaining things. Use emojis and simple language."),
    HumanMessage(content="What is photosynthesis?")
])
print(response3.content[:200] + "...")
Key Learning: The SystemMessage completely changes how the model behaves, even with the same user question!
4. Prompt Templates - Theoretical Foundation
What Are Prompt Templates?
Prompt Templates are predefined structures for creating prompts with placeholders. Instead of hard-coding prompts, you:
- Define a template with variable placeholders (using {variable_name})
- Reuse the same template with different inputs
- Make prompts modular and maintainable
- Ensure consistency across multiple uses
Why Use Prompt Templates?
- Reusability: Define once, use everywhere
- Consistency: Same structure, different data
- Maintainability: Change template in one place
- Readability: Clear what variables are expected
Types of Prompt Templates
- Simple PromptTemplate: Basic string templates
- ChatPromptTemplate: For message-based prompts
- MessagePromptTemplate: For individual message types
- FewShotPromptTemplate: Includes examples in the prompt (see the sketch after this list)
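The first three are demonstrated in the next section. FewShotPromptTemplate is not revisited later, so here is a minimal sketch of it; the word/antonym examples are illustrative placeholders, not from this guide:

from langchain.prompts import FewShotPromptTemplate, PromptTemplate

# Each example is a dict whose keys match example_prompt's variables
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,              # worked examples shown to the model
    example_prompt=example_prompt,  # how each example is rendered
    prefix="Give the antonym of each word.",
    suffix="Word: {input}\nAntonym:",  # where the real query goes
    input_variables=["input"],
)

print(few_shot_prompt.format(input="fast"))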
5. Prompt Templates - Practical Examples
Simple PromptTemplate
Create simple_template.py:
from langchain.prompts import PromptTemplate

# Define a template with placeholders
template = """Tell me a joke about {topic} that is appropriate for {audience}."""

# Create the prompt template
prompt_template = PromptTemplate(
    input_variables=["topic", "audience"],
    template=template
)

# Format with actual values
formatted_prompt = prompt_template.format(
    topic="programming",
    audience="software engineers"
)

print(formatted_prompt)
# Output: Tell me a joke about programming that is appropriate for software engineers.
ChatPromptTemplate with Multiple Messages
Create chat_template.py:
from langchain.prompts import (
    PromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    ChatPromptTemplate
)

# Method 1: Using from_template (simple)
simple_template = ChatPromptTemplate.from_template(
    """You are a {role}. Answer the following question:
{question}"""
)

# Method 2: Using message templates (more control)
system_template = """You are an expert {expertise} with {years} years of experience."""
system_prompt = SystemMessagePromptTemplate(
    prompt=PromptTemplate(
        input_variables=["expertise", "years"],
        template=system_template
    )
)

human_template = """{user_query}"""
human_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        input_variables=["user_query"],
        template=human_template
    )
)

# Combine into ChatPromptTemplate
chat_template = ChatPromptTemplate(
    input_variables=["expertise", "years", "user_query"],
    messages=[system_prompt, human_prompt]
)

# Format and view the messages
messages = chat_template.format_messages(
    expertise="Python programming",
    years="10",
    user_query="How do I optimize my code?"
)

print("Messages to send to model:")
for msg in messages:
    print(f"{msg.__class__.__name__}: {msg.content[:50]}...")
Week 2: Chains and Advanced Concepts
1. Understanding Chains - Theoretical Foundation
What Are Chains?
A Chain is a sequence of operations connected together where the output of one step becomes the input to the next. LangChain chains:
- Connect prompts, models, and tools
- Allow complex workflows with multiple steps
- Are reusable and composable
- Support both sync and async execution
The Problem Chains Solve
Without chains, you’d write:
# Manual, error-prone
prompt_str = template.format(input1=x, input2=y)
response = model.invoke(prompt_str)
parsed = parser.parse(response)
# ... more manual steps
With chains, you write:
# Clean, reusable
chain = template | model | parser
result = chain.invoke({"input1": x, "input2": y})
2. LangChain Expression Language (LCEL)
What is LCEL?
LCEL (LangChain Expression Language) is a declarative way to compose chains using the pipe operator (|).
Core Concept: The Pipe Operator
The pipe operator (|) chains components together:
chain = component1 | component2 | component3
This reads as: “Take the output of component1, pass it to component2, then pass that output to component3.”
Building Your First Chain
Create first_chain.py:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

# Step 1: Create a prompt template
prompt = ChatPromptTemplate.from_template(
    """Write a short poem about {topic} in the style of {style}."""
)

# Step 2: Create a model
model = ChatOpenAI(
    model="gpt-3.5-turbo-0125",
    temperature=0.7,  # A bit creative for poetry
    api_key=os.getenv("OPENAI_API_KEY")
)

# Step 3: Create an output parser (converts AIMessage to string)
parser = StrOutputParser()

# Step 4: Build the chain using LCEL
chain = prompt | model | parser

# Step 5: Execute the chain
result = chain.invoke({
    "topic": "autumn leaves",
    "style": "haiku"
})

print("Generated Poem:")
print(result)
Understanding the Flow
Input Dictionary
↓
prompt.format() → Formatted prompt
↓
model.invoke() → AIMessage with content
↓
parser.invoke() → Clean string output
↓
Final Result
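Each component in the chain is a Runnable with its own invoke(), so you can trace these intermediate types by hand. A quick sketch, reusing the prompt, model, and parser from first_chain.py:

# Stage 1: the template produces a prompt value wrapping the messages
prompt_value = prompt.invoke({"topic": "autumn leaves", "style": "haiku"})
print(type(prompt_value))

# Stage 2: the model consumes the prompt value and returns an AIMessage
ai_message = model.invoke(prompt_value)
print(type(ai_message))

# Stage 3: the parser extracts the plain string content
text = parser.invoke(ai_message)
print(type(text))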
3. Advanced Chain Examples
Example 1: Multi-Step Chain with Analysis
Create analysis_chain.py:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

# Chain 1: Summarize text
summarize_prompt = ChatPromptTemplate.from_template(
    """Summarize the following text in 2-3 sentences:
{text}"""
)
summarize_chain = summarize_prompt | model | StrOutputParser()

# Chain 2: Extract key topics from summary
topics_prompt = ChatPromptTemplate.from_template(
    """Extract the 3 most important topics from this summary:
{summary}
Return as a numbered list."""
)
topics_chain = topics_prompt | model | StrOutputParser()

# Combine chains manually
article = """Machine learning is a subset of artificial intelligence that enables
systems to learn and improve from experience without being explicitly programmed.
It focuses on data analysis and pattern recognition."""

summary = summarize_chain.invoke({"text": article})
print("Summary:", summary)

topics = topics_chain.invoke({"summary": summary})
print("\nKey Topics:", topics)
Example 2: Using RunnablePassthrough
Create advanced_chain.py:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

load_dotenv()

model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

prompt = ChatPromptTemplate.from_template(
    """Original request: {original_request}
Response to evaluate: {response}
Evaluate if the response properly addresses the request. Be concise."""
)

# RunnablePassthrough() passes its input unchanged to the next step.
# Here it forwards the whole input dict straight to the prompt. Caveat:
# wrapping passthroughs in a dict (e.g. {"key": RunnablePassthrough()})
# would hand each branch the *entire* input, not just the matching key.
chain = (
    RunnablePassthrough()
    | prompt
    | model
    | StrOutputParser()
)

request = "Explain quantum computing"
response = "Quantum computers use quantum bits..."

result = chain.invoke({
    "original_request": request,
    "response": response
})
print("Evaluation:", result)
4. Execution Methods: invoke, batch, stream
invoke() - Single Execution
# Simple, single execution
result = chain.invoke({"topic": "Python"})
print(result)
batch() - Multiple Executions
Create batch_execution.py:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0.7)

prompt = ChatPromptTemplate.from_template("Tell a joke about {topic}")
chain = prompt | model | StrOutputParser()

# Process multiple topics at once
topics = [
    {"topic": "programming"},
    {"topic": "coffee"},
    {"topic": "Python"}
]

results = chain.batch(topics)

for topic, joke in zip(topics, results):
    print(f"\nJoke about {topic['topic']}:")
    print(joke[:150] + "...")
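batch() also accepts a config dictionary. If you run into provider rate limits, you can cap how many requests run in parallel with the standard max_concurrency config field:

# Limit parallel API calls to 2 at a time
results = chain.batch(topics, config={"max_concurrency": 2})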
stream() - Token-by-Token Streaming
Create streaming_execution.py:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0.7)

prompt = ChatPromptTemplate.from_template(
    """Write a short story about {topic} in 50 words."""
)
chain = prompt | model | StrOutputParser()

# Stream the response token-by-token
print("Streaming story:")
print("-" * 40)

for chunk in chain.stream({"topic": "a lost robot"}):
    print(chunk, end="", flush=True)

print("\n" + "-" * 40)
print("Streaming complete!")
Comparison: invoke vs batch vs stream
# invoke: One input, wait for complete output
result = chain.invoke({"topic": "AI"})  # Single string

# batch: Multiple inputs, wait for all outputs
results = chain.batch([{"topic": "AI"}, {"topic": "ML"}])  # List of strings

# stream: One input, tokens arrive gradually
for chunk in chain.stream({"topic": "AI"}):  # Generator of strings
    print(chunk, end="")
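Every chain also exposes async counterparts (ainvoke, abatch, astream), which is what "support both sync and async execution" referred to earlier. A minimal sketch, assuming the chain from above:

import asyncio

async def main():
    # ainvoke mirrors invoke; abatch and astream follow the same pattern
    result = await chain.ainvoke({"topic": "AI"})
    print(result)

asyncio.run(main())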
Practical Exercises
Exercise 1: Build a Personal Assistant
Objective: Create a chain that acts as a personal productivity assistant.
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate, PromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

# Create system prompt
system_template = """You are a personal productivity assistant. Help the user organize their thoughts
and prioritize tasks. Be encouraging and practical in your suggestions."""
system_prompt = SystemMessagePromptTemplate(
    prompt=PromptTemplate(template=system_template, input_variables=[])
)

# Create human prompt
human_template = """I have {num_tasks} tasks to complete today and feel overwhelmed.
The tasks are: {tasks}
What would you recommend I focus on first?"""
human_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        input_variables=["num_tasks", "tasks"],
        template=human_template
    )
)

# Build prompt template
prompt = ChatPromptTemplate(
    messages=[system_prompt, human_prompt],
    input_variables=["num_tasks", "tasks"]
)

# Create model and chain
model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0.7)
chain = prompt | model | StrOutputParser()

# Execute
result = chain.invoke({
    "num_tasks": "5",
    "tasks": "Write report, review code, fix bug, team meeting, email follow-ups"
})

print("Assistant's Advice:")
print(result)
Exercise 2: Multi-Step Document Processing
Objective: Create a chain that processes documents through multiple steps.
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

# Step 1: Extract key information
extract_prompt = ChatPromptTemplate.from_template(
    """Extract the main points from this document:
{document}
List as bullet points."""
)
extract_chain = extract_prompt | model | StrOutputParser()

# Step 2: Summarize extracted points
summarize_prompt = ChatPromptTemplate.from_template(
    """Summarize these points into a single paragraph:
{points}"""
)
summarize_chain = summarize_prompt | model | StrOutputParser()

# Step 3: Generate action items
action_prompt = ChatPromptTemplate.from_template(
    """Based on this summary, what are the top 3 action items?
{summary}
Format as: 1. ... 2. ... 3. ..."""
)
action_chain = action_prompt | model | StrOutputParser()

# Execute the pipeline
document = """Our Q4 meeting covered new market opportunities in Southeast Asia,
budget increases for marketing by 20%, and the need to hire 5 new engineers.
We also discussed upgrading our data infrastructure."""

print("Step 1: Extracting key points...")
points = extract_chain.invoke({"document": document})
print(points)

print("\nStep 2: Summarizing...")
summary = summarize_chain.invoke({"points": points})
print(summary)

print("\nStep 3: Action items...")
actions = action_chain.invoke({"summary": summary})
print(actions)
Exercise 3: Comparison Chain
Objective: Use batch to compare outputs from different styles.
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0.5)

prompt = ChatPromptTemplate.from_template(
    """Explain {concept} as if you're talking to {audience}."""
)
chain = prompt | model | StrOutputParser()

# Define different audiences
audiences = [
    {"concept": "machine learning", "audience": "a 5-year-old"},
    {"concept": "machine learning", "audience": "a high school student"},
    {"concept": "machine learning", "audience": "a PhD researcher"},
]

# Use batch to process all at once
results = chain.batch(audiences)

for audience_dict, result in zip(audiences, results):
    print(f"\n{'='*50}")
    print(f"Explaining to: {audience_dict['audience']}")
    print(f"{'='*50}")
    print(result[:200] + "...\n")
Debugging and Best Practices
1. Enable Debug Mode
Using set_debug()
from langchain.globals import set_debug
# Enable detailed debugging
set_debug(True)
# Your chain execution here
result = chain.invoke({"topic": "AI"})
# Disable when done
set_debug(False)
This shows:
- Input to each component
- Output from each component
- Component names
- Timing information
- Token usage
Using set_verbose()
from langchain.globals import set_verbose
set_verbose(True)
result = chain.invoke({"topic": "AI"})
set_verbose(False)
Difference from set_debug():
- Less detailed than debug mode
- More readable formatting
- Skips token usage statistics
Using ConsoleCallbackHandler
from langchain.callbacks.tracers import ConsoleCallbackHandler

result = chain.invoke(
    {"topic": "AI"},
    config={'callbacks': [ConsoleCallbackHandler()]}
)
2. Common Issues and Solutions
Issue 1: Missing API Key
# ❌ Wrong
model = ChatOpenAI(model="gpt-3.5-turbo-0125")

# ✅ Correct
import os
from dotenv import load_dotenv

load_dotenv()
model = ChatOpenAI(
    model="gpt-3.5-turbo-0125",
    api_key=os.getenv("OPENAI_API_KEY")
)
Issue 2: Incorrect Input Variables
# ❌ Wrong - variable name doesn't match
prompt = ChatPromptTemplate.from_template("Hello {name}")
result = chain.invoke({"person": "Alice"}) # Wrong key!
# ✅ Correct
result = chain.invoke({"name": "Alice"})
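A quick way to catch mismatches early: every prompt template lists the variable names it expects in its input_variables attribute, so you can check before invoking:

prompt = ChatPromptTemplate.from_template("Hello {name}")
print(prompt.input_variables)  # ['name']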
Issue 3: Temperature Too High/Low
# ❌ For code generation
model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0.9) # Too creative!
# ✅ For code generation
model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0.2)
3. Best Practices
1. Always Use Environment Variables
# ✅ Good
from dotenv import load_dotenv
import os
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
# ❌ Bad
api_key = "sk-proj-xxx" # Exposed in code!
2. Use Meaningful Variable Names in Templates
# ✅ Clear
template = """For the {customer_type} customer,
the {issue} needs this {solution}."""
# ❌ Confusing
template = """For {x} customer, {y} needs {z}."""
3. Test with Simple Inputs First
# ✅ Start simple
prompt = ChatPromptTemplate.from_template("Hello {name}")
result = chain.invoke({"name": "World"})
print(result) # Test basic functionality
# Then add complexity
prompt = ChatPromptTemplate.from_template("Hello {name}, your age is {age}")
4. Always Use Temperature Appropriately
| Task | Temperature |
|------|-------------|
| Code generation | 0.0-0.3 |
| Data extraction | 0.0-0.2 |
| Q&A | 0.2-0.5 |
| General writing | 0.5-0.7 |
| Creative writing | 0.7-0.9 |
| Brainstorming | 0.8-1.0 |
5. Add Error Handling
try:
    result = chain.invoke({"topic": "AI"})
    print(f"Success: {result}")
except Exception as e:
    print(f"Error occurred: {e}")
    # Handle error appropriately
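For transient failures such as timeouts or rate limits, every Runnable also provides a with_retry() wrapper, so you do not need to hand-roll retry loops. A sketch:

# Retry the whole chain up to 3 attempts with exponential backoff
chain_with_retry = chain.with_retry(stop_after_attempt=3)
result = chain_with_retry.invoke({"topic": "AI"})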
Summary of Week 1-2 Learning
Week 1 Accomplishments
✅ Set up development environment
✅ Understood chat models and message types
✅ Learned about temperature parameter
✅ Created your first chat model interaction
✅ Learned prompt templates
Week 2 Accomplishments
✅ Understood chains and LCEL
✅ Built multiple chains
✅ Learned invoke, batch, stream methods
✅ Created multi-step chains
✅ Mastered debugging techniques
Next Steps (Week 3+)
After mastering these fundamentals, you’re ready for:
- Output Parsers: Parse structured data from model responses
- Memory: Add conversation history to chains
- Retrievers: Fetch relevant documents for context
- Agents: Build systems that decide which tools to use
- Streaming: Build real-time chatbot interfaces
Quick Reference
Most Important Code Patterns
# Pattern 1: Simple Chain
chain = prompt | model | parser
result = chain.invoke({"variable": "value"})

# Pattern 2: Multiple Messages
chat_template = ChatPromptTemplate(
    messages=[system_prompt, human_prompt],
    input_variables=["var1", "var2"]
)

# Pattern 3: Batch Processing
results = chain.batch([
    {"variable": "value1"},
    {"variable": "value2"}
])

# Pattern 4: Streaming
for chunk in chain.stream({"variable": "value"}):
    print(chunk, end="")
Resources
- Official LangChain Documentation
- LangChain GitHub Repository
- OpenAI API Documentation
- LangSmith for Debugging
Happy Learning! Remember: The best way to learn is by doing. Create small projects, experiment with different parameters, and build from simple to complex.
