Created with Claude Code - A comprehensive Python project demonstrating LangChain and LangGraph integration with Claude Sonnet, featuring AI agents, workflows, structured output, and advanced tool calling capabilities.
Features:

- AI Agents & Workflows - Built with LangChain/LangGraph for intelligent agent development
- Claude Sonnet Integration - Leveraging Anthropic's most capable model
- Structured Output - Pydantic models for type-safe, structured responses
- Tool Calling - Advanced examples of Claude calling external tools
- Workflow Orchestration - Multi-step agent workflows and business logic
- Production-Ready Patterns - Real-world examples for AI agent development

What you'll learn:

- LangChain Foundation - Core concepts for building AI applications
- LangGraph Workflows - Creating complex agent workflows and state management
- Claude Integration - Advanced patterns with Anthropic's Claude Sonnet
- Structured Output - Type-safe responses using Pydantic models
- Tool Orchestration - Building agents that can use external tools
- Agent Architecture - Designing scalable AI agent systems
- Production Patterns - Error handling, validation, and monitoring
Prerequisites:

- Python 3.8+
- Anthropic API key

Setup:

- Clone the repository:

```bash
git clone https://github.com/yourusername/langgraphing.git
cd langgraphing
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Set up environment variables:

```bash
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY
```
```bash
python simple_invoke.py
```

Simple example showing basic Claude integration and foundation concepts.

```bash
python structured_search.py
```

Demonstrates how to get structured JSON responses using Pydantic models - perfect for building reliable AI agents.

```bash
python tool_calling.py
```

Advanced example showing Claude calling tools and providing structured analysis - the foundation for LangGraph workflows.

```bash
python prompt_chaining.py
```

Demonstrates two different approaches to multi-step prompt chaining for financial trading decisions:

- Procedural: Class-based workflow with manual orchestration
- LangGraph Graph API: StateGraph with node-based architecture

```bash
python prompt_chaining_functional.py
```

Demonstrates the @entrypoint/@task functional approach, using LangGraph decorators for workflow orchestration with proper execution context.
```
langgraphing/
├── simple_invoke.py               # Basic Claude integration
├── structured_search.py           # Structured output with Pydantic
├── tool_calling.py                # Tool calling + agent workflows
├── prompt_chaining.py             # Procedural + Graph API trading workflows
├── prompt_chaining_functional.py  # @entrypoint/@task functional approach
├── tools_models.py                # Tool definitions with schemas
├── search_models.py               # Pydantic models for structured data
├── requirements.txt               # Python dependencies
├── .env.example                   # Environment variables template
├── CLAUDE.md                      # Claude Code guidance
└── README.md                      # This file
```
The project includes three example tools:

`calculator_tool`

- Purpose: Mathematical calculations
- Input: Mathematical expressions
- Output: Structured results with validation

`weather_tool`

- Purpose: Weather information and forecasts
- Input: Location and number of days
- Output: Current weather and forecast data

`database_query_tool`

- Purpose: Mock database queries
- Input: Table name, filters, and limits
- Output: Structured query results with metadata
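As a rough illustration of the shape of these tools, here is a minimal sketch of a calculator-style tool. The real definitions live in tools_models.py; the field names and error handling here are illustrative, not the project's actual schema:

```python
# Illustrative sketch only - tools_models.py defines the real tools and schemas.
from langchain_core.tools import tool

@tool
def calculator_tool(expression: str) -> dict:
    """Evaluate a mathematical expression and return a structured result."""
    try:
        # Demo-only evaluator; a production tool should use a safe math parser.
        result = eval(expression, {"__builtins__": {}}, {})
        return {"expression": expression, "result": result, "success": True}
    except Exception as exc:
        return {"expression": expression, "error": str(exc), "success": False}
```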
Basic invocation:

```python
import os

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-5-sonnet-20241022",
    api_key=os.getenv("ANTHROPIC_API_KEY")
)

response = model.invoke("Hello, Claude!")
```

Structured output:

```python
from pydantic import BaseModel

class MyResponse(BaseModel):
    answer: str
    confidence: float

structured_model = model.with_structured_output(MyResponse)
response = structured_model.invoke("What is 2+2?")
```

Tool calling:

```python
from langchain_core.tools import tool

@tool
def my_tool(input: str) -> str:
    """Description of what the tool does."""
    return f"Tool result: {input}"

model_with_tools = model.bind_tools([my_tool])
```

The tool_calling.py script demonstrates a powerful two-phase workflow pattern:
User Task → Claude Analyzes → Tool Calls → Tool Results → Structured Analysis
Key Components:
- Dual Model Setup: One for tool calling, one for structured output
- Pydantic Schemas: Type-safe tool results and analysis
- Intelligent Orchestration: Claude selects and sequences tools automatically
Phase 1: Tool Execution

```python
# Claude decides which tools to call
model_with_tools = model.bind_tools([calculator_tool, weather_tool, database_query_tool])
response = model_with_tools.invoke("Plan a company retreat with costs and weather")

# Tools are executed automatically:
# - weather_tool.invoke({"location": "San Francisco", "days": 5})
# - calculator_tool.invoke({"expression": "25 * 150 * 3"})
# - database_query_tool.invoke({"table": "users", "filters": {"role": "admin"}})
```

Phase 2: Structured Analysis
```python
from typing import Any, Dict, List

# Structured analysis of all tool results
class ToolAnalysisResult(BaseModel):
    task_description: str
    tools_used: List[str]
    tool_results: List[Dict[str, Any]]
    analysis: str
    confidence: float
    recommendations: List[str]

structured_model = model.with_structured_output(ToolAnalysisResult)
final_analysis = structured_model.invoke(f"Analyze these tool results: {tool_results}")
```

- Intelligent Tool Selection: Claude automatically chooses relevant tools
- Sequential Execution: Tools are called in logical order
- Structured Reliability: All outputs follow Pydantic schemas
- Business Logic Integration: Combines multiple tool results into actionable insights
- Error Handling: Graceful handling of tool failures with fallback strategies
- Confidence Scoring: Provides reliability metrics for business decisions
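For context, the tool calls Claude returns are not executed by the model itself; the script runs them and feeds the results back. A hedged sketch of that dispatch loop (simplified; tool_calling.py's actual orchestration may differ):

```python
# Simplified dispatch loop - the real script adds richer error handling.
tools_by_name = {t.name: t for t in [calculator_tool, weather_tool, database_query_tool]}

response = model_with_tools.invoke("Plan a company retreat with costs and weather")

tool_results = []
for call in response.tool_calls:  # each call carries a tool "name" and "args" dict
    selected_tool = tools_by_name[call["name"]]
    try:
        output = selected_tool.invoke(call["args"])
        tool_results.append({"tool": call["name"], "output": output})
    except Exception as exc:
        # Fallback: record the failure so the analysis phase can account for it
        tool_results.append({"tool": call["name"], "error": str(exc)})
```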
Task: "Plan company retreat for 25 employees"
β
Claude Analysis: Need weather, costs, and admin info
β
Tools Called:
- Weather: San Francisco, 5 days β "Sunny, 22Β°C average"
- Calculator: 25 Γ $150 Γ 3 days β "$11,250 total"
- Database: Admin users β "3 admin users found"
β
Structured Output:
{
"analysis": "Weather favorable, budget reasonable for 25 employees",
"confidence": 0.85,
"recommendations": ["Book outdoor venues", "Budget $12K total"],
"follow_up_questions": ["Dietary restrictions?", "Team building activities?"]
}
This pattern is foundational for LangGraph workflows where agents need to orchestrate multiple tools and provide reliable, structured outputs for business applications.
The prompt_chaining.py script demonstrates advanced prompt chaining - a powerful pattern where complex tasks are decomposed into sequential LLM calls, with each step processing the structured output of the previous step.
Traditional Approach (Single Prompt):

"Analyze AAPL stock and make a trading decision considering market conditions, risk factors, and portfolio allocation"

- Single massive prompt trying to do everything at once
- Often inconsistent, hard to validate, difficult to debug

Prompt Chaining Approach:

```
Step 1: "Analyze AAPL market conditions"                  → MarketAnalysis
Step 2: "Assess risk given this market analysis"          → RiskAssessment
Step 3: "Make trading decision based on analysis + risk"  → TradingDecision
```

- Specialized prompts with structured handoffs
- Consistent, validatable, modular workflow
```
Market Data Input → Market Analysis → Risk Assessment → Trading Decision → Final Result
                       ↓ validate        ↓ validate        ↓ validate
                       Gate 1            Gate 2            Gate 3
```
```python
class MarketAnalysis(BaseModel):
    symbol: str = "AAPL"
    current_price: float = 150.25
    price_trend: str = "Upward momentum with strong volume"
    volume_analysis: str = "85M shares vs 65M average"
    technical_indicators: Dict[str, Any] = {
        "rsi_14": 68.5, "macd": 1.25, "sma_20": 148.75
    }
    market_sentiment: str = "Bullish with technology sector strength"
    confidence_score: float = 0.85
```

Specialized Prompt Focus:
- Technical indicator analysis (RSI, MACD, Bollinger Bands)
- Volume and momentum assessment
- Market sentiment evaluation
- Price trend identification
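Wiring for Step 1 might look roughly like this; the prompt text and variable names are illustrative, not the exact code from prompt_chaining.py:

```python
# Illustrative Step 1 wiring; the actual prompt wording lives in prompt_chaining.py.
market_analyzer = model.with_structured_output(MarketAnalysis)

step1_prompt = (
    "You are a technical market analyst. Analyze the market data below for "
    f"{symbol} and report price trend, volume, key indicators, and sentiment.\n"
    f"Market data: {market_data}"
)
market_analysis = market_analyzer.invoke(step1_prompt)
```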
```python
class RiskAssessment(BaseModel):
    symbol: str = "AAPL"
    risk_level: RiskLevel = "MEDIUM"
    volatility_score: float = 0.65
    market_conditions: str = "Stable with moderate volatility"
    position_size_recommendation: float = 0.08  # 8% of portfolio
    stop_loss_level: float = 145.00
    risk_factors: List[str] = [
        "Tech sector concentration risk",
        "Earnings volatility near resistance"
    ]
    confidence_score: float = 0.78
```

Risk-Specific Analysis:
- Takes MarketAnalysis as input
- Evaluates portfolio-specific risk factors
- Calculates position sizing based on risk tolerance
- Sets stop-loss levels using technical analysis
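The handoff into Step 2 is the structured Step 1 output serialized into a new, risk-focused prompt. A sketch (again illustrative rather than the script's exact code; it assumes Pydantic v2's `model_dump_json()`):

```python
# Illustrative Step 2 handoff; the MarketAnalysis output feeds the risk prompt.
risk_assessor = model.with_structured_output(RiskAssessment)

step2_prompt = (
    f"You are a risk manager. Given this market analysis for {symbol}, assess "
    "risk level, volatility, position sizing, and stop-loss placement.\n"
    f"Market analysis: {market_analysis.model_dump_json()}\n"
    f"Portfolio info: {portfolio_info}"
)
risk_assessment = risk_assessor.invoke(step2_prompt)
```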
```python
class TradingDecision(BaseModel):
    symbol: str = "AAPL"
    action: TradingAction = "BUY"
    entry_price: float = 150.25
    target_price: float = 155.00
    stop_loss: float = 145.00
    position_size: float = 0.08
    reasoning: str = "Strong technical setup with controlled risk"
    expected_return: float = 0.032  # 3.2%
    time_horizon: str = "2-4 weeks"
    confidence_score: float = 0.82
```

Decision Synthesis:
- Combines MarketAnalysis + RiskAssessment
- Makes actionable BUY/SELL/HOLD decision
- Sets specific entry, target, and stop prices
- Provides detailed reasoning and return expectations
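Step 3 then synthesizes both structured inputs into one decision prompt (an illustrative sketch under the same assumptions as above):

```python
# Illustrative Step 3 synthesis; both prior structured outputs feed a single prompt.
decision_maker = model.with_structured_output(TradingDecision)

step3_prompt = (
    "You are a portfolio manager. Combine the market analysis and risk "
    f"assessment below into a BUY/SELL/HOLD decision for {symbol}, with entry, "
    "target, and stop prices.\n"
    f"Market analysis: {market_analysis.model_dump_json()}\n"
    f"Risk assessment: {risk_assessment.model_dump_json()}"
)
trading_decision = decision_maker.invoke(step3_prompt)
```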
Between each step, the workflow includes validation checkpoints:
```python
class TradingPromptChain:
    def execute_trading_workflow(self, symbol, market_data, portfolio_info):
        # Step 1: Market Analysis
        market_analysis = self.step1_market_analysis(symbol, market_data)

        # Gate 1: Confidence Threshold
        if market_analysis.confidence_score < 0.4:
            raise ValueError("Market analysis confidence below threshold")

        # Step 2: Risk Assessment
        risk_assessment = self.step2_risk_assessment(symbol, market_analysis, portfolio_info)

        # Gate 2: Risk Validation
        if risk_assessment.confidence_score < 0.4:
            raise ValueError("Risk assessment confidence below threshold")

        # Step 3: Trading Decision
        trading_decision = self.step3_trading_decision(symbol, market_analysis, risk_assessment)

        # Gate 3: Decision Validation
        if trading_decision.confidence_score < 0.4:
            raise ValueError("Trading decision confidence below threshold")

        # Cross-Step Coherence Validation
        warnings = self.validate_workflow_coherence(
            market_analysis, risk_assessment, trading_decision
        )

        return final_result
```

Beyond confidence scores, the workflow validates logical consistency:
```python
def validate_workflow_coherence(self, market_analysis, risk_assessment, trading_decision):
    warnings = []

    # Risk-Action Alignment
    if risk_assessment.risk_level == "HIGH" and trading_decision.action == "BUY":
        if trading_decision.position_size > 0.05:  # 5%
            warnings.append("High risk trade with large position - reduce exposure")

    # Stop-Loss Consistency
    if abs(trading_decision.stop_loss - risk_assessment.stop_loss_level) > 0.1:
        warnings.append("Stop-loss levels inconsistent between steps")

    # Position Size Alignment
    if abs(trading_decision.position_size - risk_assessment.position_size_recommendation) > 0.05:
        warnings.append("Position size differs from risk recommendation")

    return warnings
```

- Specialized Focus: Each prompt optimized for specific domain knowledge
- Structured Handoffs: Type-safe data flow between steps using Pydantic
- Quality Gates: Validation prevents bad data from propagating downstream
- Modularity: Can modify/replace individual steps independently
- Confidence Tracking: Reliability metrics at each stage
- Testability: Each step can be unit tested independently
- Debuggability: Easy to identify which step failed and why
```json
{
  "symbol": "AAPL",
  "workflow_timestamp": "2024-01-15T14:30:00",
  "market_analysis": {
    "price_trend": "Bullish momentum above 20-day SMA",
    "technical_indicators": {"rsi_14": 68.5, "macd_signal": "bullish"},
    "confidence_score": 0.85
  },
  "risk_assessment": {
    "risk_level": "MEDIUM",
    "position_size_recommendation": 0.08,
    "stop_loss_level": 145.00,
    "confidence_score": 0.78
  },
  "trading_decision": {
    "action": "BUY",
    "entry_price": 150.25,
    "target_price": 155.00,
    "reasoning": "Strong technical setup with controlled downside risk",
    "confidence_score": 0.82
  },
  "overall_confidence": 0.817,
  "warnings": []
}
```

Perfect For:
- Complex multi-domain decisions (finance, legal, medical)
- Workflows requiring validation between steps
- Business processes with compliance requirements
- Tasks needing audit trails and explainability
- Scenarios where partial failure should be handled gracefully

Not Ideal For:

- Simple single-domain tasks
- Creative tasks requiring holistic thinking
- Real-time applications with latency constraints
- Tasks where intermediate steps don't make sense
The scripts demonstrate the same trading workflow using three different patterns:

Procedural (class-based):

```python
class TradingPromptChain:
    def execute_trading_workflow(self, symbol, market_data, portfolio_info):
        # Manual step execution with validation gates
        market_analysis = self.step1_market_analysis(symbol, market_data)
        risk_assessment = self.step2_risk_assessment(symbol, market_analysis, portfolio_info)
        trading_decision = self.step3_trading_decision(symbol, market_analysis, risk_assessment)
        return self.aggregate_results(...)
```

LangGraph Graph API (StateGraph):

```python
workflow = StateGraph(TradingWorkflowState)
workflow.add_node("analyze_market", market_analysis_node)
workflow.add_node("assess_risk", risk_assessment_node)
workflow.add_node("make_decision", trading_decision_node)
workflow.add_edge("analyze_market", "assess_risk")

app = workflow.compile()
result = app.invoke(initial_state)
```

LangGraph Functional API (@task):

```python
@task
def analyze_market_task(symbol: str, market_data: Dict) -> MarketAnalysis:
    return market_analyzer.invoke(prompt)

@task
def assess_risk_task(symbol: str, market_analysis: MarketAnalysis, portfolio_info: Dict) -> RiskAssessment:
    return risk_assessor.invoke(prompt)

# Functional composition
def execute_workflow(symbol, market_data, portfolio_info):
    market_analysis = analyze_market_task(symbol, market_data)
    risk_assessment = assess_risk_task(symbol, market_analysis, portfolio_info)
    trading_decision = make_trading_decision_task(symbol, market_analysis, risk_assessment)
    return aggregate_final_result_task(symbol, market_analysis, risk_assessment, trading_decision)
```

Each approach demonstrates different paradigms for building enterprise AI workflows where reliability, auditability, and domain expertise matter more than raw speed.
When you run `python prompt_chaining.py`, it generates Mermaid diagrams showing the workflow structure for each approach:
```mermaid
flowchart TD
    A[Start Trading Workflow] --> B[Initialize TradingPromptChain Class]
    B --> C[Step 1: Market Analysis]
    C --> D{Gate 1: Confidence > 0.4?}
    D -->|No| E[Terminate: Low Confidence]
    D -->|Yes| F[Step 2: Risk Assessment]
    F --> G{Gate 2: Confidence > 0.4?}
    G -->|No| H[Terminate: Low Risk Confidence]
    G -->|Yes| I[Step 3: Trading Decision]
    I --> J{Gate 3: Confidence > 0.4?}
    J -->|No| K[Terminate: Low Decision Confidence]
    J -->|Yes| L[Cross-Step Validation]
    L --> M[Generate Final Result]
    M --> N[Complete Workflow]

    style A fill:#e1f5fe
    style N fill:#e8f5e8
    style E fill:#ffebee
    style H fill:#ffebee
    style K fill:#ffebee
```
```mermaid
flowchart TD
    A[Start StateGraph Workflow] --> B[Initialize TradingWorkflowState]
    B --> C[analyze_market]
    C --> D[Update State: market_analysis]
    D --> E{Check Errors in State}
    E -->|Has Errors| F[Workflow Failed]
    E -->|No Errors| G[assess_risk]
    G --> H[Update State: risk_assessment]
    H --> I{CONDITIONAL EDGE: Risk Level?}
    I -->|HIGH/EXTREME| J[additional_risk_review]
    I -->|LOW/MEDIUM| K[make_decision]
    J --> L[Apply Risk Controls & Reduce Position]
    L --> K
    K --> M[Update State: trading_decision]
    M --> N{Check Errors in State}
    N -->|Has Errors| O[Workflow Failed]
    N -->|No Errors| P[validate_results]
    P --> Q[Generate Final Result in State]
    Q --> R[Return Final State]

    subgraph "StateGraph Nodes"
        C
        G
        J
        K
        P
    end

    subgraph "Conditional Routing"
        I
    end

    style A fill:#e1f5fe
    style R fill:#e8f5e8
    style F fill:#ffebee
    style O fill:#ffebee
    style I fill:#fff3e0
    style J fill:#ffecb3
```
```mermaid
flowchart TD
    A[Start @task Functional Workflow] --> B[analyze_market_task]
    B --> C[MarketAnalysis Result]
    C --> D{Confidence Gate}
    D -->|No| E[Raise ValueError]
    D -->|Yes| F[assess_risk_task]
    F --> G[RiskAssessment Result]
    G --> H{Confidence Gate}
    H -->|No| I[Raise ValueError]
    H -->|Yes| J[make_trading_decision_task]
    J --> K[TradingDecision Result]
    K --> L{Confidence Gate}
    L -->|No| M[Raise ValueError]
    L -->|Yes| N[aggregate_final_result_task]
    N --> O[Cross-Task Validation]
    O --> P[TradingWorkflowResult]
    P --> Q[Return Final Result]

    subgraph "@task Functions"
        B
        F
        J
        N
    end

    style A fill:#e1f5fe
    style Q fill:#e8f5e8
    style E fill:#ffebee
    style I fill:#ffebee
    style M fill:#ffebee
```
The tool calling script demonstrates foundational patterns for LangGraph workflows:
- Multi-tool Orchestration - Coordinating multiple tools in agent workflows
- Structured Analysis - Type-safe analysis of tool results
- Business Logic Integration - Real-world workflow patterns
- Error Handling & Validation - Production-ready error management
- Confidence Scoring - Building reliable AI agents
- Workflow State Management - Foundation for complex agent systems
The prompt chaining scripts demonstrate three distinct implementation approaches:

Procedural:

- Class-Based Architecture - Object-oriented workflow management
- Manual Orchestration - Direct control over step execution and validation
- Custom Validation Gates - Programmatic checks between each step
- Error Handling - Explicit exception management and workflow termination

Graph API:

- StateGraph Architecture - Typed state management with `TradingWorkflowState`
- Node-Based Design - Each workflow step as a graph node
- Conditional Edge Routing - Risk-based branching with `add_conditional_edges()`
- State Persistence - Automatic state passing between nodes
- Built-in Error Collection - Graph-level error aggregation
- Advanced Validation - Cross-step coherence checks within the graph

Functional API:

- Pure Functional Design - Each step as a stateless `@task` function
- Type-Safe Composition - Clear input/output types for data flow
- Conditional Task Execution - Risk-based branching within functional workflow
- Functional Orchestration - Task composition with validation gates
- Performance Optimization - LangGraph can optimize task execution
- High Testability - Each `@task` independently unit testable (sketched below)
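For instance, the confidence-gate logic can be exercised without any API call if it is factored into a plain helper. A hedged pytest-style sketch (the helper name is hypothetical, not necessarily how prompt_chaining_functional.py is organized):

```python
# Hypothetical helper and test; relies on the MarketAnalysis defaults shown above.
import pytest

def check_confidence_gate(analysis: MarketAnalysis, threshold: float = 0.4) -> None:
    if analysis.confidence_score < threshold:
        raise ValueError("Market analysis confidence below threshold")

def test_gate_rejects_low_confidence():
    low_confidence = MarketAnalysis(confidence_score=0.2)
    with pytest.raises(ValueError):
        check_confidence_gate(low_confidence)
```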
The Graph API implementation features sophisticated conditional edge routing that demonstrates enterprise-grade workflow control:
```python
def route_after_risk_assessment(state: TradingWorkflowState) -> str:
    risk_level = state["risk_assessment"].risk_level
    if risk_level in [RiskLevel.HIGH, RiskLevel.EXTREME]:
        return "additional_risk_review"  # Route to risk controls
    else:
        return "make_decision"  # Skip directly to trading decision

# Add conditional edge to workflow
workflow.add_conditional_edges(
    "assess_risk",
    route_after_risk_assessment,
    {
        "additional_risk_review": "additional_risk_review",
        "make_decision": "make_decision"
    }
)
```

Risk-based controls:

- HIGH Risk Trades: Position size reduced by 30% or capped at 5%
- EXTREME Risk Trades: Position size capped at 2% + compliance approval required
- LOW/MEDIUM Risk: Skip additional review, proceed directly to trading decision
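The review node might apply those controls roughly as follows (a sketch assuming `TradingWorkflowState` is a dict-like state; the real node in prompt_chaining.py may differ):

```python
# Hedged sketch of the additional_risk_review node's position controls.
def additional_risk_review_node(state: TradingWorkflowState) -> TradingWorkflowState:
    risk = state["risk_assessment"]
    if risk.risk_level == RiskLevel.EXTREME:
        # Cap at 2% of portfolio and flag for compliance approval
        risk.position_size_recommendation = min(risk.position_size_recommendation, 0.02)
        state.setdefault("warnings", []).append("EXTREME risk: compliance approval required")
    elif risk.risk_level == RiskLevel.HIGH:
        # Reduce by 30%, and never exceed 5% of portfolio
        reduced = risk.position_size_recommendation * 0.7
        risk.position_size_recommendation = min(reduced, 0.05)
        state.setdefault("warnings", []).append("HIGH risk: position size reduced")
    return state
```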
```
MEDIUM Risk:  8% position → direct to decision     (0 warnings)
HIGH Risk:    15% → 5% position reduction          (1 warning)
EXTREME Risk: 2% position cap + compliance         (3 warnings)
```
This demonstrates how LangGraph conditional edges enable sophisticated business logic where workflow paths adapt dynamically based on data conditions - essential for enterprise AI applications.
The @task Functional API also implements conditional logic, but using programmatic branching within the @entrypoint function:
```python
@entrypoint(checkpointer=MemorySaver())
def trading_workflow_functional(inputs: Dict[str, Any]) -> TradingWorkflowResult:
    # Step 1: Market Analysis
    market_analysis = analyze_market_task(symbol, market_data).result()

    # Step 2: Risk Assessment
    risk_assessment = assess_risk_task(symbol, market_analysis, portfolio_info).result()

    # CONDITIONAL LOGIC: Additional risk review for HIGH/EXTREME risk
    if risk_assessment.risk_level in [RiskLevel.HIGH, RiskLevel.EXTREME]:
        print(f"Conditional Routing: {risk_assessment.risk_level} risk - triggering additional review")
        risk_assessment = additional_risk_review_task(symbol, risk_assessment).result()
    else:
        print(f"Conditional Routing: {risk_assessment.risk_level} risk - proceeding directly to decision")

    # Step 3: Trading Decision (using potentially updated risk assessment)
    trading_decision = make_trading_decision_task(symbol, market_analysis, risk_assessment).result()
```

Conditional routing comparison:

| Approach | Implementation | Benefits | Use Cases |
|---|---|---|---|
| Graph API | `add_conditional_edges()` with routing functions | Visual workflow representation, built-in state management | Complex multi-path workflows, debugging, visualization |
| Functional API | Programmatic if/else within `@entrypoint` | Direct control, simple to understand, easy testing | Linear workflows with occasional branching, performance-critical paths |
Both approaches achieve the same risk-based controls but with different paradigms for enterprise workflow design.
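Invoking the functional workflow is then a single call; because the entrypoint was built with a checkpointer, LangGraph expects a `thread_id` in the config. A minimal sketch, with illustrative input values:

```python
# Minimal invocation sketch; the input keys mirror the example above.
config = {"configurable": {"thread_id": "trade-aapl-001"}}

result = trading_workflow_functional.invoke(
    {"symbol": "AAPL", "market_data": market_data, "portfolio_info": portfolio_info},
    config=config,
)
print(result)  # TradingWorkflowResult
```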
Enterprise Patterns Demonstrated:
- Sequential Processing - Multi-step LLM workflows with structured handoffs
- Dynamic Routing - Conditional edges based on business logic and data conditions
- Domain Specialization - Each step optimized for specific expertise areas
- Quality Gates - Validation checkpoints preventing error propagation
- Cross-Step Validation - Logical consistency checks across workflow stages
- Modular Architecture - Independent steps that can be modified/replaced
- Audit Trails - Complete visibility into decision-making process
| Pattern | Best For | Strengths | Use Cases |
|---|---|---|---|
| Tool Calling | Multi-domain data gathering | Parallel execution, rich context | Research, analysis, data aggregation |
| Prompt Chaining (Procedural) | Custom workflow control | Direct control, explicit validation | Custom business logic, legacy integration |
| Prompt Chaining (Graph API) | Dynamic conditional workflows | Conditional routing, state management, error collection | Risk-based decisions, complex multi-step processes, debugging |
| Prompt Chaining (@task Functional) | Pure functional workflows | Performance, testability, reusability | Microservices, functional programming, scaling |
Choose Procedural When:
- Need direct control over workflow execution
- Integrating with existing object-oriented systems
- Custom validation logic required
- Legacy system compatibility needed

Choose Graph API When:

- Complex state management required
- Need built-in error collection and debugging
- Workflow visualization important
- Dynamic routing or conditional logic needed
- Risk-based or data-driven workflow branching required
- Enterprise compliance workflows with automated controls

Choose @task Functional When:

- Performance and scalability critical
- High testability requirements
- Functional programming paradigm preferred
- Microservices architecture
- Task reusability across workflows
All patterns are foundational for LangGraph where agents need reliable, structured workflows for production applications.
This project was created using Claude Code, demonstrating:
- AI-Assisted Development - Rapid prototyping of AI agent systems
- Best Practices - Production-ready patterns and architectures
- Comprehensive Examples - Real-world scenarios and use cases
- Modern AI Stack - LangChain, LangGraph, Claude Sonnet integration
This project provides the foundation for building more complex LangGraph workflows:
- State Machines - Complex agent state management
- Multi-Agent Systems - Coordinating multiple AI agents
- Conditional Routing - Dynamic workflow branching
- Memory & Context - Persistent agent memory systems
- Human-in-the-Loop - Interactive agent workflows
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Claude Code for AI-assisted development
- LangChain for the amazing framework
- LangGraph for agent workflow orchestration
- Anthropic for Claude Sonnet
- Pydantic for data validation
- LangChain Documentation
- LangGraph Documentation
- Claude Code Documentation
- Anthropic API Documentation
- Pydantic Documentation
Ready to build AI agents? Start with these examples and scale to production LangGraph workflows!