
LangGraphing πŸ¦œπŸ”—πŸ€–

Created with Claude Code - a comprehensive Python project demonstrating LangChain and LangGraph integration with Claude Sonnet, featuring AI agents, workflows, structured output, and advanced tool-calling capabilities.

πŸš€ Features

  • AI Agents & Workflows - Built with LangChain/LangGraph for intelligent agent development
  • Claude Sonnet Integration - Leveraging Anthropic's most capable model
  • Structured Output - Pydantic models for type-safe, structured responses
  • Tool Calling - Advanced examples of Claude calling external tools
  • Workflow Orchestration - Multi-step agent workflows and business logic
  • Production-Ready Patterns - Real-world examples for AI agent development

πŸ“‹ What You'll Learn

  1. LangChain Foundation - Core concepts for building AI applications
  2. LangGraph Workflows - Creating complex agent workflows and state management
  3. Claude Integration - Advanced patterns with Anthropic's Claude Sonnet
  4. Structured Output - Type-safe responses using Pydantic models
  5. Tool Orchestration - Building agents that can use external tools
  6. Agent Architecture - Designing scalable AI agent systems
  7. Production Patterns - Error handling, validation, and monitoring

πŸ› οΈ Setup

Prerequisites

  • Python 3.8+
  • Anthropic API key

Installation

  1. Clone the repository:

    git clone https://github.com/yourusername/langgraphing.git
    cd langgraphing
  2. Install dependencies:

    pip install -r requirements.txt
  3. Set up environment variables:

    cp .env.example .env
    # Edit .env and add your ANTHROPIC_API_KEY

🎯 Usage

Basic Chat Example

python simple_invoke.py

Simple example showing basic Claude integration and foundation concepts.

Structured Output Example

python structured_search.py

Demonstrates how to get structured JSON responses using Pydantic models - perfect for building reliable AI agents.

Tool Calling & Workflows Example

python tool_calling.py

Advanced example showing Claude calling tools and providing structured analysis - the foundation for LangGraph workflows.

Prompt Chaining Trading Workflow

python prompt_chaining.py

Demonstrates two different approaches to multi-step prompt chaining for financial trading decisions:

  • Procedural: Class-based workflow with manual orchestration
  • LangGraph Graph API: StateGraph with node-based architecture

python prompt_chaining_functional.py

Demonstrates @entrypoint/@task functional approach using LangGraph decorators for workflow orchestration with proper execution context.

πŸ“ Project Structure

langgraphing/
β”œβ”€β”€ simple_invoke.py              # Basic Claude integration
β”œβ”€β”€ structured_search.py          # Structured output with Pydantic
β”œβ”€β”€ tool_calling.py              # Tool calling + agent workflows
β”œβ”€β”€ prompt_chaining.py           # Procedural + Graph API trading workflows
β”œβ”€β”€ prompt_chaining_functional.py # @entrypoint/@task functional approach
β”œβ”€β”€ tools_models.py              # Tool definitions with schemas
β”œβ”€β”€ search_models.py             # Pydantic models for structured data
β”œβ”€β”€ requirements.txt             # Python dependencies
β”œβ”€β”€ .env.example                # Environment variables template
β”œβ”€β”€ CLAUDE.md                   # Claude Code guidance
└── README.md                   # This file

πŸ”§ Available Tools

The project includes three example tools:

Calculator Tool

  • Purpose: Mathematical calculations
  • Input: Mathematical expressions
  • Output: Structured results with validation

Weather Tool

  • Purpose: Weather information and forecasts
  • Input: Location and number of days
  • Output: Current weather and forecast data

Database Tool

  • Purpose: Mock database queries
  • Input: Table name, filters, and limits
  • Output: Structured query results with metadata
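For a flavor of what such a tool can look like internally, here is a stdlib-only sketch of a calculator-style tool that returns a structured result (illustrative only; the project's actual tools live in tools_models.py and use LangChain's @tool decorator):

```python
import ast
import operator

# Supported binary operators for safe arithmetic evaluation
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator_tool(expression: str) -> dict:
    """Safely evaluate a basic arithmetic expression; return a structured result."""
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    try:
        result = ev(ast.parse(expression, mode="eval").body)
        return {"expression": expression, "result": result, "ok": True}
    except Exception as exc:
        return {"expression": expression, "error": str(exc), "ok": False}

print(calculator_tool("25 * 150 * 3"))
# {'expression': '25 * 150 * 3', 'result': 11250, 'ok': True}
```

Parsing with `ast` instead of `eval` keeps arbitrary code out of the tool, which matters once a model is choosing the inputs.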

πŸ“š Key Concepts Demonstrated

1. Basic Integration

import os

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-5-sonnet-20241022",
    api_key=os.getenv("ANTHROPIC_API_KEY")
)
response = model.invoke("Hello, Claude!")

2. Structured Output

from pydantic import BaseModel

class MyResponse(BaseModel):
    answer: str
    confidence: float

structured_model = model.with_structured_output(MyResponse)
response = structured_model.invoke("What is 2+2?")

3. Tool Calling

from langchain_core.tools import tool

@tool
def my_tool(input: str) -> str:
    """Description of what the tool does."""
    return f"Tool result: {input}"

model_with_tools = model.bind_tools([my_tool])
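Note that bind_tools only lets the model request calls; your code still executes them. A minimal dispatch loop, sketched here with plain dicts in the name/args/id shape LangChain uses for `response.tool_calls`, might look like:

```python
def dispatch_tool_calls(tool_calls, registry):
    """Run each requested tool and pair its output with the call id."""
    results = []
    for call in tool_calls:
        fn = registry[call["name"]]  # look up the tool by name
        results.append({"id": call["id"], "output": fn(**call["args"])})
    return results

# Stand-in registry; real code maps names to @tool-decorated functions
registry = {"echo": lambda text: f"Tool result: {text}"}
calls = [{"name": "echo", "args": {"text": "hello"}, "id": "call_1"}]

print(dispatch_tool_calls(calls, registry))
# [{'id': 'call_1', 'output': 'Tool result: hello'}]
```

In the real scripts the outputs would go back to the model as tool messages; the loop structure is the same.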

πŸ”„ Tool Calling Workflow Deep Dive

The tool_calling.py script demonstrates a powerful two-phase workflow pattern:

πŸ—οΈ Architecture Overview

User Task β†’ Claude Analyzes β†’ Tool Calls β†’ Tool Results β†’ Structured Analysis

Key Components:

  • Dual Model Setup: One for tool calling, one for structured output
  • Pydantic Schemas: Type-safe tool results and analysis
  • Intelligent Orchestration: Claude selects and sequences tools automatically

🎯 Two-Phase Workflow Pattern

Phase 1: Tool Execution

# Claude decides which tools to call
model_with_tools = model.bind_tools([calculator_tool, weather_tool, database_query_tool])
response = model_with_tools.invoke("Plan a company retreat with costs and weather")

# Tools are executed automatically
# - weather_tool.invoke({"location": "San Francisco", "days": 5})
# - calculator_tool.invoke({"expression": "25 * 150 * 3"})
# - database_query_tool.invoke({"table": "users", "filters": {"role": "admin"}})

Phase 2: Structured Analysis

# Structured analysis of all tool results
class ToolAnalysisResult(BaseModel):
    task_description: str
    tools_used: List[str]
    tool_results: List[Dict[str, Any]]
    analysis: str
    confidence: float
    recommendations: List[str]

structured_model = model.with_structured_output(ToolAnalysisResult)
final_analysis = structured_model.invoke(f"Analyze these tool results: {tool_results}")

πŸ’‘ Key Learning Points

  1. Intelligent Tool Selection: Claude automatically chooses relevant tools
  2. Sequential Execution: Tools are called in logical order
  3. Structured Reliability: All outputs follow Pydantic schemas
  4. Business Logic Integration: Combines multiple tool results into actionable insights
  5. Error Handling: Graceful handling of tool failures with fallback strategies
  6. Confidence Scoring: Provides reliability metrics for business decisions

🎯 Real-World Example

Task: "Plan company retreat for 25 employees"
↓
Claude Analysis: Need weather, costs, and admin info
↓
Tools Called:
- Weather: San Francisco, 5 days β†’ "Sunny, 22Β°C average"
- Calculator: 25 Γ— $150 Γ— 3 days β†’ "$11,250 total"
- Database: Admin users β†’ "3 admin users found"
↓
Structured Output:
{
  "analysis": "Weather favorable, budget reasonable for 25 employees",
  "confidence": 0.85,
  "recommendations": ["Book outdoor venues", "Budget $12K total"],
  "follow_up_questions": ["Dietary restrictions?", "Team building activities?"]
}

This pattern is foundational for LangGraph workflows where agents need to orchestrate multiple tools and provide reliable, structured outputs for business applications.

πŸ”— Prompt Chaining Deep Dive

The prompt_chaining.py script demonstrates advanced prompt chaining - a powerful pattern where complex tasks are decomposed into sequential LLM calls, with each step processing the structured output of the previous step.

🧠 What is Prompt Chaining?

Traditional Approach (Single Prompt):

"Analyze AAPL stock and make a trading decision considering market conditions, risk factors, and portfolio allocation"
β†’ Single massive prompt trying to do everything at once
β†’ Often inconsistent, hard to validate, difficult to debug

Prompt Chaining Approach:

Step 1: "Analyze AAPL market conditions" β†’ MarketAnalysis
Step 2: "Assess risk given this market analysis" β†’ RiskAssessment  
Step 3: "Make trading decision based on analysis + risk" β†’ TradingDecision
β†’ Specialized prompts with structured handoffs
β†’ Consistent, validatable, modular workflow

πŸ—οΈ Architecture: Financial Trading Chain

Market Data Input β†’ πŸ” Market Analysis β†’ ⚠️ Risk Assessment β†’ πŸ’Ό Trading Decision β†’ πŸ“Š Final Result
                     ↓ validate        ↓ validate         ↓ validate
                   Gate 1             Gate 2             Gate 3

🎯 The Three-Step Workflow

Step 1: Market Analysis πŸ”

class MarketAnalysis(BaseModel):
    symbol: str = "AAPL"
    current_price: float = 150.25
    price_trend: str = "Upward momentum with strong volume"
    volume_analysis: str = "85M shares vs 65M average"
    technical_indicators: Dict[str, Any] = {
        "rsi_14": 68.5, "macd": 1.25, "sma_20": 148.75
    }
    market_sentiment: str = "Bullish with technology sector strength"
    confidence_score: float = 0.85

Specialized Prompt Focus:

  • Technical indicator analysis (RSI, MACD, Bollinger Bands)
  • Volume and momentum assessment
  • Market sentiment evaluation
  • Price trend identification

Step 2: Risk Assessment ⚠️

class RiskAssessment(BaseModel):
    symbol: str = "AAPL"
    risk_level: RiskLevel = "MEDIUM"
    volatility_score: float = 0.65
    market_conditions: str = "Stable with moderate volatility"
    position_size_recommendation: float = 0.08  # 8% of portfolio
    stop_loss_level: float = 145.00
    risk_factors: List[str] = [
        "Tech sector concentration risk",
        "Earnings volatility near resistance"
    ]
    confidence_score: float = 0.78

Risk-Specific Analysis:

  • Takes MarketAnalysis as input
  • Evaluates portfolio-specific risk factors
  • Calculates position sizing based on risk tolerance
  • Sets stop-loss levels using technical analysis

Step 3: Trading Decision πŸ’Ό

class TradingDecision(BaseModel):
    symbol: str = "AAPL"
    action: TradingAction = "BUY"
    entry_price: float = 150.25
    target_price: float = 155.00
    stop_loss: float = 145.00
    position_size: float = 0.08
    reasoning: str = "Strong technical setup with controlled risk"
    expected_return: float = 0.032  # 3.2%
    time_horizon: str = "2-4 weeks"
    confidence_score: float = 0.82

Decision Synthesis:

  • Combines MarketAnalysis + RiskAssessment
  • Makes actionable BUY/SELL/HOLD decision
  • Sets specific entry, target, and stop prices
  • Provides detailed reasoning and return expectations
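As a sanity check, the expected_return in the example model is consistent with its entry and target prices:

```python
entry_price, target_price = 150.25, 155.00
expected_return = (target_price - entry_price) / entry_price
print(round(expected_return, 3))  # 0.032, i.e. ~3.2%
```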

πŸ›‘οΈ Programmatic Validation Gates

Between each step, the workflow includes validation checkpoints:

class TradingPromptChain:
    def execute_trading_workflow(self, symbol, market_data, portfolio_info):
        # Step 1: Market Analysis
        market_analysis = self.step1_market_analysis(symbol, market_data)
        
        # πŸšͺ Gate 1: Confidence Threshold
        if market_analysis.confidence_score < 0.4:
            raise ValueError("Market analysis confidence below threshold")
        
        # Step 2: Risk Assessment  
        risk_assessment = self.step2_risk_assessment(symbol, market_analysis, portfolio_info)
        
        # πŸšͺ Gate 2: Risk Validation
        if risk_assessment.confidence_score < 0.4:
            raise ValueError("Risk assessment confidence below threshold")
        
        # Step 3: Trading Decision
        trading_decision = self.step3_trading_decision(symbol, market_analysis, risk_assessment)
        
        # πŸšͺ Gate 3: Decision Validation  
        if trading_decision.confidence_score < 0.4:
            raise ValueError("Trading decision confidence below threshold")
        
        # πŸ” Cross-Step Coherence Validation
        warnings = self.validate_workflow_coherence(
            market_analysis, risk_assessment, trading_decision
        )
        
        return final_result

πŸ” Advanced Validation: Cross-Step Coherence

Beyond confidence scores, the workflow validates logical consistency:

def validate_workflow_coherence(self, market_analysis, risk_assessment, trading_decision):
    warnings = []
    
    # Risk-Action Alignment
    if risk_assessment.risk_level == "HIGH" and trading_decision.action == "BUY":
        if trading_decision.position_size > 0.05:  # 5%
            warnings.append("High risk trade with large position - reduce exposure")
    
    # Stop-Loss Consistency
    if abs(trading_decision.stop_loss - risk_assessment.stop_loss_level) > 0.1:
        warnings.append("Stop-loss levels inconsistent between steps")
    
    # Position Size Alignment
    if abs(trading_decision.position_size - risk_assessment.position_size_recommendation) > 0.05:
        warnings.append("Position size differs from risk recommendation")
    
    return warnings
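Because these checks are plain Python over the step outputs, they can be exercised with toy data; here SimpleNamespace stands in for the Pydantic models (a simplified subset of the checks above):

```python
from types import SimpleNamespace

def validate_coherence(risk, decision):
    """Simplified cross-step checks mirroring validate_workflow_coherence."""
    warnings = []
    # Risk-action alignment: large BUY under HIGH risk is flagged
    if (risk.risk_level == "HIGH" and decision.action == "BUY"
            and decision.position_size > 0.05):
        warnings.append("High risk trade with large position - reduce exposure")
    # Stop-loss consistency between risk and decision steps
    if abs(decision.stop_loss - risk.stop_loss_level) > 0.1:
        warnings.append("Stop-loss levels inconsistent between steps")
    return warnings

risk = SimpleNamespace(risk_level="HIGH", stop_loss_level=145.00)
decision = SimpleNamespace(action="BUY", position_size=0.08, stop_loss=144.50)
print(validate_coherence(risk, decision))  # both warnings fire
```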

πŸ’‘ Key Advantages of Prompt Chaining

  1. 🎯 Specialized Focus: Each prompt optimized for specific domain knowledge
  2. πŸ”— Structured Handoffs: Type-safe data flow between steps using Pydantic
  3. πŸ›‘οΈ Quality Gates: Validation prevents bad data from propagating downstream
  4. πŸ”§ Modularity: Can modify/replace individual steps independently
  5. πŸ“Š Confidence Tracking: Reliability metrics at each stage
  6. πŸ§ͺ Testability: Each step can be unit tested independently
  7. πŸ› Debuggability: Easy to identify which step failed and why

🎯 Real-World Example Output

{
  "symbol": "AAPL",
  "workflow_timestamp": "2024-01-15T14:30:00",
  "market_analysis": {
    "price_trend": "Bullish momentum above 20-day SMA",
    "technical_indicators": {"rsi_14": 68.5, "macd_signal": "bullish"},
    "confidence_score": 0.85
  },
  "risk_assessment": {
    "risk_level": "MEDIUM", 
    "position_size_recommendation": 0.08,
    "stop_loss_level": 145.00,
    "confidence_score": 0.78
  },
  "trading_decision": {
    "action": "BUY",
    "entry_price": 150.25,
    "target_price": 155.00,
    "reasoning": "Strong technical setup with controlled downside risk",
    "confidence_score": 0.82
  },
  "overall_confidence": 0.817,
  "warnings": []
}

πŸš€ When to Use Prompt Chaining

Perfect For:

  • βœ… Complex multi-domain decisions (finance, legal, medical)
  • βœ… Workflows requiring validation between steps
  • βœ… Business processes with compliance requirements
  • βœ… Tasks needing audit trails and explainability
  • βœ… Scenarios where partial failure should be handled gracefully

Not Ideal For:

  • ❌ Simple single-domain tasks
  • ❌ Creative tasks requiring holistic thinking
  • ❌ Real-time applications with latency constraints
  • ❌ Tasks where intermediate steps don't make sense

πŸ”§ Three Implementation Approaches

The script demonstrates the same trading workflow using three different patterns:

Procedural Approach πŸ—οΈ

class TradingPromptChain:
    def execute_trading_workflow(self, symbol, market_data, portfolio_info):
        # Manual step execution with validation gates
        market_analysis = self.step1_market_analysis(symbol, market_data)
        risk_assessment = self.step2_risk_assessment(symbol, market_analysis, portfolio_info)
        trading_decision = self.step3_trading_decision(symbol, market_analysis, risk_assessment)
        return self.aggregate_results(...)

LangGraph Graph API πŸ”—

workflow = StateGraph(TradingWorkflowState)
workflow.add_node("analyze_market", market_analysis_node)
workflow.add_node("assess_risk", risk_assessment_node)
workflow.add_node("make_decision", trading_decision_node)
workflow.add_edge("analyze_market", "assess_risk")
app = workflow.compile()
result = app.invoke(initial_state)

LangGraph @task Functional ⚑

@task
def analyze_market_task(symbol: str, market_data: Dict) -> MarketAnalysis:
    return market_analyzer.invoke(prompt)

@task  
def assess_risk_task(symbol: str, market_analysis: MarketAnalysis, portfolio_info: Dict) -> RiskAssessment:
    return risk_assessor.invoke(prompt)

# Functional composition
def execute_workflow(symbol, market_data, portfolio_info):
    market_analysis = analyze_market_task(symbol, market_data)
    risk_assessment = assess_risk_task(symbol, market_analysis, portfolio_info)
    trading_decision = make_trading_decision_task(symbol, market_analysis, risk_assessment)
    return aggregate_final_result_task(symbol, market_analysis, risk_assessment, trading_decision)

Each approach demonstrates different paradigms for building enterprise AI workflows where reliability, auditability, and domain expertise matter more than raw speed.

πŸ“Š Visual Workflow Diagrams

When you run python prompt_chaining.py, it generates Mermaid diagrams showing the workflow structure for each approach:

Procedural Workflow

flowchart TD
    A[πŸš€ Start Trading Workflow] --> B[πŸ“Š Initialize TradingPromptChain Class]
    B --> C[πŸ” Step 1: Market Analysis]
    C --> D{πŸšͺ Gate 1: Confidence > 0.4?}
    D -->|❌ No| E[❌ Terminate: Low Confidence]
    D -->|βœ… Yes| F[⚠️ Step 2: Risk Assessment]
    F --> G{πŸšͺ Gate 2: Confidence > 0.4?}
    G -->|❌ No| H[❌ Terminate: Low Risk Confidence]
    G -->|βœ… Yes| I[πŸ’Ό Step 3: Trading Decision]
    I --> J{πŸšͺ Gate 3: Confidence > 0.4?}
    J -->|❌ No| K[❌ Terminate: Low Decision Confidence]
    J -->|βœ… Yes| L[πŸ” Cross-Step Validation]
    L --> M[πŸ“Š Generate Final Result]
    M --> N[βœ… Complete Workflow]
    
    style A fill:#e1f5fe
    style N fill:#e8f5e8
    style E fill:#ffebee
    style H fill:#ffebee
    style K fill:#ffebee

Graph API Workflow with Conditional Edges

flowchart TD
    A[πŸš€ Start StateGraph Workflow] --> B[πŸ“Š Initialize TradingWorkflowState]
    B --> C[πŸ” analyze_market]
    C --> D[πŸ“ Update State: market_analysis]
    D --> E{πŸšͺ Check Errors in State}
    E -->|❌ Has Errors| F[❌ Workflow Failed]
    E -->|βœ… No Errors| G[⚠️ assess_risk]
    G --> H[πŸ“ Update State: risk_assessment]
    H --> I{πŸ”„ CONDITIONAL EDGE: Risk Level?}
    I -->|πŸ”΄ HIGH/EXTREME| J[🚨 additional_risk_review]
    I -->|🟒 LOW/MEDIUM| K[πŸ’Ό make_decision]
    J --> L[πŸ“ Apply Risk Controls & Reduce Position]
    L --> K[πŸ’Ό make_decision]
    K --> M[πŸ“ Update State: trading_decision]
    M --> N{πŸšͺ Check Errors in State}
    N -->|❌ Has Errors| O[❌ Workflow Failed]
    N -->|βœ… No Errors| P[πŸ” validate_results]
    P --> Q[πŸ“Š Generate Final Result in State]
    Q --> R[βœ… Return Final State]
    
    subgraph "StateGraph Nodes"
    C
    G
    J
    K
    P
    end
    
    subgraph "⭐ Conditional Routing"
    I
    end
    
    style A fill:#e1f5fe
    style R fill:#e8f5e8
    style F fill:#ffebee
    style O fill:#ffebee
    style I fill:#fff3e0
    style J fill:#ffecb3

Functional @task Workflow

flowchart TD
    A[πŸš€ Start @task Functional Workflow] --> B[πŸ” analyze_market_task]
    B --> C[πŸ“Š MarketAnalysis Result]
    C --> D{πŸšͺ Confidence Gate}
    D -->|❌ No| E[❌ Raise ValueError]
    D -->|βœ… Yes| F[⚠️ assess_risk_task]
    F --> G[πŸ“Š RiskAssessment Result]
    G --> H{πŸšͺ Confidence Gate}
    H -->|❌ No| I[❌ Raise ValueError]
    H -->|βœ… Yes| J[πŸ’Ό make_trading_decision_task]
    J --> K[πŸ“Š TradingDecision Result]
    K --> L{πŸšͺ Confidence Gate}
    L -->|❌ No| M[❌ Raise ValueError]
    L -->|βœ… Yes| N[πŸ“Š aggregate_final_result_task]
    N --> O[πŸ” Cross-Task Validation]
    O --> P[πŸ“Š TradingWorkflowResult]
    P --> Q[βœ… Return Final Result]
    
    subgraph "@task Functions"
    B
    F
    J
    N
    end
    
    style A fill:#e1f5fe
    style Q fill:#e8f5e8
    style E fill:#ffebee
    style I fill:#ffebee
    style M fill:#ffebee

🌟 Advanced Agent Patterns

Tool Calling Patterns (tool_calling.py)

The tool calling script demonstrates foundational patterns for LangGraph workflows:

  • Multi-tool Orchestration - Coordinating multiple tools in agent workflows
  • Structured Analysis - Type-safe analysis of tool results
  • Business Logic Integration - Real-world workflow patterns
  • Error Handling & Validation - Production-ready error management
  • Confidence Scoring - Building reliable AI agents
  • Workflow State Management - Foundation for complex agent systems

Prompt Chaining Patterns (prompt_chaining.py)

The prompt chaining script demonstrates three distinct implementation approaches:

1️⃣ Procedural Approach (demonstrate_trading_workflow_procedural)

  • Class-Based Architecture - Object-oriented workflow management
  • Manual Orchestration - Direct control over step execution and validation
  • Custom Validation Gates - Programmatic checks between each step
  • Error Handling - Explicit exception management and workflow termination

2️⃣ LangGraph Graph API (demonstrate_trading_workflow_graph)

  • StateGraph Architecture - Typed state management with TradingWorkflowState
  • Node-Based Design - Each workflow step as a graph node
  • Conditional Edge Routing - Risk-based branching with add_conditional_edges()
  • State Persistence - Automatic state passing between nodes
  • Built-in Error Collection - Graph-level error aggregation
  • Advanced Validation - Cross-step coherence checks within the graph

3️⃣ LangGraph @task Functional API (demonstrate_trading_workflow_functional)

  • Pure Functional Design - Each step as a stateless @task function
  • Type-Safe Composition - Clear input/output types for data flow
  • Conditional Task Execution - Risk-based branching within functional workflow
  • Functional Orchestration - Task composition with validation gates
  • Performance Optimization - LangGraph can optimize task execution
  • High Testability - Each @task independently unit testable

πŸ”„ Conditional Edge Routing

The Graph API implementation features sophisticated conditional edge routing that demonstrates enterprise-grade workflow control:

Risk-Based Branching Logic

def route_after_risk_assessment(state: TradingWorkflowState) -> str:
    risk_level = state["risk_assessment"].risk_level
    
    if risk_level in [RiskLevel.HIGH, RiskLevel.EXTREME]:
        return "additional_risk_review"  # Route to risk controls
    else:
        return "make_decision"  # Skip directly to trading decision

# Add conditional edge to workflow
workflow.add_conditional_edges(
    "assess_risk",
    route_after_risk_assessment,
    {
        "additional_risk_review": "additional_risk_review",
        "make_decision": "make_decision"
    }
)

Automated Risk Controls

  • HIGH Risk Trades: Position size reduced by 30% or capped at 5%
  • EXTREME Risk Trades: Position size capped at 2% + compliance approval required
  • LOW/MEDIUM Risk: Skip additional review, proceed directly to trading decision
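The controls reduce to a small pure function (numbers taken from the bullets above; the script's exact logic may differ):

```python
def apply_risk_controls(risk_level: str, position_size: float) -> float:
    """Clamp position size according to the assessed risk level."""
    if risk_level == "EXTREME":
        return min(position_size, 0.02)        # hard 2% cap (+ compliance approval)
    if risk_level == "HIGH":
        return min(position_size * 0.7, 0.05)  # 30% reduction, capped at 5%
    return position_size                       # LOW/MEDIUM: unchanged

print(apply_risk_controls("HIGH", 0.15))    # 0.05
print(apply_risk_controls("MEDIUM", 0.08))  # 0.08
```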

Real-World Results

MEDIUM Risk:  8% position β†’ direct to decision (0 warnings)
HIGH Risk:    15% position β†’ reduced to 5% (1 warning)
EXTREME Risk: position capped at 2% + compliance approval (3 warnings)

This demonstrates how LangGraph conditional edges enable sophisticated business logic where workflow paths adapt dynamically based on data conditions - essential for enterprise AI applications.

⚑ Functional Conditional Logic

The @task Functional API also implements conditional logic, but using programmatic branching within the @entrypoint function:

@entrypoint(checkpointer=MemorySaver())
def trading_workflow_functional(inputs: Dict[str, Any]) -> TradingWorkflowResult:
    # Step 1: Market Analysis
    market_analysis = analyze_market_task(symbol, market_data).result()
    
    # Step 2: Risk Assessment
    risk_assessment = assess_risk_task(symbol, market_analysis, portfolio_info).result()
    
    # CONDITIONAL LOGIC: Additional risk review for HIGH/EXTREME risk
    if risk_assessment.risk_level in [RiskLevel.HIGH, RiskLevel.EXTREME]:
        print(f"Conditional Routing: {risk_assessment.risk_level} risk - triggering additional review")
        risk_assessment = additional_risk_review_task(symbol, risk_assessment).result()
    else:
        print(f"Conditional Routing: {risk_assessment.risk_level} risk - proceeding directly to decision")
    
    # Step 3: Trading Decision (using potentially updated risk assessment)
    trading_decision = make_trading_decision_task(symbol, market_analysis, risk_assessment).result()

Functional vs Graph API Conditional Logic

| Approach | Implementation | Benefits | Use Cases |
|---|---|---|---|
| Graph API | add_conditional_edges() with routing functions | Visual workflow representation, built-in state management | Complex multi-path workflows, debugging, visualization |
| Functional API | Programmatic if/else within @entrypoint | Direct control, simple to understand, easy testing | Linear workflows with occasional branching, performance-critical paths |

Both approaches achieve the same risk-based controls but with different paradigms for enterprise workflow design.

Enterprise Patterns Demonstrated:

  • Sequential Processing - Multi-step LLM workflows with structured handoffs
  • Dynamic Routing - Conditional edges based on business logic and data conditions
  • Domain Specialization - Each step optimized for specific expertise areas
  • Quality Gates - Validation checkpoints preventing error propagation
  • Cross-Step Validation - Logical consistency checks across workflow stages
  • Modular Architecture - Independent steps that can be modified/replaced
  • Audit Trails - Complete visibility into decision-making process

πŸ”„ Pattern Comparison

| Pattern | Best For | Strengths | Use Cases |
|---|---|---|---|
| Tool Calling | Multi-domain data gathering | Parallel execution, rich context | Research, analysis, data aggregation |
| Prompt Chaining (Procedural) | Custom workflow control | Direct control, explicit validation | Custom business logic, legacy integration |
| Prompt Chaining (Graph API) | Dynamic conditional workflows | Conditional routing, state management, error collection | Risk-based decisions, complex multi-step processes, debugging |
| Prompt Chaining (@task Functional) | Pure functional workflows | Performance, testability, reusability | Microservices, functional programming, scaling |

πŸ“Š Implementation Approach Guide

Choose Procedural When:

  • βœ… Need direct control over workflow execution
  • βœ… Integrating with existing object-oriented systems
  • βœ… Custom validation logic required
  • βœ… Legacy system compatibility needed

Choose Graph API When:

  • βœ… Complex state management required
  • βœ… Need built-in error collection and debugging
  • βœ… Workflow visualization important
  • βœ… Dynamic routing or conditional logic needed
  • βœ… Risk-based or data-driven workflow branching required
  • βœ… Enterprise compliance workflows with automated controls

Choose @task Functional When:

  • βœ… Performance and scalability critical
  • βœ… High testability requirements
  • βœ… Functional programming paradigm preferred
  • βœ… Microservices architecture
  • βœ… Task reusability across workflows

All patterns are foundational for LangGraph where agents need reliable, structured workflows for production applications.

πŸ€– Built with Claude Code

This project was created using Claude Code, demonstrating:

  • AI-Assisted Development - Rapid prototyping of AI agent systems
  • Best Practices - Production-ready patterns and architectures
  • Comprehensive Examples - Real-world scenarios and use cases
  • Modern AI Stack - LangChain, LangGraph, Claude Sonnet integration

🎯 Next Steps for LangGraph

This project provides the foundation for building more complex LangGraph workflows:

  • State Machines - Complex agent state management
  • Multi-Agent Systems - Coordinating multiple AI agents
  • Conditional Routing - Dynamic workflow branching
  • Memory & Context - Persistent agent memory systems
  • Human-in-the-Loop - Interactive agent workflows

🀝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

πŸ”— Useful Links


Ready to build AI agents? Start with these examples and scale to production LangGraph workflows! πŸš€
