ML Explainable AI Framework for Crypto Trading Bot v5.0

Overview

A comprehensive, enterprise-grade Explainable AI (XAI) framework designed specifically for crypto trading models. It provides deep insight into ML model decisions, ensuring transparency, regulatory compliance, and interpretability in high-stakes financial environments.

Key Features

Core Explainers

  • SHAP Explainer: SHapley Additive exPlanations with enterprise caching and async processing
  • LIME Explainer: Local Interpretable Model-agnostic Explanations for individual predictions
  • Counterfactual Explainer: What-if scenario analysis with trading constraints
  • Anchor Explainer: Rule-based explanations with high-precision decision boundaries

Advanced Analysis

  • Feature Importance Analysis: Multi-method consensus ranking with trading-specific categorization
  • Decision Path Analysis: Tree-based model interpretation with risk path identification
  • Sensitivity Analysis: Model robustness and stability assessment
  • Interactive Dashboards: Real-time explanation visualization with trading focus
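
For example, a consensus ranking from the FeatureImportanceAnalyzer (imported in Basic Usage below) might look like the following minimal sketch; the analyze method, its parameters, and the top_features accessor are assumptions, not documented API.

# Sketch: multi-method consensus ranking (method names below are hypothetical)
from src.analysis import FeatureImportanceAnalyzer

analyzer = FeatureImportanceAnalyzer(model=your_trading_model)
ranking = analyzer.analyze(
    training_data,
    feature_names=['price', 'volume', 'rsi'],
    methods=['shap', 'permutation', 'gain']  # hypothetical consensus methods
)
print(ranking.top_features(5))  # hypothetical accessor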

Enterprise Features

  • Enterprise Integration: Built on enterprise-grade architectural patterns
  • Async Processing: High-performance explanations for real-time trading
  • Caching System: Intelligent explanation caching for performance optimization
  • Compliance Tracking: Regulatory compliance monitoring and reporting
  • Multi-Model Support: Works with all major ML frameworks

Quick Start

Installation

# Install the package
cd packages/ml-explainable-ai
pip install -e .

# Or using the project scripts
pnpm install
pnpm python:install

Basic Usage

from src.explainers import SHAPExplainer, LIMEExplainer
from src.analysis import FeatureImportanceAnalyzer
from src.visualization import ExplanationDashboard

# Initialize SHAP explainer
shap_explainer = SHAPExplainer(model=your_trading_model)
shap_explainer.fit(training_data, feature_names=['price', 'volume', 'rsi'])

# Generate explanation
explanation = shap_explainer.explain(
    instance_data,
    symbol='BTCUSDT'
)

# Analyze results
print(f"Trade Signal: {explanation.trade_signal}")
print(f"Confidence: {explanation.prediction_confidence:.3f}")
print(f"Top Features: {explanation.feature_names[:5]}")

Dashboard Usage

# Start interactive dashboard
from src.visualization.explanation_dashboard import CryptoTradingExplanationDashboard

dashboard = CryptoTradingExplanationDashboard(port=8050)
dashboard.add_explanation(explanation, 'shap')
dashboard.run()  # Access at http://localhost:8050

Core Components

Explainers (src/explainers/)

  • shap_explainer.py - SHAP-based global and local explanations
  • lime_explainer.py - Model-agnostic local explanations
  • counterfactual_explainer.py - Counterfactual scenario analysis
  • anchor_explainer.py - High-precision rule extraction

Analysis (src/analysis/)

  • feature_importance.py - Multi-method feature importance analysis
  • decision_paths.py - Tree-based decision path extraction
  • sensitivity_analysis.py - Model stability and robustness analysis

Visualization (src/visualization/)

  • explanation_dashboard.py - Interactive Dash-based dashboard
  • interactive_plots.py - Advanced plotly visualizations

Reporting (src/reporting/)

  • explanation_reporter.py - Automated explanation reports
  • compliance_reporter.py - Regulatory compliance documentation

Dashboard Features

Real-time Monitoring

  • Live explanation updates
  • Trading signal analysis
  • Model performance tracking
  • Risk assessment visualization

Interactive Analysis

  • Feature importance heatmaps
  • Decision tree visualization
  • Counterfactual scenario exploration
  • Anchor rule inspection

Trading-Specific Views

  • Symbol-based filtering
  • Signal distribution analysis
  • Risk level categorization
  • Market condition correlation

Configuration

SHAP Configuration

from src.explainers.shap_explainer import SHAPConfig

config = SHAPConfig(
    explainer_type='tree',  # 'tree', 'deep', 'linear', 'kernel'
    max_evals=100,
    cache_explanations=True,
    enable_gpu=True,
    parallel_workers=4
)

LIME Configuration

from src.explainers.lime_explainer import LIMEConfig

config = LIMEConfig(
    num_features=10,
    num_samples=1000,
    discretize_continuous=True,
    cache_explanations=True
)
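
With the two config objects above (renamed shap_config and lime_config here for clarity), a plausible way to attach them is at explainer construction; the config keyword is an assumption, not documented above.

# Sketch: passing configs at construction (the `config` keyword is assumed)
shap_explainer = SHAPExplainer(model=your_trading_model, config=shap_config)
lime_explainer = LIMEExplainer(model=your_trading_model, config=lime_config)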

Architecture

Enterprise Patterns

  • Async-First Design: All operations support async/await
  • Enterprise Caching: Redis-compatible caching layer
  • Monitoring Integration: Structured logging and metrics
  • Security-First: Input validation and audit trails
  • Scalability: Horizontal scaling support

Trading-Specific Features

  • Market Constraint Preservation: Trading-aware feature bounds
  • Signal Interpretation: Automatic buy/sell/hold mapping
  • Risk Assessment: Multi-level risk categorization
  • Performance Attribution: P&L explanation capabilities
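
As an illustration of market constraint preservation, a counterfactual query might bound how far each feature is allowed to move. This is a sketch only: CounterfactualConfig appears in the API Reference below, but the feature_bounds parameter and the explain signature shown here are hypothetical.

# Hypothetical sketch: counterfactuals under trading-aware feature bounds
from src.explainers import CounterfactualExplainer
from src.explainers.counterfactual_explainer import CounterfactualConfig

cf_config = CounterfactualConfig(
    feature_bounds={'price': (0.95, 1.05), 'volume': (0.5, 2.0)}  # hypothetical parameter
)
cf_explainer = CounterfactualExplainer(model=your_trading_model, config=cf_config)
counterfactual = cf_explainer.explain(instance_data, target_signal='hold')  # hypothetical kwarg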

Explanation Types

Global Explanations

  • Overall model behavior analysis
  • Feature importance rankings
  • Decision boundary visualization
  • Model complexity metrics

Local Explanations

  • Individual prediction breakdown
  • Feature contribution analysis
  • Counterfactual scenarios
  • Decision rule extraction
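
Building on the Basic Usage example, a local breakdown can be inspected per feature. The feature_names attribute is shown above; the feature_contributions attribute used here is an assumption.

# Sketch: per-feature contribution breakdown (feature_contributions is assumed)
for name, value in zip(explanation.feature_names, explanation.feature_contributions):
    print(f"{name}: {value:+.4f}")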

Trading Explanations

  • Trade signal justification
  • Risk factor identification
  • Market regime analysis
  • Strategy attribution

Compliance & Security

Regulatory Features

  • Model transparency documentation
  • Decision audit trails
  • Bias detection and reporting
  • Fairness metric calculation
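
A compliance report might be generated from the reporting module (src/reporting/ above). The class name ComplianceReporter and the methods below are assumptions inferred from the compliance_reporter.py filename, not documented API.

# Hypothetical sketch: producing an audit-ready compliance report
from src.reporting.compliance_reporter import ComplianceReporter

reporter = ComplianceReporter()
reporter.add_explanation(explanation)            # assumed API
report = reporter.generate_report(period='24h')  # assumed API
report.save('compliance_report.pdf')             # assumed API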

Security Features

  • Input sanitization
  • Explanation validation
  • Access control integration
  • Encrypted cache storage

Testing

# Run all tests
pnpm test

# Python tests only
pnpm python:test

# With coverage
pnpm python:coverage

# Specific test categories
pytest tests/test_explainers.py -v
pytest tests/test_analysis.py -v

Performance

Optimization Features

  • Intelligent explanation caching
  • Async batch processing
  • GPU acceleration support
  • Memory-efficient algorithms
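
Async batch processing can be built on explain_async (shown in the integration example below) with standard asyncio fan-out. Only explain_async appears elsewhere in this README; the batching helper itself is a sketch.

import asyncio

# Sketch: fan out explanations concurrently with asyncio.gather
async def explain_batch(instances, explainer):
    tasks = [explainer.explain_async(x) for x in instances]
    return await asyncio.gather(*tasks)

explanations = asyncio.run(explain_batch(batch_data, shap_explainer))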

Benchmarks

  • SHAP explanations: ~50ms per instance
  • LIME explanations: ~200ms per instance
  • Dashboard response: <100ms
  • Batch processing: 1000+ explanations/minute

Integration Examples

With Trading Models

# XGBoost integration
import xgboost as xgb
model = xgb.XGBClassifier()
explainer = SHAPExplainer(model)

# Random Forest integration
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
explainer = LIMEExplainer(model)

# Neural Network integration
import torch.nn as nn
model = nn.Sequential(...)  # define your network layers here
explainer = CounterfactualExplainer(model)

With Trading Infrastructure

# Real-time integration
async def explain_trade_decision(order_data, model):
    explainer = SHAPExplainer(model)
    explanation = await explainer.explain_async(
        order_data,
        symbol=order_data['symbol']
    )

    # Log for compliance
    await log_explanation(explanation)

    return explanation

API Reference

Core Classes

  • SHAPExplainer: SHAP-based explanations
  • LIMEExplainer: LIME-based explanations
  • CounterfactualExplainer: Counterfactual analysis
  • AnchorExplainer: Rule-based explanations
  • FeatureImportanceAnalyzer: Feature analysis
  • DecisionPathAnalyzer: Decision tree analysis

Configuration Classes

  • SHAPConfig: SHAP explainer configuration
  • LIMEConfig: LIME explainer configuration
  • CounterfactualConfig: Counterfactual configuration
  • AnchorConfig: Anchor explainer configuration

Result Classes

  • SHAPExplanation: SHAP results with metadata
  • LIMEExplanation: LIME results with metadata
  • CounterfactualExplanation: Counterfactual results
  • AnchorExplanation: Anchor rule results

Troubleshooting

Common Issues

High Memory Usage

# Solution: Enable caching and limit batch sizes
config.cache_explanations = True
config.batch_size = 32

Slow Explanations

# Solution: Use background processing
explanation = await explainer.explain_async(data)

Dashboard Not Loading

# Solution: Check port availability
dashboard = ExplanationDashboard(port=8051)  # Try different port

Roadmap

Upcoming Features

  • Deep learning explainer improvements
  • Advanced counterfactual algorithms
  • Multi-model comparison tools
  • Automated report generation
  • Enhanced compliance features

Performance Improvements

  • GPU acceleration for LIME
  • Distributed explanation processing
  • Advanced caching strategies
  • Real-time streaming support

Contributing

This framework is part of the Crypto Trading Bot v5.0 enterprise system. For contributions:

  1. Follow the established architectural patterns
  2. Maintain async-first design principles
  3. Include comprehensive tests
  4. Update documentation
  5. Ensure trading-specific features work correctly

License

MIT License - see the LICENSE file for details.

Support

For technical support and feature requests, please use the project's issue tracking system.


Built for transparent and explainable crypto trading AI systems
