A comprehensive, enterprise-grade Explainable AI (XAI) framework designed for crypto trading models. It provides deep insight into ML model decisions, ensuring transparency, regulatory compliance, and interpretability in high-stakes financial environments.
- SHAP Explainer: SHapley Additive exPlanations with enterprise caching and async processing
- LIME Explainer: Local Interpretable Model-agnostic Explanations for individual predictions
- Counterfactual Explainer: What-if scenario analysis with trading constraints
- Anchor Explainer: Rule-based explanations with high-precision decision boundaries
- Feature Importance Analysis: Multi-method consensus ranking with trading-specific categorization
- Decision Path Analysis: Tree-based model interpretation with risk path identification
- Sensitivity Analysis: Model robustness and stability assessment
- Interactive Dashboards: Real-time explanation visualization with trading focus
- Enterprise Integration: Consistent, production-ready architectural patterns
- Async Processing: High-performance explanations for real-time trading
- Caching System: Intelligent explanation caching for performance optimization
- Compliance Tracking: Regulatory compliance monitoring and reporting
- Multi-Model Support: Works with all major ML frameworks
```bash
# Install the package
cd packages/ml-explainable-ai
pip install -e .

# Or use the project scripts
pnpm install
pnpm python:install
```
```python
from src.explainers import SHAPExplainer, LIMEExplainer
from src.analysis import FeatureImportanceAnalyzer
from src.visualization import ExplanationDashboard

# Initialize SHAP explainer
shap_explainer = SHAPExplainer(model=your_trading_model)
shap_explainer.fit(training_data, feature_names=['price', 'volume', 'rsi'])

# Generate explanation
explanation = shap_explainer.explain(
    instance_data,
    symbol='BTCUSDT'
)

# Analyze results
print(f"Trade Signal: {explanation.trade_signal}")
print(f"Confidence: {explanation.prediction_confidence:.3f}")
print(f"Top Features: {explanation.feature_names[:5]}")
```
```python
# Start the interactive dashboard
from src.visualization.explanation_dashboard import CryptoTradingExplanationDashboard

dashboard = CryptoTradingExplanationDashboard(port=8050)
dashboard.add_explanation(explanation, 'shap')
dashboard.run()  # Access at http://localhost:8050
```

**Explainers**
- `shap_explainer.py` - SHAP-based global and local explanations
- `lime_explainer.py` - Model-agnostic local explanations
- `counterfactual_explainer.py` - Counterfactual scenario analysis
- `anchor_explainer.py` - High-precision rule extraction
**Analysis**
- `feature_importance.py` - Multi-method feature importance analysis (see the sketch below)
- `decision_paths.py` - Tree-based decision path extraction
- `sensitivity_analysis.py` - Model stability and robustness analysis
**Visualization**
- `explanation_dashboard.py` - Interactive Dash-based dashboard
- `interactive_plots.py` - Advanced Plotly visualizations
**Reporting**
- `explanation_reporter.py` - Automated explanation reports
- `compliance_reporter.py` - Regulatory compliance documentation
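As a quick illustration of the analysis modules, here is a minimal sketch of a multi-method consensus ranking. The `fit`/`analyze` method names, the `methods` parameter, and the tuple-shaped result are assumptions modeled on the explainer API above, not confirmed signatures:

```python
from src.analysis import FeatureImportanceAnalyzer

# Hypothetical usage: rank features by consensus across several
# importance methods (method and parameter names are assumed).
analyzer = FeatureImportanceAnalyzer(model=your_trading_model)
analyzer.fit(training_data, feature_names=['price', 'volume', 'rsi'])
ranking = analyzer.analyze(methods=['shap', 'permutation', 'gain'])

for feature, score in ranking:  # assumed: iterable of (name, score) pairs
    print(f"{feature}: {score:.3f}")
```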
**Real-Time Monitoring**
- Live explanation updates
- Trading signal analysis
- Model performance tracking
- Risk assessment visualization

**Interactive Exploration**
- Feature importance heatmaps
- Decision tree visualization
- Counterfactual scenario exploration
- Anchor rule inspection

**Trading Analytics**
- Symbol-based filtering
- Signal distribution analysis
- Risk level categorization
- Market condition correlation
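To tie these features back to code, here is a minimal sketch of a dashboard session. It reuses the API from the quick start; the `'lime'` type tag and the `shap_explanation`/`lime_explanation` variables (assumed to come from the respective explainers) are assumptions:

```python
from src.visualization.explanation_dashboard import CryptoTradingExplanationDashboard

dashboard = CryptoTradingExplanationDashboard(port=8050)

# Register explanations from different explainers; the second argument
# tags the explanation type so the dashboard can filter by it.
dashboard.add_explanation(shap_explanation, 'shap')
dashboard.add_explanation(lime_explanation, 'lime')  # 'lime' tag is assumed

dashboard.run()  # Serves the dashboard at http://localhost:8050
```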
SHAP explainer configuration:

```python
from src.explainers.shap_explainer import SHAPConfig

config = SHAPConfig(
    explainer_type='tree',  # 'tree', 'deep', 'linear', 'kernel'
    max_evals=100,
    cache_explanations=True,
    enable_gpu=True,
    parallel_workers=4
)
```

LIME explainer configuration:

```python
from src.explainers.lime_explainer import LIMEConfig
config = LIMEConfig(
    num_features=10,
    num_samples=1000,
    discretize_continuous=True,
    cache_explanations=True
)
```

- Async-First Design: All operations support async/await
- Enterprise Caching: Redis-compatible caching layer
- Monitoring Integration: Structured logging and metrics
- Security-First: Input validation and audit trails
- Scalability: Horizontal scaling support
- Market Constraint Preservation: Trading-aware feature bounds
- Signal Interpretation: Automatic buy/sell/hold mapping
- Risk Assessment: Multi-level risk categorization
- Performance Attribution: P&L explanation capabilities
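As a sketch of the async-first design, several instances can be explained concurrently with `asyncio.gather`; this assumes the `explain_async(instance, symbol=...)` signature shown in the integration example later in this document:

```python
import asyncio

from src.explainers import SHAPExplainer

async def explain_batch(model, instances, symbol):
    explainer = SHAPExplainer(model)
    # Fan out one explanation task per instance and await them together.
    tasks = [explainer.explain_async(x, symbol=symbol) for x in instances]
    return await asyncio.gather(*tasks)

# instances: iterable of feature rows to explain (placeholder)
explanations = asyncio.run(explain_batch(your_trading_model, instances, 'BTCUSDT'))
```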
**Global Explanations**
- Overall model behavior analysis
- Feature importance rankings
- Decision boundary visualization
- Model complexity metrics
**Local Explanations**
- Individual prediction breakdown
- Feature contribution analysis
- Counterfactual scenarios
- Decision rule extraction
**Trading-Specific Insights**
- Trade signal justification
- Risk factor identification
- Market regime analysis
- Strategy attribution
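For example, a counterfactual query can show how far key features would need to move to flip a trade signal. A minimal sketch, assuming `CounterfactualExplainer` follows the same `explain` pattern as the SHAP explainer; the `constraints` parameter and the `feature_changes` attribute are assumptions, not confirmed API:

```python
from src.explainers import CounterfactualExplainer

explainer = CounterfactualExplainer(your_trading_model)

# Hypothetical trading-aware bounds that keep scenarios market-realistic
# (the 'constraints' parameter is assumed, not confirmed API).
cf = explainer.explain(
    instance_data,
    symbol='BTCUSDT',
    constraints={'price': (0.95, 1.05), 'volume': (0.5, 2.0)},
)
print(cf.feature_changes)  # assumed attribute: feature deltas that flip the signal
```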
**Compliance**
- Model transparency documentation
- Decision audit trails
- Bias detection and reporting
- Fairness metric calculation
**Security**
- Input sanitization
- Explanation validation
- Access control integration
- Encrypted cache storage
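A hypothetical sketch of how explanations might feed compliance reporting. The `ComplianceReporter` class, its import path, and every method below are assumptions inferred from `compliance_reporter.py`, not confirmed API:

```python
from src.reporting.compliance_reporter import ComplianceReporter  # assumed path and name

reporter = ComplianceReporter(output_dir='reports/')  # assumed constructor
reporter.add_explanation(explanation)                 # assumed: append to the audit trail
reporter.generate_report(period='2024-Q1')            # assumed: emit regulatory documentation
```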
```bash
# Run all tests
pnpm test

# Python tests only
pnpm python:test

# With coverage
pnpm python:coverage

# Specific test categories
pytest tests/test_explainers.py -v
pytest tests/test_analysis.py -v
```
- Intelligent explanation caching
- Async batch processing
- GPU acceleration support
- Memory-efficient algorithms
- SHAP explanations: ~50ms per instance
- LIME explanations: ~200ms per instance
- Dashboard response: <100ms
- Batch processing: 1000+ explanations/minute
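These optimizations map onto the configuration options shown earlier. For example, caching, GPU support, and parallel workers are all toggled on `SHAPConfig`; whether the GPU path applies depends on the underlying explainer type:

```python
from src.explainers.shap_explainer import SHAPConfig

# Performance-oriented configuration using the options documented above.
config = SHAPConfig(
    cache_explanations=True,  # reuse cached explanations for repeated instances
    enable_gpu=True,          # GPU acceleration where the explainer supports it
    parallel_workers=4,       # parallelize batch explanation work
)
```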
```python
# XGBoost integration
import xgboost as xgb

model = xgb.XGBClassifier()
explainer = SHAPExplainer(model)

# Random Forest integration
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier()
explainer = LIMEExplainer(model)

# Neural Network integration
import torch.nn as nn

model = nn.Sequential(...)
explainer = CounterfactualExplainer(model)
```

```python
# Real-time integration
async def explain_trade_decision(order_data, model):
    explainer = SHAPExplainer(model)
    explanation = await explainer.explain_async(
        order_data,
        symbol=order_data['symbol']
    )
    # Log for compliance
    await log_explanation(explanation)
    return explanation
```

**Core Classes**
- `SHAPExplainer`: SHAP-based explanations
- `LIMEExplainer`: LIME-based explanations
- `CounterfactualExplainer`: Counterfactual analysis
- `AnchorExplainer`: Rule-based explanations
- `FeatureImportanceAnalyzer`: Feature analysis
- `DecisionPathAnalyzer`: Decision tree analysis
**Configuration Classes**
- `SHAPConfig`: SHAP explainer configuration
- `LIMEConfig`: LIME explainer configuration
- `CounterfactualConfig`: Counterfactual configuration
- `AnchorConfig`: Anchor explainer configuration
**Explanation Results**
- `SHAPExplanation`: SHAP results with metadata
- `LIMEExplanation`: LIME results with metadata
- `CounterfactualExplanation`: Counterfactual results
- `AnchorExplanation`: Anchor rule results
**High Memory Usage**

```python
# Solution: Enable caching and limit batch sizes
config.cache_explanations = True
config.batch_size = 32
```

**Slow Explanations**
```python
# Solution: Use background processing
explanation = await explainer.explain_async(data)
```

**Dashboard Not Loading**
```python
# Solution: Check port availability
dashboard = ExplanationDashboard(port=8051)  # Try a different port
```

- Deep learning explainer improvements
- Advanced counterfactual algorithms
- Multi-model comparison tools
- Automated report generation
- Enhanced compliance features
- GPU acceleration for LIME
- Distributed explanation processing
- Advanced caching strategies
- Real-time streaming support
This framework is part of the Crypto Trading Bot v5.0 enterprise system. For contributions:
- Follow architectural patterns
- Maintain async-first design principles
- Include comprehensive tests
- Update documentation
- Ensure trading-specific features work correctly
MIT License - see LICENSE file for details.
For technical support, questions, and feature requests, please open an issue on the project's issue tracker.

Built for transparent and explainable crypto trading AI systems