A comprehensive FastAPI-based backend service for multi-modal stress detection using physiological signals, psychological questionnaires (DASS-21), and voice analysis. The system employs Explainable AI (XAI) techniques to provide interpretable stress predictions.
- Multi-Modal Stress Detection: Combines physiological, psychological, and voice data
- Explainable AI (XAI): SHAP and LIME-based explanations for predictions
- Late Fusion Architecture: Intelligent combination of multiple modalities
- Real-time Processing: Fast inference with optimized feature extraction
- RESTful API: Easy integration with frontend applications
- Docker Support: Containerized deployment ready
- Physiological Signals (ECG, EDA, EMG, Temperature)
- Psychological Assessment (DASS-21 Questionnaire)
- Voice Analysis (Optional audio processing)
```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Physiological  │    │     DASS-21     │    │      Voice      │
│   Data (CSV)    │    │    Responses    │    │  Probabilities  │
└────────┬────────┘    └────────┬────────┘    └────────┬────────┘
         │                      │                      │
         ▼                      ▼                      ▼
┌──────────────────────────────────────────────────────────────┐
│                      Feature Extraction                      │
│  • Time-domain features (mean, std, skew, kurtosis)          │
│  • Frequency-domain features (power spectral density)        │
│  • Wavelet features (multi-resolution analysis)              │
│  • ECG-specific features (RR intervals, heart rate)          │
└──────────────────────────────────────────────────────────────┘
         │                      │                      │
         ▼                      ▼                      ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Physiological  │    │     DASS-21     │    │      Voice      │
│      Model      │    │      Model     │    │      Model      │
└────────┬────────┘    └────────┬────────┘    └────────┬────────┘
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                ▼
                  ┌──────────────────────────┐
                  │     Late Fusion Model    │
                  │  (PhysioDominantFusion)  │
                  └────────────┬─────────────┘
                               ▼
                  ┌──────────────────────────┐
                  │     XAI Explanations     │
                  │       (SHAP + LIME)      │
                  └────────────┬─────────────┘
                               ▼
                  ┌──────────────────────────┐
                  │     Stress Prediction    │
                  │    (Low/Medium/High)     │
                  └──────────────────────────┘
```
- Python 3.10+
- pip
- Docker (optional)
- Clone the repository

  ```bash
  git clone <repository-url>
  cd Safespace_fastapi
  ```

- Create a virtual environment

  ```bash
  python -m venv newenv

  # On Windows
  newenv\Scripts\activate

  # On macOS/Linux
  source newenv/bin/activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Run the application

  ```bash
  uvicorn main:app --host 0.0.0.0 --port 8080 --reload
  ```

For Docker deployment:

- Build the Docker image

  ```bash
  docker build -t safespace-api .
  ```

- Run the container

  ```bash
  docker run -p 8080:8080 safespace-api
  ```
POST /predict

Combines physiological data, DASS-21 responses, and optional voice probabilities to predict stress levels.

- Content-Type: multipart/form-data
- Parameters:
  - physiological_file: CSV file with physiological data
  - dass21_responses: DASS-21 responses (comma-separated or JSON)
  - voice_probabilities: voice probabilities (optional, comma-separated or JSON)

Example request:

```bash
curl -X POST "http://localhost:8080/predict" \
  -H "Content-Type: multipart/form-data" \
  -F "physiological_file=@data.csv" \
  -F "dass21_responses=1,2,3,1,2,3,1" \
  -F "voice_probabilities=0.2,0.5,0.3"
```

Example response:

```json
{
  "prediction": {
    "stress_level": "Medium",
    "confidence": 0.85,
    "probabilities": {
      "low": 0.15,
      "medium": 0.85,
      "high": 0.00
    }
  },
  "explanations": {
    "physiological": {
      "available": true,
      "method": "SHAP",
      "feature_importance": [
        {
          "feature": "ECG_mean_rr",
          "importance": 0.25,
          "abs_importance": 0.25
        }
      ],
      "summary": "ECG heart rate variability is the most important factor..."
    },
    "dass21": {
      "available": true,
      "method": "SHAP",
      "feature_importance": [
        {
          "feature": "DASS21_Q3_positive_feelings",
          "importance": -0.30,
          "abs_importance": 0.30
        }
      ],
      "summary": "Positive feelings score significantly influences the prediction..."
    },
    "fusion": {
      "available": true,
      "method": "Late Fusion",
      "modality_contributions": {
        "physiological": 0.60,
        "dass21": 0.25,
        "voice": 0.15
      },
      "summary": "Physiological signals contribute most to the final prediction..."
    }
  },
  "processing_info": {
    "windows_processed": 10,
    "features_extracted": 180,
    "processing_time_ms": 245
  }
}
```

Project structure:

```
Safespace_fastapi/
├── main.py                     # Main FastAPI application
├── latefusion_final.py         # Late fusion model implementation
├── predict_wesad.py            # WESAD dataset prediction utilities
├── predict_physiological.py    # Physiological data processing
├── run_wesad_prediction.py     # WESAD prediction runner
├── v1.py                       # Version 1 API endpoints
├── requirements.txt            # Python dependencies
├── Dockerfile                  # Docker configuration
├── scaler.pkl                  # Feature scaler
├── models/                     # Trained model files
│   ├── fusion_model.pkl
│   ├── lateFusion.pkl
│   ├── regularized_global_model.pkl
│   ├── stacking_classifier_model.pkl
│   ├── scaler.pkl
│   └── Voice.h5
└── newenv/                     # Virtual environment (ignored by git)
```
Windowing configuration:

```python
CFG = {
    "orig_fs": 700,      # Original sampling frequency (Hz)
    "fs": 100,           # Target sampling frequency (Hz)
    "window_sec": 10,    # Window size in seconds
    "stride_sec": 5,     # Stride between windows
    "sensors": ["ECG", "EDA", "EMG", "Temp"]  # Supported sensors
}
```

Modality fusion weights:

```python
mod_weights = {
    'phys': 0.60,   # Physiological signals (60%)
    'text': 0.25,   # DASS-21 responses (25%)
    'voice': 0.15   # Voice analysis (15%)
}
```

Expected columns for each sensor:

- ECG: Raw ECG signal values
- EDA: Electrodermal activity values
- EMG: Electromyography values
- Temp: Temperature values
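Given per-modality class probabilities, the weighted late-fusion step can be sketched as follows. The probability vectors below are made up for illustration, and the actual PhysioDominantFusion model may combine modalities differently; this only shows the weighted scheme implied by `mod_weights`:

```python
import numpy as np

# Hypothetical per-modality probabilities for [low, medium, high]
p_phys = np.array([0.10, 0.70, 0.20])   # physiological model output
p_text = np.array([0.30, 0.50, 0.20])   # DASS-21 model output
p_voice = np.array([0.20, 0.50, 0.30])  # voice model output

weights = {"phys": 0.60, "text": 0.25, "voice": 0.15}

# Weighted late fusion: convex combination of the modality probabilities
fused = (weights["phys"] * p_phys
         + weights["text"] * p_text
         + weights["voice"] * p_voice)
fused /= fused.sum()  # renormalize (a convex combination already sums to 1)

labels = ["Low", "Medium", "High"]
prediction = labels[int(np.argmax(fused))]  # -> "Medium" for these inputs
```

Because the weights sum to 1, the fused vector is itself a valid probability distribution, so no separate calibration step is needed for this scheme.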
Seven questions, each answered on a 0-3 scale:
- 0: Did not apply to me at all
- 1: Applied to me to some degree, or some of the time
- 2: Applied to me to a considerable degree, or a good part of the time
- 3: Applied to me very much, or most of the time
Three probability values for stress levels:
[low_prob, medium_prob, high_prob]
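Both `dass21_responses` and `voice_probabilities` are accepted as comma-separated strings or JSON arrays. A minimal parsing and validation sketch (the helper name and range checks are illustrative, not the server's actual implementation):

```python
import json

def parse_values(raw, n, lo, hi):
    """Parse a comma-separated or JSON-array string into n bounded floats."""
    raw = raw.strip()
    if raw.startswith("["):
        values = [float(v) for v in json.loads(raw)]
    else:
        values = [float(v) for v in raw.split(",")]
    if len(values) != n:
        raise ValueError(f"expected {n} values, got {len(values)}")
    if any(not (lo <= v <= hi) for v in values):
        raise ValueError(f"values must lie in [{lo}, {hi}]")
    return values

dass21 = parse_values("1,2,3,1,2,3,1", n=7, lo=0, hi=3)
voice = parse_values("[0.2, 0.5, 0.3]", n=3, lo=0.0, hi=1.0)
```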
- Time-domain features:
  - Statistical measures (mean, std, variance, skewness, kurtosis)
  - Range measures (min, max, peak-to-peak)
  - Percentiles (25th, 75th, median)
- Frequency-domain features:
  - Power spectral density in different bands
  - Frequency statistics (mean, std, peak frequency)
- Wavelet features:
  - Multi-resolution analysis using PyWavelets
  - Detail coefficients (d1-d4) and approximation coefficients (a4)
- ECG-specific features:
  - RR intervals and heart rate variability
  - Heart rate statistics
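The time-domain group above maps directly onto NumPy/SciPy calls. A sketch for one signal window (the function name and toy signal are illustrative, not the repository's exact feature code):

```python
import numpy as np
from scipy import stats

def time_domain_features(x):
    """Compute the time-domain features listed above for one window."""
    x = np.asarray(x, dtype=float)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "var": x.var(),
        "skew": stats.skew(x),
        "kurtosis": stats.kurtosis(x),
        "min": x.min(),
        "max": x.max(),
        "ptp": np.ptp(x),                 # peak-to-peak range
        "p25": np.percentile(x, 25),
        "median": np.median(x),
        "p75": np.percentile(x, 75),
    }

# Toy 10-second window at 100 Hz: two full sine periods
window = np.sin(np.linspace(0, 4 * np.pi, 1000))
feats = time_domain_features(window)
```

The frequency-domain and wavelet groups would follow the same per-window pattern, e.g. via `scipy.signal.welch` and `pywt.wavedec`.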
- SHAP (SHapley Additive exPlanations): For feature importance analysis
- LIME (Local Interpretable Model-agnostic Explanations): For local explanations
- Permutation Importance: For model-agnostic feature ranking
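Of the three methods, permutation importance is the simplest to reproduce. A self-contained sketch on synthetic data (the toy model and features stand in for the project's trained models, which are loaded from `models/`):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: the label depends mostly on feature 0
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature column and measure the drop in accuracy
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]  # most important first
```

Features whose shuffling barely changes accuracy (here, the pure-noise columns 2-4) receive near-zero importance, which is the model-agnostic ranking referred to above.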
- Inference Time: ~250ms per prediction
- Feature Extraction: ~180 features per window
- Window Processing: 10-second windows with 5-second stride
- Memory Usage: ~500MB (including models)
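The window and stride settings determine how many windows (and hence how many feature vectors) a recording yields; a short sketch of that arithmetic, using the configured defaults:

```python
def num_windows(n_samples, fs=100, window_sec=10, stride_sec=5):
    """Count sliding windows over a signal of n_samples at fs Hz."""
    win = window_sec * fs       # samples per window
    stride = stride_sec * fs    # samples between window starts
    if n_samples < win:
        return 0
    return (n_samples - win) // stride + 1

# A 60-second recording at 100 Hz yields 11 overlapping 10 s windows
count = num_windows(60 * 100)
```

So the "10 windows processed" in the sample response corresponds to roughly a 55-second usable recording at the 100 Hz target rate.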
Once the server is running, visit:
- Interactive API Docs: http://localhost:8080/docs
- ReDoc Documentation: http://localhost:8080/redoc
- OpenAPI Schema: http://localhost:8080/openapi.json
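Besides curl, the endpoint can be called programmatically. A minimal client sketch (the helper names and file path are illustrative; sending the request requires the third-party `requests` package and a running server):

```python
API_URL = "http://localhost:8080/predict"

def build_payload(dass21, voice=None):
    """Assemble the form fields /predict accepts, as comma-separated strings."""
    data = {"dass21_responses": ",".join(str(v) for v in dass21)}
    if voice is not None:
        data["voice_probabilities"] = ",".join(f"{p:.2f}" for p in voice)
    return data

def predict(csv_path, dass21, voice=None):
    """POST a physiological CSV plus questionnaire answers; return the JSON reply."""
    import requests  # imported here so build_payload works without the dependency
    with open(csv_path, "rb") as f:
        resp = requests.post(API_URL,
                             files={"physiological_file": f},
                             data=build_payload(dass21, voice))
    resp.raise_for_status()
    return resp.json()

payload = build_payload([1, 2, 3, 1, 2, 3, 1], voice=[0.2, 0.5, 0.3])
```

`predict("sample_data.csv", [1, 2, 3, 1, 2, 3, 1])` would then return the response dictionary documented above.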
```python
# Sample DASS-21 responses
dass21_responses = "1,2,3,1,2,3,1"

# Sample voice probabilities
voice_probabilities = "0.2,0.5,0.3"
```

Test with sample data:

```bash
curl -X POST "http://localhost:8080/predict" \
  -H "Content-Type: multipart/form-data" \
  -F "physiological_file=@sample_data.csv" \
  -F "dass21_responses=1,2,3,1,2,3,1"
```

To contribute:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- SafeSpace Team - Initial work
- WESAD dataset for physiological data
- DASS-21 questionnaire for psychological assessment
- SHAP and LIME libraries for explainable AI
- FastAPI for the web framework
For support and questions:
- Create an issue in the repository
- Contact the development team
- Check the API documentation at
/docs
Note: This API is designed for research and educational purposes. For clinical applications, additional validation and medical certification may be required.