SafeSpace Stress Detection API

A comprehensive FastAPI-based backend service for multi-modal stress detection using physiological signals, psychological questionnaires (DASS-21), and voice analysis. The system employs Explainable AI (XAI) techniques to provide interpretable stress predictions.

🚀 Features

  • Multi-Modal Stress Detection: Combines physiological, psychological, and voice data
  • Explainable AI (XAI): SHAP and LIME-based explanations for predictions
  • Late Fusion Architecture: Intelligent combination of multiple modalities
  • Real-time Processing: Fast inference with optimized feature extraction
  • RESTful API: Easy integration with frontend applications
  • Docker Support: Containerized deployment ready

📊 Supported Modalities

  1. Physiological Signals (ECG, EDA, EMG, Temperature)
  2. Psychological Assessment (DASS-21 Questionnaire)
  3. Voice Analysis (Optional audio processing)

πŸ—οΈ Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Physiological │    │   DASS-21       │    │   Voice         │
│   Data (CSV)    │    │   Responses     │    │   Probabilities │
└─────────┬───────┘    └─────────┬───────┘    └─────────┬───────┘
          │                      │                      │
          ▼                      ▼                      ▼
┌─────────────────────────────────────────────────────────────────┐
│                    Feature Extraction                           │
│  • Time-domain features (mean, std, skew, kurtosis)             │
│  • Frequency-domain features (power spectral density)           │
│  • Wavelet features (multi-resolution analysis)                 │
│  • ECG-specific features (RR intervals, heart rate)             │
└─────────────────────────────────────────────────────────────────┘
          │                      │                      │
          ▼                      ▼                      ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│ Physiological   │    │ DASS-21         │    │ Voice           │
│ Model           │    │ Model           │    │ Model           │
└─────────┬───────┘    └─────────┬───────┘    └─────────┬───────┘
          │                      │                      │
          └──────────────────────┼──────────────────────┘
                                 ▼
                    ┌─────────────────────────┐
                    │   Late Fusion Model     │
                    │  (PhysioDominantFusion) │
                    └─────────────┬───────────┘
                                  ▼
                    ┌─────────────────────────┐
                    │   XAI Explanations      │
                    │  (SHAP + LIME)          │
                    └─────────────┬───────────┘
                                  ▼
                    ┌─────────────────────────┐
                    │   Stress Prediction     │
                    │  (Low/Medium/High)      │
                    └─────────────────────────┘

🛠️ Installation

Prerequisites

  • Python 3.10+
  • pip
  • Docker (optional)

Local Setup

  1. Clone the repository

    git clone <repository-url>
    cd Safespace_fastapi
  2. Create virtual environment

    python -m venv newenv
    # On Windows
    newenv\Scripts\activate
    # On macOS/Linux
    source newenv/bin/activate
  3. Install dependencies

    pip install -r requirements.txt
  4. Run the application

    uvicorn main:app --host 0.0.0.0 --port 8080 --reload

Docker Setup

  1. Build the Docker image

    docker build -t safespace-api .
  2. Run the container

    docker run -p 8080:8080 safespace-api

📡 API Endpoints

Main Prediction Endpoint

POST /predict

Combines physiological data, DASS-21 responses, and optional voice probabilities to predict stress levels.

Request Format

  • Content-Type: multipart/form-data
  • Parameters:
    • physiological_file: CSV file with physiological data
    • dass21_responses: DASS-21 responses (comma-separated or JSON)
    • voice_probabilities: Voice probabilities (optional, comma-separated or JSON)

Example Request

curl -X POST "http://localhost:8080/predict" \
  -H "Content-Type: multipart/form-data" \
  -F "physiological_file=@data.csv" \
  -F "dass21_responses=1,2,3,1,2,3,1" \
  -F "voice_probabilities=0.2,0.5,0.3"

Response Format

{
  "prediction": {
    "stress_level": "Medium",
    "confidence": 0.85,
    "probabilities": {
      "low": 0.15,
      "medium": 0.85,
      "high": 0.00
    }
  },
  "explanations": {
    "physiological": {
      "available": true,
      "method": "SHAP",
      "feature_importance": [
        {
          "feature": "ECG_mean_rr",
          "importance": 0.25,
          "abs_importance": 0.25
        }
      ],
      "summary": "ECG heart rate variability is the most important factor..."
    },
    "dass21": {
      "available": true,
      "method": "SHAP",
      "feature_importance": [
        {
          "feature": "DASS21_Q3_positive_feelings",
          "importance": -0.30,
          "abs_importance": 0.30
        }
      ],
      "summary": "Positive feelings score significantly influences the prediction..."
    },
    "fusion": {
      "available": true,
      "method": "Late Fusion",
      "modality_contributions": {
        "physiological": 0.60,
        "dass21": 0.25,
        "voice": 0.15
      },
      "summary": "Physiological signals contribute most to the final prediction..."
    }
  },
  "processing_info": {
    "windows_processed": 10,
    "features_extracted": 180,
    "processing_time_ms": 245
  }
}

πŸ“ Project Structure

Safespace_fastapi/
├── main.py                  # Main FastAPI application
├── latefusion_final.py      # Late fusion model implementation
├── predict_wesad.py         # WESAD dataset prediction utilities
├── predict_physiological.py # Physiological data processing
├── run_wesad_prediction.py  # WESAD prediction runner
├── v1.py                    # Version 1 API endpoints
├── requirements.txt         # Python dependencies
├── Dockerfile               # Docker configuration
├── scaler.pkl               # Feature scaler
├── models/                  # Trained model files
│   ├── fusion_model.pkl
│   ├── lateFusion.pkl
│   ├── regularized_global_model.pkl
│   ├── stacking_classifier_model.pkl
│   ├── scaler.pkl
│   └── Voice.h5
└── newenv/                  # Virtual environment (ignored by git)

🔧 Configuration

Signal Processing Parameters

CFG = {
    "orig_fs": 700,        # Original sampling frequency
    "fs": 100,             # Target sampling frequency
    "window_sec": 10,      # Window size in seconds
    "stride_sec": 5,       # Stride between windows
    "sensors": ["ECG", "EDA", "EMG", "Temp"]  # Supported sensors
}
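
To make these numbers concrete, here is a minimal sketch of the resampling and sliding-window segmentation they describe, assuming a NumPy/SciPy pipeline (the function name segment_signal is illustrative; see predict_physiological.py for the real processing):

import numpy as np
from scipy.signal import resample_poly

def segment_signal(signal, cfg=None):
    """Resample to the target rate, then cut into overlapping windows."""
    cfg = cfg or {"orig_fs": 700, "fs": 100, "window_sec": 10, "stride_sec": 5}
    # Polyphase resampling: 700 Hz -> 100 Hz (ratio 100/700 = 1/7)
    sig = resample_poly(np.asarray(signal, dtype=float),
                        up=cfg["fs"], down=cfg["orig_fs"])
    win = cfg["window_sec"] * cfg["fs"]    # 10 s x 100 Hz = 1000 samples
    step = cfg["stride_sec"] * cfg["fs"]   # 5 s x 100 Hz = 500 samples
    return np.array([sig[i:i + win]
                     for i in range(0, len(sig) - win + 1, step)])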

Fusion Weights

mod_weights = {
    'phys': 0.60,    # Physiological signals (60%)
    'text': 0.25,    # DASS-21 responses (25%)
    'voice': 0.15    # Voice analysis (15%)
}
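
Conceptually, the late fusion step reduces to a weighted average of per-modality class probabilities. A minimal sketch under that assumption (the actual PhysioDominantFusion in latefusion_final.py may weight or gate modalities differently):

import numpy as np

def fuse_probabilities(phys, text, voice=None, weights=None):
    """Weighted average of per-modality [low, medium, high] vectors."""
    weights = weights or {"phys": 0.60, "text": 0.25, "voice": 0.15}
    fused = weights["phys"] * np.asarray(phys) + weights["text"] * np.asarray(text)
    used = weights["phys"] + weights["text"]
    if voice is not None:                    # the voice modality is optional
        fused += weights["voice"] * np.asarray(voice)
        used += weights["voice"]
    return fused / used  # renormalize so the result stays a valid distribution

For example, fuse_probabilities([0.1, 0.7, 0.2], [0.3, 0.5, 0.2], [0.2, 0.5, 0.3]) gives roughly [0.17, 0.62, 0.22], a medium-dominated distribution; dividing by the sum of the weights actually used keeps the output valid when voice is absent.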

📊 Data Formats

Physiological Data (CSV)

Expected columns for each sensor:

  • ECG: Raw ECG signal values
  • EDA: Electrodermal activity values
  • EMG: Electromyography values
  • Temp: Temperature values
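
To exercise the endpoint without real recordings, the following snippet fabricates a correctly shaped CSV from random noise (synthetic and not physiologically meaningful; the columns follow the list above):

import numpy as np
import pandas as pd

n = 700 * 60  # one minute of samples at the original 700 Hz rate
rng = np.random.default_rng(0)
pd.DataFrame({
    "ECG":  rng.normal(0.0, 0.1, n),    # placeholder raw ECG signal
    "EDA":  rng.normal(2.0, 0.05, n),   # placeholder electrodermal activity
    "EMG":  rng.normal(0.0, 0.02, n),   # placeholder muscle activity
    "Temp": rng.normal(33.0, 0.1, n),   # placeholder skin temperature
}).to_csv("sample_data.csv", index=False)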

DASS-21 Responses

Seven question responses (the DASS-21 stress subscale), each scored 0-3:

  • 0: Did not apply to me at all
  • 1: Applied to me to some degree, or some of the time
  • 2: Applied to me to a considerable degree, or a good part of the time
  • 3: Applied to me very much, or most of the time

Voice Probabilities (Optional)

Three probability values for stress levels:

  • [low_prob, medium_prob, high_prob]
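
Because both dass21_responses and voice_probabilities accept either a comma-separated string or a JSON array, parsing can be normalized in a few lines; a sketch of one way to do it (illustrative; see main.py for the actual handling):

import json

def parse_scores(raw: str) -> list[float]:
    """Accept '1,2,3,...' or '[1, 2, 3]' and return a list of floats."""
    raw = raw.strip()
    if raw.startswith("["):                      # JSON array form
        return [float(x) for x in json.loads(raw)]
    return [float(x) for x in raw.split(",")]    # comma-separated form

assert parse_scores("1,2,3,1,2,3,1") == parse_scores("[1, 2, 3, 1, 2, 3, 1]")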

🧠 Model Details

Feature Extraction

  1. Time-domain Features (see the sketch after this list):

    • Statistical measures (mean, std, variance, skewness, kurtosis)
    • Range measures (min, max, peak-to-peak)
    • Percentiles (25th, 75th, median)
  2. Frequency-domain Features:

    • Power spectral density in different bands
    • Frequency statistics (mean, std, peak frequency)
  3. Wavelet Features:

    • Multi-resolution analysis using PyWavelets
    • Detail coefficients (d1-d4) and approximation coefficients (a4)
  4. ECG-specific Features:

    • RR intervals and heart rate variability
    • Heart rate statistics
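
As a concrete example of item 1, a minimal per-window time-domain extractor with NumPy/SciPy (feature names are illustrative; the full extractor also produces the frequency, wavelet, and ECG features listed above):

import numpy as np
from scipy.stats import kurtosis, skew

def time_domain_features(window: np.ndarray, sensor: str) -> dict:
    """Statistical, range, and percentile features for a single window."""
    return {
        f"{sensor}_mean": np.mean(window),
        f"{sensor}_std": np.std(window),
        f"{sensor}_var": np.var(window),
        f"{sensor}_skew": skew(window),
        f"{sensor}_kurtosis": kurtosis(window),
        f"{sensor}_min": np.min(window),
        f"{sensor}_max": np.max(window),
        f"{sensor}_ptp": np.ptp(window),            # peak-to-peak range
        f"{sensor}_p25": np.percentile(window, 25),
        f"{sensor}_median": np.median(window),
        f"{sensor}_p75": np.percentile(window, 75),
    }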

XAI Implementation

  • SHAP (SHapley Additive exPlanations): For feature importance analysis
  • LIME (Local Interpretable Model-agnostic Explanations): For local explanations
  • Permutation Importance: For model-agnostic feature ranking
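
A hedged sketch of how the SHAP and permutation-importance pieces can be wired up for a scikit-learn-style classifier (model, X_bg, X_test, and y_test are placeholders; the LIME path via lime.lime_tabular is analogous):

import shap
from sklearn.inspection import permutation_importance

# model: a fitted classifier exposing predict_proba
# X_bg: a small background sample; X_test, y_test: evaluation data
explainer = shap.KernelExplainer(model.predict_proba, X_bg)
shap_values = explainer.shap_values(X_test)  # per-class feature attributions

perm = permutation_importance(model, X_test, y_test,
                              n_repeats=10, random_state=0)
ranking = perm.importances_mean.argsort()[::-1]  # most important feature first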

🚀 Performance

  • Inference Time: ~250ms per prediction
  • Feature Extraction: ~180 features per window
  • Window Processing: 10-second windows with 5-second stride
  • Memory Usage: ~500MB (including models)

πŸ” API Documentation

Once the server is running, visit:

  • Swagger UI: http://localhost:8080/docs
  • ReDoc: http://localhost:8080/redoc

🧪 Testing

Example Data

# Sample DASS-21 responses
dass21_responses = "1,2,3,1,2,3,1"

# Sample voice probabilities
voice_probabilities = "0.2,0.5,0.3"

Testing with curl

# Test with sample data
curl -X POST "http://localhost:8080/predict" \
  -H "Content-Type: multipart/form-data" \
  -F "physiological_file=@sample_data.csv" \
  -F "dass21_responses=1,2,3,1,2,3,1"

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

👥 Authors

  • SafeSpace Team - Initial work

πŸ™ Acknowledgments

  • WESAD dataset for physiological data
  • DASS-21 questionnaire for psychological assessment
  • SHAP and LIME libraries for explainable AI
  • FastAPI for the web framework

📞 Support

For support and questions:

  • Create an issue in the repository
  • Contact the development team
  • Check the API documentation at /docs

Note: This API is designed for research and educational purposes. For clinical applications, additional validation and medical certification may be required.
