Synheart Emotion

On-device emotion inference from biosignals (HR/RR) for Dart, Python, Kotlin, and Swift applications


Synheart Emotion is a comprehensive SDK ecosystem for inferring momentary emotions from biosignals (heart rate and RR intervals) directly on device, ensuring privacy and real-time performance.

🚀 Features

  • 📱 Multi-Platform: Dart/Flutter, Python, Kotlin, Swift
  • 🔄 Real-Time Inference: Live emotion detection from heart rate and RR intervals
  • 🧠 On-Device Processing: All computations happen locally for privacy
  • 📊 Unified API: Consistent API across all platforms
  • 🔒 Privacy-First: No raw biometric data leaves your device
  • ⚡ High Performance: < 5ms inference latency on mid-range devices
  • 🎓 Research-Based: Models trained on the WESAD dataset with 78% accuracy
  • 🧪 Thread-Safe: Concurrent data ingestion supported on all platforms

📦 SDKs

All SDKs provide identical functionality with platform-idiomatic APIs. Each SDK is maintained in its own repository:

Dart/Flutter SDK

dependencies:
  synheart_emotion: ^0.2.1

📖 Repository: synheart-emotion-dart

Python SDK

pip install synheart-emotion

📖 Repository: synheart-emotion-python

Kotlin SDK

dependencies {
    implementation("ai.synheart:emotion:0.1.0")
}

📖 Repository: synheart-emotion-kotlin

Swift SDK

Swift Package Manager:

dependencies: [
    .package(url: "https://github.com/synheart-ai/synheart-emotion-swift.git", from: "0.1.0")
]

CocoaPods:

pod 'SynheartEmotion', '~> 0.1.0'

📖 Repository: synheart-emotion-swift

📂 Repository Structure

This repository serves as the source of truth for shared resources across all SDK implementations:

synheart-emotion/                  # Source of truth repository
├── models/                        # ML model definitions and assets
│   ├── wesad_emotion_v1_0.json    # Model configuration
│   └── *.onnx                     # Pre-trained model weights
│
├── docs/                          # Technical documentation
│   ├── RFC-E1.1.md                # Complete technical specification
│   └── MODEL_CARD.md              # Model details and performance
│
├── tools/                         # Development tools
│   ├── synthetic-data-generator/  # Generate test biosignal data
│   └── wesad-reference-models/    # Research artifacts (14 ML models)
│
├── examples/                      # Cross-platform example applications
├── scripts/                       # Build and deployment scripts
└── CONTRIBUTING.md                # Contribution guidelines for all SDKs

Platform-specific SDK repositories (maintained separately):

  • synheart-emotion-dart
  • synheart-emotion-python
  • synheart-emotion-kotlin
  • synheart-emotion-swift

🎯 Quick Start

Python (Recommended for Testing)

from datetime import datetime
from synheart_emotion import EmotionEngine, EmotionConfig

# Initialize engine
config = EmotionConfig()
engine = EmotionEngine.from_pretrained(config)

# Push biosignal data
engine.push(
    hr=72.0,
    rr_intervals_ms=[850.0, 820.0, 830.0, 845.0, 825.0],
    timestamp=datetime.now()
)

# Get inference results
results = engine.consume_ready()
for result in results:
    print(f"Emotion: {result.emotion} ({result.confidence:.1%})")

Dart/Flutter

import 'package:synheart_emotion/synheart_emotion.dart';

// Initialize the emotion engine
final engine = EmotionEngine.fromPretrained(
  const EmotionConfig(
    window: Duration(seconds: 60),
    step: Duration(seconds: 5),
  ),
);

// Push biometric data
engine.push(
  hr: 72.0,
  rrIntervalsMs: [850.0, 820.0, 830.0, 845.0, 825.0],
  timestamp: DateTime.now(),
);

// Get results
final results = engine.consumeReady();
for (final result in results) {
  print('Emotion: ${result.emotion} (${result.confidence})');
}

Kotlin

import ai.synheart.emotion.*
import java.util.Date

val config = EmotionConfig()
val engine = EmotionEngine.fromPretrained(config)

engine.push(
    hr = 72.0,
    rrIntervalsMs = listOf(850.0, 820.0, 830.0, 845.0, 825.0),
    timestamp = Date()
)

val results = engine.consumeReady()
results.forEach { result ->
    println("Emotion: ${result.emotion} (${result.confidence})")
}

Swift

import Foundation
import SynheartEmotion

let config = EmotionConfig()
let engine = try! EmotionEngine.fromPretrained(config: config)

engine.push(
    hr: 72.0,
    rrIntervalsMs: [850.0, 820.0, 830.0, 845.0, 825.0],
    timestamp: Date()
)

let results = engine.consumeReady()
results.forEach { result in
    print("Emotion: \(result.emotion) (\(result.confidence))")
}

📊 Supported Emotions

The library currently supports three emotion categories:

  • 😊 Amused: Positive, engaged emotional state
  • 😌 Calm: Relaxed, peaceful emotional state
  • 😰 Stressed: Anxious, tense emotional state
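
Downstream handling is typically a dispatch on the label. A minimal sketch in Python, assuming the engine emits these category names as strings (as the Quick Start output suggests); the response strings and the RESPONSES mapping are placeholders of ours:

# Hypothetical dispatch on the three labels; the response strings are
# placeholders, not part of the SDK.
from synheart_emotion import EmotionEngine, EmotionConfig

engine = EmotionEngine.from_pretrained(EmotionConfig())
# ... push biosignal samples as in the Quick Start ...

RESPONSES = {
    "Amused": "log engagement event",
    "Calm": "no action needed",
    "Stressed": "suggest a breathing exercise",
}

for result in engine.consume_ready():
    action = RESPONSES.get(result.emotion, "unrecognized label")
    print(f"{result.emotion} ({result.confidence:.1%}): {action}")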

πŸ› οΈ Development Tools

Synthetic Data Generator

Generate realistic biosignal data for testing all SDKs:

cd tools/synthetic-data-generator

# Generate test data
python cli.py --emotion Calm --duration 60 --output ./data

# Generate session with transitions
python cli.py --session Calm Stressed Amused --transitions --output ./data

Exports to: CSV, JSON, Python, Kotlin, Swift
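
Generated files can be replayed straight into an engine for end-to-end testing. A sketch assuming a CSV export; the file name, the column names (timestamp, hr, rr_intervals_ms), and the semicolon-separated RR field are guesses at the schema, so check the generator docs for the actual format:

# Sketch: replaying generator output through the engine.
# Column names and the output file name below are assumptions.
import csv
from datetime import datetime
from synheart_emotion import EmotionEngine, EmotionConfig

engine = EmotionEngine.from_pretrained(EmotionConfig())

with open("data/calm_60s.csv") as f:  # hypothetical generator output
    for row in csv.DictReader(f):
        engine.push(
            hr=float(row["hr"]),
            rr_intervals_ms=[float(x) for x in row["rr_intervals_ms"].split(";")],
            timestamp=datetime.fromisoformat(row["timestamp"]),
        )

for result in engine.consume_ready():
    print(result.emotion, result.confidence)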

📖 Data Generator Documentation

WESAD Reference Models

Research artifacts with 14 pre-trained ML models from the WESAD dataset:

  • XGBoost, RandomForest, ExtraTrees, KNN, LDA, SVM, etc.
  • For research and model comparison only
  • Not for production use (use SDKs instead)

📖 Research Models Documentation

πŸ—οΈ Architecture

All SDKs implement the same architecture:

Wearable / Sensor
   └─(HR bpm, RR ms)──► Your App
                           │
                           ▼
                   Synheart Emotion SDK
            [Ring Buffer] → [Feature Extraction] → [Normalization]
                                     │
                                  [Model]
                                     │
                              EmotionResult

Components:

  • Ring Buffer: Holds the last 60s of HR/RR data (configurable)
  • Feature Extractor: Computes HR mean, SDNN, RMSSD (see the sketch below)
  • Scaler: Standardizes features using training μ/σ
  • Model: Linear SVM (One-vs-Rest) with softmax
  • Emitter: Throttles outputs (default: every 5s)
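
The feature vector is small enough to write down directly. Here is an illustrative Python sketch of the extraction step using the standard HRV formulas; it is not the SDK's internal code, which also applies artifact rejection:

# Illustrative sketch of feature extraction (not the SDK's internal code):
# HR mean over the window, plus SDNN and RMSSD from the RR intervals.
import math

def extract_features(hr_samples, rr_ms):
    hr_mean = sum(hr_samples) / len(hr_samples)

    # SDNN: standard deviation of the RR (NN) intervals in the window
    rr_mean = sum(rr_ms) / len(rr_ms)
    sdnn = math.sqrt(sum((x - rr_mean) ** 2 for x in rr_ms) / len(rr_ms))

    # RMSSD: root mean square of successive RR-interval differences
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))

    return [hr_mean, sdnn, rmssd]

print(extract_features([72.0, 71.0, 73.0], [850.0, 820.0, 830.0, 845.0, 825.0]))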

🎨 API Parity

All SDKs expose identical functionality:

Feature             Python  Kotlin  Swift  Dart
EmotionConfig       ✅      ✅      ✅     ✅
EmotionEngine       ✅      ✅      ✅     ✅
EmotionResult       ✅      ✅      ✅     ✅
EmotionError        ✅      ✅      ✅     ✅
Feature Extraction  ✅      ✅      ✅     ✅
Linear SVM Model    ✅      ✅      ✅     ✅
Thread-Safe         ✅      ✅      ✅     ✅
Sliding Window      ✅      ✅      ✅     ✅

🧪 Test Results

Python SDK

  • ✅ 16/16 tests passing (100%)
  • ✅ All examples working
  • ✅ CLI demo functional

Kotlin SDK

  • ✅ All modules compile successfully
  • ✅ 6 Kotlin source files
  • ✅ API parity verified
  • ✅ Gradle build and tests passing

Swift SDK

  • ✅ Swift build successful
  • ✅ 6 Swift source files
  • ✅ Multi-platform support (iOS, macOS, watchOS, tvOS)
  • ✅ Swift Package Manager integration

🔬 Model Details

Model Type: Linear SVM (One-vs-Rest)
Task: Momentary emotion recognition from HR/RR
Input Features: [hr_mean, sdnn, rmssd] over a 60s rolling window
Performance:

  • Accuracy: ~78%
  • Macro-F1: ~72%
  • Latency: < 5ms on modern mid-range devices

The model is trained on a WESAD-derived 3-class subset with artifact rejection and normalization.
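
For intuition, inference reduces to standardizing the three features, scoring one linear classifier per class, and softmaxing the scores. A sketch in Python; every numeric value below (μ/σ and per-class weights) is a placeholder, not a shipped coefficient:

# Sketch of the inference math. All numbers are placeholders for
# illustration; the real coefficients ship in the model asset.
import math

MU    = [75.0, 50.0, 40.0]   # hypothetical training means for [hr_mean, sdnn, rmssd]
SIGMA = [10.0, 20.0, 18.0]   # hypothetical training standard deviations

CLASSES = {                   # one (weights, bias) pair per class (One-vs-Rest)
    "Amused":   ([ 0.8, -0.1,  0.2],  0.1),
    "Calm":     ([-0.9,  0.5,  0.6],  0.0),
    "Stressed": ([ 0.7, -0.6, -0.5], -0.1),
}

def infer(features):
    # Standardize with training statistics
    z = [(x - m) / s for x, m, s in zip(features, MU, SIGMA)]
    # Linear score per class
    scores = {c: sum(w * xi for w, xi in zip(ws, z)) + b for c, (ws, b) in CLASSES.items()}
    # Softmax over class scores (numerically stabilized)
    peak = max(scores.values())
    exps = {c: math.exp(v - peak) for c, v in scores.items()}
    total = sum(exps.values())
    return {c: v / total for c, v in exps.items()}

print(infer([72.0, 45.0, 38.0]))  # probability per emotion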

📖 Model Card | RFC E1.1

🔒 Privacy & Security

  • On-Device Processing: All emotion inference happens locally
  • No Data Retention: Raw biometric data is not retained after processing
  • No Network Calls: No data is sent to external servers
  • Privacy-First Design: No built-in storage; you control what gets persisted
  • Not a Medical Device: This library is for wellness and research purposes only

⚠️ Important: The default model weights are trained on the WESAD dataset and achieve ~78% accuracy on that benchmark. For production use, consider retraining or fine-tuning on data representative of your users.

📚 Documentation

SDK Documentation

  • synheart-emotion-dart
  • synheart-emotion-python
  • synheart-emotion-kotlin
  • synheart-emotion-swift

Tools Documentation

  • tools/synthetic-data-generator: synthetic biosignal data generator
  • tools/wesad-reference-models: WESAD research artifacts

Technical Documentation

  • docs/RFC-E1.1.md: complete technical specification
  • docs/MODEL_CARD.md: model details and performance

🔧 Development

Requirements

  • Dart SDK: Flutter >= 3.10.0, Dart >= 3.0.0
  • Python SDK: Python >= 3.8
  • Kotlin SDK: Kotlin 1.8+, Android API 21+ (if targeting Android)
  • Swift SDK: Swift 5.9+, iOS 13+ / macOS 11+ (if targeting Apple platforms)

Running Tests

For SDK-specific tests, see the individual SDK repositories (linked above).

Generate test data for all SDKs:

cd tools/synthetic-data-generator
python cli.py --emotion Calm --duration 60 --output ./test_data

🔗 Integration Examples

With Custom Data Source

# Python example
from synheart_emotion import EmotionEngine, EmotionConfig
from your_sensor import get_biosignal_stream

engine = EmotionEngine.from_pretrained(EmotionConfig())

for data_point in get_biosignal_stream():
    engine.push(
        hr=data_point.heart_rate,
        rr_intervals_ms=data_point.rr_intervals,
        timestamp=data_point.timestamp
    )

    results = engine.consume_ready()
    if results:
        print(f"Current emotion: {results[0].emotion}")

With Apple HealthKit (Swift)

See Swift SDK Examples for HealthKit integration.

📈 Performance Targets

Target Performance (mid-range device):

  • Latency: < 5ms per inference
  • Model Size: < 100 KB
  • CPU Usage: < 2% during active streaming
  • Memory: < 3 MB (engine + buffers)
  • Accuracy: 78% on WESAD dataset (3-class emotion recognition)
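
A quick way to sanity-check the latency target on your own hardware is to time push/consume cycles over synthetic input. A rough sketch; the methodology (steady 1 Hz synthetic samples, wall-clock per cycle) is ours, not an official benchmark:

# Rough latency probe against the < 5ms target (not an official benchmark).
import time
from datetime import datetime, timedelta
from synheart_emotion import EmotionEngine, EmotionConfig

engine = EmotionEngine.from_pretrained(EmotionConfig())
t0 = datetime.now()
durations = []

for i in range(300):  # ~5 minutes of simulated 1 Hz samples
    start = time.perf_counter()
    engine.push(
        hr=72.0,
        rr_intervals_ms=[850.0, 820.0, 830.0],
        timestamp=t0 + timedelta(seconds=i),
    )
    engine.consume_ready()
    durations.append(time.perf_counter() - start)

print(f"max per-cycle latency: {max(durations) * 1000:.2f} ms")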

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details on:

  • Code style and conventions
  • Testing requirements
  • Pull request process
  • Development setup

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


📖 Citation

If you use this SDK in your research:

@software{synheart_emotion,
  title = {Synheart Emotion: Multi-platform SDK for on-device emotion inference from biosignals},
  author = {Synheart AI Team},
  year = {2025},
  version = {0.1.0},
  url = {https://github.com/synheart-ai/synheart-emotion}
}

WESAD Dataset:

@inproceedings{schmidt2018introducing,
  title={Introducing WESAD, a multimodal dataset for wearable stress and affect detection},
  author={Schmidt, Philip and Reiss, Attila and Duerichen, Robert and Marberger, Claus and Van Laerhoven, Kristof},
  booktitle={Proceedings of the 20th ACM International Conference on Multimodal Interaction},
  year={2018}
}

👥 Authors

  • Israel Goytom - Initial work, RFC Design & Architecture
  • Synheart AI Team - Development & Research

Made with ❀️ by the Synheart AI Team

Technology with a heartbeat.
