On-device emotion inference from biosignals (heart rate and RR intervals) for Android applications.
- ✅ Build Status: All modules compile successfully
- ✅ API Parity: Matches the Flutter/iOS/Python implementations
- ✅ Thread-Safe: Uses `ConcurrentLinkedQueue` for concurrent operations
- Privacy-first: All processing happens on-device
- Real-time: <5ms inference latency
- Three emotion states: Amused, Calm, Stressed
- Sliding window: 60s window with 5s step (configurable)
- Kotlin-first: Idiomatic Kotlin API with coroutine support
Add the JitPack repository to your root settings.gradle or build.gradle:
```kotlin
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        mavenCentral()
        maven { url = uri("https://jitpack.io") }
    }
}
```

Then add the dependency:
```kotlin
dependencies {
    implementation("com.github.synheart-ai:synheart-emotion-android:0.1.0")
}
```

Replace `0.1.0` with the latest release version from GitHub Releases.
```kotlin
dependencies {
    implementation("ai.synheart:emotion:0.1.0")
}
```

Include as a local module in your Android project.
Add this to your Activity or test:
```kotlin
import com.synheart.emotion.*

// Quick verification
val config = EmotionConfig()
val engine = EmotionEngine.fromPretrained(config)
println("✓ SDK initialized successfully")
```

```kotlin
import com.synheart.emotion.*
import java.util.Date

// Create engine with default configuration
val config = EmotionConfig()
val engine = EmotionEngine.fromPretrained(config)

// Push data from wearable
engine.push(
    hr = 72.0,
    rrIntervalsMs = listOf(850.0, 820.0, 830.0, /* ... */),
    timestamp = Date()
)

// Get inference results when ready
val results = engine.consumeReady()
for (result in results) {
    println("Emotion: ${result.emotion}")
    println("Confidence: ${result.confidence}")
    println("Probabilities: ${result.probabilities}")
}
```

```kotlin
val config = EmotionConfig(
    windowMs = 60000L,   // 60-second window
    stepMs = 5000L,      // 5-second step
    minRrCount = 30,     // Minimum RR intervals per window
    hrBaseline = 65.0    // Personal HR baseline
)
```

```kotlin
val engine = EmotionEngine.fromPretrained(
    config = config,
    onLog = { level, message, context ->
        when (level) {
            "error" -> Log.e("EmotionEngine", message)
            "warn" -> Log.w("EmotionEngine", message)
            "info" -> Log.i("EmotionEngine", message)
            "debug" -> Log.d("EmotionEngine", message)
        }
    }
)
```

```kotlin
val stats = engine.getBufferStats()
println("Buffer count: ${stats["count"]}")
println("Duration: ${stats["duration_ms"]}ms")
println("HR range: ${stats["hr_range"]}")
println("RR count: ${stats["rr_count"]}")
```

```kotlin
engine.clear()
```

Configuration for the emotion inference engine.
- `modelId: String` - Model identifier (default: `"svm_linear_wrist_sdnn_v1_0"`)
- `windowMs: Long` - Rolling window size in milliseconds (default: 60000)
- `stepMs: Long` - Emission cadence in milliseconds (default: 5000)
- `minRrCount: Int` - Minimum RR intervals required (default: 30)
- `returnAllProbas: Boolean` - Return all label probabilities (default: true)
- `hrBaseline: Double?` - Optional HR baseline for personalization
- `priors: Map<String, Double>?` - Optional label priors for calibration
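Putting these options together, a personalized configuration might look like the following sketch. The parameter names come from the reference above; the baseline and prior values are purely illustrative and should come from your own calibration data:

```kotlin
// Illustrative values only: hrBaseline and priors must be derived
// from the individual user's own calibration data.
val personalizedConfig = EmotionConfig(
    modelId = "svm_linear_wrist_sdnn_v1_0",
    windowMs = 60000L,
    stepMs = 5000L,
    minRrCount = 30,
    returnAllProbas = true,
    hrBaseline = 62.0, // user's resting heart rate
    priors = mapOf("Amused" to 0.2, "Calm" to 0.6, "Stressed" to 0.2)
)
```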
Main emotion inference engine.
Methods:
- `push(hr, rrIntervalsMs, timestamp, motion)` - Push a new data point
- `consumeReady(): List<EmotionResult>` - Consume ready results
- `getBufferStats(): Map<String, Any>` - Get buffer statistics
- `clear()` - Clear all buffered data
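Because `consumeReady()` only returns results that are already ready, a common pattern is to push samples as they arrive and poll on a background coroutine. This is a sketch, not the definitive integration: `scope`, `readLatestHr()`, and `readLatestRrs()` are hypothetical app-side names standing in for your coroutine scope and wearable data source:

```kotlin
// Sketch: push incoming samples and poll for results off the main thread.
scope.launch(Dispatchers.Default) {
    while (isActive) {
        engine.push(
            hr = readLatestHr(),
            rrIntervalsMs = readLatestRrs(),
            timestamp = Date()
        )
        engine.consumeReady().forEach { result ->
            Log.i("Emotion", "${result.emotion} (${result.confidence})")
        }
        delay(1000L) // poll faster than the 5 s emission cadence
    }
}
```

Since the engine is thread-safe, pushing from a sensor callback thread while consuming from a coroutine should also be workable.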
Companion:
- `fromPretrained(config, model?, onLog?)` - Create an engine from a pretrained model
Result of emotion inference.
- `timestamp: Date` - Timestamp when inference was performed
- `emotion: String` - Predicted emotion label (top-1)
- `confidence: Double` - Confidence score (0.0 to 1.0)
- `probabilities: Map<String, Double>` - All label probabilities
- `features: Map<String, Double>` - Extracted features
- `model: Map<String, Any>` - Model metadata
Sealed class representing errors:
- `EmotionError.TooFewRR(minExpected, actual)` - Too few RR intervals
- `EmotionError.BadInput(reason)` - Invalid input data
- `EmotionError.ModelIncompatible(expectedFeats, actualFeats)` - Model incompatible
- `EmotionError.FeatureExtractionFailed(reason)` - Feature extraction failed
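Because `EmotionError` is sealed, errors can be handled exhaustively with a `when` expression. The sketch below assumes the constructor parameters listed above are exposed as properties; how an error actually reaches your code (exception, result type, callback) depends on the API surface:

```kotlin
// Sketch: exhaustive handling of the sealed error hierarchy.
fun describe(error: EmotionError): String = when (error) {
    is EmotionError.TooFewRR ->
        "Need at least ${error.minExpected} RR intervals, got ${error.actual}"
    is EmotionError.BadInput ->
        "Invalid input: ${error.reason}"
    is EmotionError.ModelIncompatible ->
        "Model expects ${error.expectedFeats} features, got ${error.actualFeats}"
    is EmotionError.FeatureExtractionFailed ->
        "Feature extraction failed: ${error.reason}"
}
```

The compiler enforces that every subclass is covered, so adding a new error type in a future release would surface as a build error rather than a silent fallthrough.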
- Android API 21+ (Android 5.0 Lollipop)
- Kotlin 1.8+
IMPORTANT: This library uses demo placeholder model weights that are NOT trained on real biosignal data. For production use, you must provide your own trained model weights.
All processing happens on-device. No data is sent to external servers.
See LICENSE file for details.
Contributions are welcome! See our Contributing Guidelines for details.
- Main Repository: synheart-emotion (Source of Truth)
- Documentation: RFC E1.1
- Model Card: Model Card
- Examples: Examples
- Models: Pre-trained Models
- Tools: Development Tools
- Synheart AI: synheart.ai
- Issues: GitHub Issues