The capstone project for the 5-day Gen AI Intensive Course with Google.

shrBadihi/MoodMirror


MoodMirror: A Multi-Modal Journaling Assistant Powered by Gen AI

🧠 Why MoodMirror?

In an age where mental health challenges are on the rise and people are constantly navigating stress, anxiety, and burnout, self-reflection has become more essential than ever. Yet journaling, one of the most effective tools for emotional clarity, often lacks structure, feedback, and personalization.

That's where MoodMirror comes in.

MoodMirror is an AI-powered, multimodal journaling assistant that listens, reads, and sees. Whether you write down your thoughts, record a voice note, or upload a selfie, MoodMirror analyzes your entries and reflects back structured insights: emotional patterns, suggestions, affirmations, and visualizations that guide your self-awareness journey.

This was developed as part of the 5-day Gen AI Intensive Course with Google and is my capstone project for the program.


💡 The Use Case: Turning Reflection Into Guidance with Gen AI

The Problem: Journaling is widely known to improve mental health, but most people struggle with consistency, reflection, or even knowing what to write. Traditional apps offer digital notebooks but little insight, and none adapt to how you choose to express yourself: through words, voice, or images.

The Idea: What if your journaling app could do more than store thoughts? What if it could understand them? MoodMirror was born from this question: a journaling assistant that listens to your voice, analyzes your writing, or reads your face, and then reflects something back: insight, encouragement, patterns, and care.

The Solution: MoodMirror uses Generative AI to turn raw emotion into actionable reflection. With Gemini 2.0 Flash, we analyze text and voice entries using:

  • Structured output (JSON mode) for consistent insights
  • Few-shot prompting to teach the model how to respond empathetically
  • Retrieval-Augmented Generation (RAG) to ground suggestions in real, curated wellness strategies

Combined with image understanding (via FER) and speech-to-text pipelines, the tool brings together multiple Gen AI capabilities into a unified emotional assistant.
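The RAG step itself isn't shown in this README. As a minimal sketch of the idea, the snippet below ranks a few curated wellness snippets against a journal entry and prepends the best matches to the prompt; the documents, function names, and the keyword-overlap scoring (standing in for real embedding similarity) are all illustrative assumptions, not the notebook's actual code.

```python
import re

# Hypothetical curated wellness snippets; a real corpus would be larger
# and ranked with embeddings (e.g. a Gemini embedding model), not keywords.
WELLNESS_DOCS = [
    "Box breathing: inhale, hold, exhale, hold for four counts each to ease anxiety.",
    "A brisk ten-minute walk can interrupt rumination and lift mood.",
    "Gratitude journaling: list three small things that went well today.",
]

def _words(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(entry: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the entry and keep the top k."""
    return sorted(docs, key=lambda d: -len(_words(entry) & _words(d)))[:k]

def build_grounded_prompt(entry: str) -> str:
    """Prepend retrieved context so suggestions stay grounded in it."""
    context = "\n".join(retrieve(entry, WELLNESS_DOCS))
    return (
        f"Context:\n{context}\n\n"
        f"Journal entry:\n{entry}\n\n"
        "Respond with insights grounded in the context above."
    )
```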

Journaling becomes more than expressive. It becomes interactive, and even healing.


A demo video is here.


🔗 Try It Out

Want to experiment with it yourself? 👉 Run the Notebook on Kaggle (link will be updated with final submission)


🔧 How It Works (with Gen AI!)

1. ✍️ Text Entry → Structured Insights

At the core of MoodMirror is Gemini 2.0 Flash, which processes text entries using few-shot prompting and structured output (JSON mode). Each journal entry is grounded in context via RAG, which selects the most relevant mental wellness documents.
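The few-shot examples themselves aren't reproduced in this README; as a hedged sketch, a prompt builder might prepend a couple of entry/response pairs before the new entry, like the one below (the example text and JSON fields are invented for illustration). The structured-output configuration then requests JSON matching the schema:

```python
# Hypothetical few-shot pairs teaching an empathetic, structured response style.
FEW_SHOT = [
    (
        "I snapped at my coworker and feel terrible about it.",
        '{"primary_emotion": "guilt", "themes": ["work stress"], '
        '"suggestion": "Consider a brief, honest apology.", '
        '"affirmation": "One hard moment does not define you."}',
    ),
]

def build_prompt(entry: str) -> str:
    """Prepend example entry/response pairs, then ask for a response to the new entry."""
    parts = [
        f"Entry: {example_entry}\nResponse: {example_json}"
        for example_entry, example_json in FEW_SHOT
    ]
    parts.append(f"Entry: {entry}\nResponse:")
    return "\n\n".join(parts)
```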

from google import genai
from google.genai import types

client = genai.Client()  # reads the GOOGLE_API_KEY environment variable

config = types.GenerateContentConfig(
    temperature=0.9,  # higher temperature encourages more varied phrasing
    top_k=5,          # consider only the 5 most likely tokens at each step
    response_mime_type="application/json",  # request structured JSON output
    response_schema=JournalAnalysis,        # schema the response must follow
)
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[prompt],
    config=config,
)
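The JournalAnalysis schema is defined elsewhere in the notebook and isn't shown here; a plausible reconstruction from the four fields it returns, written as a TypedDict (which the google-genai SDK accepts as a response_schema, alongside Pydantic models), might look like this. The exact field names are an assumption:

```python
from typing import TypedDict

# Assumed shape of the structured insights; the notebook's class may differ.
class JournalAnalysis(TypedDict):
    primary_emotion: str   # e.g. "anxious", "relieved"
    themes: list[str]      # e.g. ["burnout", "productivity"]
    suggestion: str        # CBT-style suggestion
    affirmation: str       # short encouraging statement
```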

It returns:

  • Primary Emotion (e.g., anxious, relieved)
  • Themes (e.g., burnout, productivity)
  • CBT-Style Suggestion
  • Affirmation

Example of a journal entry and its processed structured insights.


2. 🎤 Audio Support

MoodMirror uses Google Speech Recognition to transcribe .m4a or .wav voice notes. The transcript is passed through the same analysis pipeline.
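A sketch of that transcription step is below; the actual notebook code may differ. It assumes the SpeechRecognition package (which reads WAV/AIFF/FLAC natively, so .m4a notes would first need conversion, e.g. via pydub/ffmpeg), and the helper names are my own:

```python
import os

def needs_conversion(path: str) -> bool:
    """SpeechRecognition reads WAV natively; .m4a must be converted to .wav first."""
    return os.path.splitext(path)[1].lower() == ".m4a"

def transcribe(path: str) -> str:
    """Transcribe a WAV voice note via the free Google Web Speech API endpoint."""
    import speech_recognition as sr  # pip install SpeechRecognition
    recognizer = sr.Recognizer()
    with sr.AudioFile(path) as source:
        audio = recognizer.record(source)  # read the whole file into memory
    return recognizer.recognize_google(audio)
```

The transcript returned here would then be passed to the same Gemini analysis pipeline as a typed entry.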

Example of a voice note being transcribed and analyzed for insights.


3. 📸 Emotion from Images

Upload a selfie, and MoodMirror uses FER (Facial Emotion Recognition) with facenet-pytorch to analyze expressions like happy, sad, neutral, or angry.

Example of an image being processed and emotion analysis displayed.
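In code, the detection step might look like the sketch below, based on the FER library's documented API (FER(mtcnn=True) and detect_emotions); the top_emotion helper and the no-face fallback are my own additions, not necessarily what the notebook does:

```python
def top_emotion(scores: dict[str, float]) -> str:
    """Pick the highest-scoring label from FER's per-face emotion score dict."""
    return max(scores, key=scores.get)

def detect_emotion(image_path: str) -> str:
    """Run facial emotion recognition on a selfie and return the dominant emotion."""
    import cv2
    from fer import FER  # pip install fer
    detector = FER(mtcnn=True)  # MTCNN face detector for better localization
    faces = detector.detect_emotions(cv2.imread(image_path))
    if not faces:
        return "no face detected"
    return top_emotion(faces[0]["emotions"])  # e.g. "happy", "sad", "neutral", "angry"
```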


4. 📊 Mood Visualization

Your data doesn't disappear. MoodMirror visualizes:

  • 🪄 Mood Trend Line over time.
  • 📊 Emotion Distribution across entries.
  • ☁️ Word Cloud of recurring thoughts.

Example of a mood trend visualization over time based on user entries.
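One way to sketch the trend line: map each entry's detected emotion to a numeric valence and plot those points over time. The valence values below are an assumption for illustration, not taken from the notebook:

```python
# Hypothetical mapping from detected emotions to a plottable valence score.
VALENCE = {
    "happy": 1.0, "relieved": 0.5, "neutral": 0.0,
    "anxious": -0.5, "sad": -1.0, "angry": -1.0,
}

def mood_trend(entries: list[tuple[str, str]]) -> list[tuple[str, float]]:
    """Map (date, emotion) entries to (date, valence) points for a trend line."""
    return [(date, VALENCE.get(emotion, 0.0)) for date, emotion in entries]

def plot_trend(entries: list[tuple[str, str]]) -> None:
    """Draw the trend with matplotlib (any plotting library would do)."""
    import matplotlib.pyplot as plt
    dates, scores = zip(*mood_trend(entries))
    plt.plot(dates, scores, marker="o")
    plt.ylabel("Mood valence")
    plt.show()
```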


5. 🗓 Weekly Summary

At the end of each week, MoodMirror generates a summary of your emotional journey, highlighting the primary emotions and themes that emerged, as well as any actionable suggestions for improvement.

Example of the weekly summary showing insights and suggestions based on entries.
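A natural way to produce this is to feed the week's structured insights back into Gemini as a summarization prompt; the assembly below is a sketch under that assumption, with invented field names matching the structured output above:

```python
def weekly_summary_prompt(week: list[dict]) -> str:
    """Collapse a week of per-entry insights into a single summarization prompt."""
    lines = [
        f"- {e['date']}: {e['primary_emotion']} (themes: {', '.join(e['themes'])})"
        for e in week
    ]
    return (
        "Summarize this week's emotional journey, highlight the primary emotions "
        "and recurring themes, and offer one actionable suggestion.\n"
        + "\n".join(lines)
    )
```

The resulting prompt would then be sent through the same generate_content call used for individual entries.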


6. 💬 Chatbot Interaction

In addition to visual insights, MoodMirror also features a chatbot that engages with you to provide real-time support, answer questions, and offer encouragement based on the emotions and themes detected in your entries.

Example of a chatbot conversation providing personalized feedback and support.
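With the google-genai SDK, a context-aware chat session can be opened via client.chats.create; the sketch below primes it with the user's recent emotions through a system instruction. The instruction wording and helper names are my assumptions:

```python
def build_system_instruction(recent_emotions: list[str]) -> str:
    """Prime the chatbot with the user's recent emotional context (wording assumed)."""
    return (
        "You are a gentle journaling companion. The user's recent entries showed: "
        + ", ".join(recent_emotions)
        + ". Offer support and encouragement; do not give medical advice."
    )

def start_support_chat(recent_emotions: list[str]):
    """Open a Gemini chat session via the google-genai SDK (needs GOOGLE_API_KEY)."""
    from google import genai
    from google.genai import types
    client = genai.Client()
    return client.chats.create(
        model="gemini-2.0-flash",
        config=types.GenerateContentConfig(
            system_instruction=build_system_instruction(recent_emotions)
        ),
    )

# Usage sketch:
# chat = start_support_chat(["anxious", "relieved"])
# print(chat.send_message("I had a rough day.").text)
```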


💬 Final Thoughts

MoodMirror reimagines journaling as a deeply interactive, multi-sensory experience. By integrating Gen AI and multimodal inputs, it creates space for reflection, growth, and emotional clarity, all with the click of a button.

It's not just about writing your feelings down. It's about having them understood.

Thanks to the Gen AI team at Google for the opportunity to build this project.
