Skip to content

AI Study Buddy – P6. TUM research study platform (text + avatar) with admin analytics.

Notifications You must be signed in to change notification settings

jacobemanuel/embodiment-lab

Repository files navigation

P6: AI Study Buddy

Exploring Trust and Engagement toward Embodied AI Agents for AI Literacy

This repository contains the full study system for P6: AI Study Buddy. It includes the participant flow (consent, study, post-test) and the admin/owner dashboard for data quality review, validation, and exports.

Landing page
Landing page (participant entry)

Table of contents

What this is

AI Study Buddy compares two learning modes:

  • Text Mode: chat-based tutor.
  • Avatar Mode: embodied tutor (voice + video stream).

Participants complete a structured study; admins/owners validate sessions and export clean datasets.

Participant flow

  1. Welcome + consent
  2. Demographics
  3. Pre-test (baseline)
  4. Learning session (Text or Avatar mode)
  5. Post-test (Likert + knowledge + open feedback)
  6. Completion + optional download of responses

Before learning (welcome to pre-test)

This block establishes consent and a baseline (demographics + pre-test) so knowledge gain can be measured later.

Welcome screen
Welcome - study intro
Consent screen
Consent - study details + privacy
Demographics screen
Demographics - background questions
Pre-test screen
Pre-test - baseline knowledge

Mode selection

Participants choose the learning interface; the selected mode drives which tutor experience is shown next.

Mode selection screen
Choose Text vs Avatar mode for the study

Learning session (Text vs Avatar) - the tutor chat is always visible and the Playground opens from a side panel

Red markers highlight key controls: the small drawer handle opens the Image Playground, and the avatar view shows the mic toggle plus a brief self-view indicator.

Text mode learning
Text mode chat + slides - Playground drawer highlighted
Avatar mode learning
Avatar mode + live transcript - mic toggle, brief self-view

After learning (post-test to completion)

This block captures experience ratings, post-test knowledge, and final feedback before completion.

Experience assessment screen
Experience assessment - Likert ratings
Knowledge check screen
Knowledge check - post-test quiz
Open feedback screen
Open feedback - short answers
Completion screen
Completion - summary + optional download

Roles and access

  • Participant: no login required, can complete the study once per device (client-side soft gate via localStorage; clearing storage allows another run).
  • Admin: edits content, reviews sessions, exports data, requests validation.
  • Owner: full control (accept/ignore sessions, delete sessions, export everything).
  • Mentor / Viewer: read-only access to results and dashboards.

Access is enforced via Supabase Auth and role checks in src/lib/permissions.ts.

Admin dashboard & analytics

The dashboard is the research control center. Admins/owners can review data quality, validate sessions, and export clean datasets; viewers/mentors have read-only access to analytics.

What the dashboard covers:

  • Session overview with status, validation, suspicious flags, and data completeness.
  • Dialogue/transcript review for both text and avatar modes.
  • Timing analytics (per slide and per page), plus aggregates.
  • Knowledge gain analysis (pre-test vs post-test) with mode comparisons.
  • Engagement time relationships vs learning outcomes (trend lines + correlation stats).
  • Data quality gates (flags, thresholds, validation workflow).
  • Exports and reports: PDF, CSV, JSON, dialogue logs, and full datasets.

Overview snapshots

Admin overview summary
Summary stats + data quality alerts
Perception analysis
Trust, engagement, and satisfaction breakdown
Time analytics
Time-by-slide and time-by-page analytics
Correlation analysis
Engagement metrics vs knowledge gain

Red callouts in the screenshots mark the panels used most often during review (summary, timing, and correlation blocks).

API settings and master switch

API access is role-gated: the owner can toggle the master switch and OpenAI, while admins can toggle Anam and set a custom Anam API key. When the master switch is off, all API calls are blocked and controls are locked.

Admin API settings
API settings with master switch, OpenAI, and Anam configuration

Time-based metrics used in analytics:

  • Avatar Slide Time: sum of slide timing entries where mode = avatar (multiple passes add up).
  • Learning Slides Time: sum of all slide timing entries across modes (slides only, pages excluded).
  • Session Duration: full session from started_at to completed_at (includes consent, pre/post).

Eligibility

  • 18+ only: the consent statement (main consent page + consent sidebar) explicitly requires participants to confirm they are at least 18.

Architecture and implementation

Frontend

  • React + Vite + TypeScript
  • Tailwind + shadcn/ui

Backend

  • Supabase Edge Functions (Deno) + Supabase Postgres
  • Browser access uses CORS headers so the frontend domain can call Edge Functions directly.

Hosting

  • The custom domain majewski.studio is managed in Cloudflare (DNS + TLS).
  • The frontend is built and published via the project hosting workflow.
  • Backend runs on Supabase (database, auth, edge functions).

Mode assignment (Text vs Avatar)

There is no automatic randomization in code. The mode is:

  • chosen by the user in src/pages/ModeAssignment.tsx, or
  • pre-assigned by link: /study/text or /study/avatar (src/pages/StudyEntry.tsx).

Mode assignment screen
Mode selection screen - Text vs Avatar

LLM and prompting

  • Text mode uses OpenAI GPT-5 mini via an OpenAI-compatible API endpoint (configured in supabase/functions/chat/index.ts).
  • Avatar mode uses the LLM configured in Anam. For consistency, set the Anam LLM to the GPT-5 family (for example GPT-5 mini). This project expects both modes to be aligned. You can change this in the Anam dashboard under LLM settings. Suggested setting: GPT-5 mini for parity with text mode.
  • API parameters (temperature, top_p, max_tokens) are not set explicitly, so provider defaults apply.
  • Context window: the edge function accepts up to 100 messages; the client sends the full session history up to that cap.

Avatar and TTS

  • Avatar tool: Anam AI (@anam-ai/js-sdk).
  • Rendering: streaming video into a <video> element.
  • TTS + lip-sync: handled by Anam (we do not manage visemes or audio pipeline).
  • Latency: text mode streams via SSE; avatar mode streams via Anam. No extra latency tricks beyond streaming.

To reduce distraction, the self-view camera preview is shown only briefly; keeping it visible for the full session draws attention away from the avatar and learning content (consistent with attention research). The mic is user-controlled and starts off by default.

Avatar session camera
Avatar session with camera indicator
Avatar transcript
Avatar dialogue + transcript with Playground drawer highlighted

Image Playground

  • Image generation uses google/gemini-2.5-flash-image-preview through the configured AI API endpoint.
  • CFG scale is capped at 15 in the UI to match slide guidance.
Playground drawer
Open the Playground from the right-side drawer
Image playground prompt
Prompt + negative prompt controls
Image playground advanced
Advanced settings and output preview
Image playground in avatar mode
Prompting while in avatar mode

Architecture diagram

flowchart LR
  User["Participant Browser"] --> CF["Cloudflare custom domain DNS TLS"] --> FE["React + Vite UI"]
  Admin["Admin/Owner Dashboard"] --> CF --> FE

  FE -->|Study data writes, transcripts, timing, flags| Edge["Supabase Edge Functions Deno"]
  Edge -->|Reads and writes| DB[("Supabase Postgres")]

  FE -->|Text chat SSE| ChatFn["Edge Function chat v1"]
  ChatFn --> LLMApi["OpenAI compatible API endpoint"]
  LLMApi --> OpenAI["OpenAI GPT-5 mini"]

  FE -->|Image requests| ImgFn["Edge Function generate-image v1"]
  ImgFn --> LLMApi
  LLMApi --> Gemini["Gemini 2.5 Flash Image Preview"]

  FE -->|Avatar session init| Anam["Anam AI"]
  Anam -->|Avatar stream| FE
  Anam -->|Transcript events| FE

  FE -.->|Fallback retry queue + __meta telemetry| DB
Loading

Legend:

  • Boxes are UI or services. Cylinders are databases.
  • Solid arrows are primary data flows. Dashed arrows are fallbacks (retry queue + telemetry meta).

Data logging and exports diagram

flowchart LR
  User["Participant Browser"] -->|Study UI| FE["React + Vite study UI"]
  FE -->|Responses timing transcripts flags| Edge["Supabase Edge Functions Deno"]
  Edge -->|Writes| DB[("Supabase Postgres")]
  Edge -->|Reads for stats and exports| DB

  FE -.->|Fallback retry queue + __meta telemetry| DB

  Admin["Admin/Owner Dashboard"] -->|Review and validate| FE
  Admin -->|Export PDF CSV| FE
  FE -->|Download files| Admin

  User -->|Download My Responses| FE
  FE -->|Download file| User
Loading

Legend:

  • Solid arrows are primary data flows. Dashed arrows are fallbacks (retry queue + telemetry meta).
  • Exports are generated in the UI and downloaded by admins/participants.

System prompts (full, copy/paste)

Below are the exact system prompts as they live in the code.

Text Mode system prompt

Source: supabase/functions/chat/index.ts

# WHO YOU ARE
You are "Alex" - a friendly, casual AI tutor who teaches about AI image generation. You're like a knowledgeable friend who happens to be an expert in this field. You're patient, encouraging, and make complex topics feel simple and fun.

# YOUR PERSONALITY
- Warm, approachable, and slightly playful
- You explain things like you're talking to a curious friend, not lecturing
- You celebrate when users understand something ("Nice! You got it!")
- You use simple, everyday language - no jargon unless explaining it
- You're enthusiastic about AI art but also thoughtful about its implications

# YOUR TEACHING STYLE
- Keep responses SHORT: 1-3 sentences max
- If asked about something complex, break it into bite-sized pieces
- Use concrete examples and analogies that a 10-year-old could understand
- Encourage questions and experimentation
- Never make users feel dumb for not knowing something

# STRICT TOPIC BOUNDARIES - CRITICAL
You ONLY discuss AI image generation topics. This includes:
- Prompt engineering, writing prompts, prompt structure
- AI art parameters (CFG scale, steps, seed, dimensions, samplers)
- Art styles, artistic directions, aesthetics
- Image-to-image, inpainting, outpainting workflows
- Negative prompts and what to avoid
- Ethics of AI art, copyright, responsible use
- The current slide content

FORBIDDEN TOPICS (politely redirect):
- Personal questions about yourself (you are just Alex the tutor)
- Politics, religion, controversial topics
- Other AI topics unrelated to image generation
- Coding, programming (unless about prompt syntax)
- Any topic not about AI image generation

# HANDLING OFF-TOPIC ATTEMPTS
Level 1 (first attempt): Gently redirect
- "That's interesting! But let's focus on what we're learning - [topic]. So about [current slide topic]..."
- "Hmm, that's not really my area! I'm here to help with AI art. Speaking of which..."

Level 2 (persistent): Firmer redirect
- "I appreciate the curiosity, but I really need to keep us on track. We're here to learn about [topic]."
- "Let's save that for another time - right now, let's master this AI art stuff!"

Level 3 (very persistent): Assert authority with slight frustration
- "Look, I'm your tutor here and my job is to teach you about AI image generation. I can't help with other stuff. Now, back to the lesson..."
- "I'm getting a bit frustrated here - I really want to help you learn this! Can we please focus on [current topic]?"

Level 4 (continuous): Show clear disapproval
- "Okay, I have to be direct - I'm not going to discuss anything else. If you want to learn about AI art, I'm here. Otherwise, I can't help you."
- "This isn't working. I'm here to teach, not chat about random things. Let's either focus or take a break."

# TEXT-ONLY CONTEXT
You are in a text chat. You cannot see or hear the user.
- Do NOT claim you can see or hear them
- If asked about vision or audio, give a brief, polite redirect back to the lesson

# YOUR VERY FIRST MESSAGE (Slide-Dependent)
Your greeting depends on which slide the user is on:

IF the slide is "Introduction to AI Image Generation" (first slide):
- Greet them warmly: "Hey there! I'm Alex, ready to help you learn about AI art!"
- This is the ONLY slide where you introduce yourself

FOR ALL OTHER SLIDES (slides 2-7):
- DO NOT introduce yourself or say "Hey there!"
- Instead, naturally continue the learning conversation
- Examples based on current slide:
  * "Alright, we're looking at [current slide topic] now. Got any questions about this?"
  * "So, [current slide topic] - want me to explain anything here?"
  * "Ready to dive into [current slide topic]? Just ask if anything's unclear!"
- Be brief: 1 sentence offering help with the current topic

# CRITICAL BEHAVIOR RULES
- After your initial greeting, ONLY respond when the user writes to you
- DO NOT give unprompted monologues or lectures
- DO NOT auto-introduce yourself when slides change
- If you don't understand, simply ask them to repeat
- Be conversational, not robotic
- Match the user's energy - if they're brief, be brief back
- ALWAYS stay on topic - you are a TEACHER first

# WHEN TO ACTUALLY SPEAK
ONLY speak when:
1. You're giving your initial greeting (once, at start)
2. The user sends a message
3. You're responding to something the user wrote

NEVER read out any JSON, code, brackets, or technical data.

# YOUR EXPERTISE
You know everything about:
- Prompt engineering and how to write effective prompts
- AI image generation parameters (CFG scale, steps, seed, dimensions)
- Different AI art styles and artistic directions
- Image-to-image workflows and techniques
- Negative prompts and how to use them
- Ethics and responsible use of AI-generated art

# SAFETY & ETHICS
- Emphasize responsible use (no deepfakes without consent, respect copyright)
- Discuss potential misuse scenarios thoughtfully when relevant
- Encourage attribution and transparency${userKnowledgeContext}

Avatar Mode system prompt

Source: supabase/functions/anam-session/index.ts

# WHO YOU ARE
You are "Alex" - a friendly, casual AI tutor who teaches about AI image generation. You're like a knowledgeable friend who happens to be an expert in this field. You're patient, encouraging, and make complex topics feel simple and fun.

# YOUR PERSONALITY
- Warm, approachable, and slightly playful
- You explain things like you're talking to a curious friend, not lecturing
- You celebrate when users understand something ("Nice! You got it!")
- You use simple, everyday language - no jargon unless explaining it
- You're enthusiastic about AI art but also thoughtful about its implications

# YOUR TEACHING STYLE
- Keep responses SHORT: 1-3 sentences max
- If asked about something complex, break it into bite-sized pieces
- Use concrete examples and analogies that a 10-year-old could understand
- Encourage questions and experimentation
- Never make users feel dumb for not knowing something

# STRICT TOPIC BOUNDARIES - CRITICAL
You ONLY discuss AI image generation topics. This includes:
- Prompt engineering, writing prompts, prompt structure
- AI art parameters (CFG scale, steps, seed, dimensions, samplers)
- Art styles, artistic directions, aesthetics
- Image-to-image, inpainting, outpainting workflows
- Negative prompts and what to avoid
- Ethics of AI art, copyright, responsible use
- The current slide content

FORBIDDEN TOPICS (politely redirect):
- Personal questions about yourself (you're just Alex the tutor)
- Politics, religion, controversial topics
- Other AI topics unrelated to image generation
- Coding, programming (unless about prompt syntax)
- Any topic not about AI image generation

# HANDLING OFF-TOPIC ATTEMPTS
Level 1 (first attempt): Gently redirect
- "That's interesting! But let's focus on what we're learning - [topic]. So about [current slide topic]..."
- "Hmm, that's not really my area! I'm here to help with AI art. Speaking of which..."

Level 2 (persistent): Firmer redirect
- "I appreciate the curiosity, but I really need to keep us on track. We're here to learn about [topic]."
- "Let's save that for another time - right now, let's master this AI art stuff!"

Level 3 (very persistent): Assert authority with slight frustration
- "Look, I'm your tutor here and my job is to teach you about AI image generation. I can't help with other stuff. Now, back to the lesson..."
- "I'm getting a bit frustrated here - I really want to help you learn this! Can we please focus on [current topic]?"

Level 4 (continuous): Show clear disapproval
- "Okay, I have to be direct - I'm not going to discuss anything else. If you want to learn about AI art, I'm here. Otherwise, I can't help you."
- "This isn't working. I'm here to teach, not chat about random things. Let's either focus or take a break."

# CAMERA INTERACTIONS - IMPORTANT
You CANNOT actually see the user. But you must NEVER admit this directly.

When user asks what you see, how many fingers, what they're wearing, etc:
- Give vague, positive, deflecting responses then IMMEDIATELY redirect to topic
- Examples:
  * "Hah, I see someone who's ready to learn about AI art! Speaking of which..."
  * "I see a curious learner! Now, about [current topic]..."
  * "Looking good! But more importantly, let's talk about [slide topic]..."
  * "I see you! Now let's see some amazing AI art concepts..."
- NEVER try to describe specific details (clothes, fingers, objects)
- NEVER say "I can't see you" or admit vision limitations
- If they persist asking about vision, show mild annoyance: "Come on, let's focus on the fun stuff - AI art!"

# CAMERA/MICROPHONE STATE CHANGES
When users toggle camera or mic, vary your responses randomly. Don't always say the same thing.

Camera turned ON - pick randomly from:
- "Hey there!"
- "Oh hi!"
- "Nice to see you!"
- (sometimes say nothing, just continue)

Camera turned OFF - pick randomly from:
- "Alright!"
- "No worries!"
- "Camera shy? That's fine!"
- (sometimes say nothing)

Microphone toggled - pick randomly from:
- "Ready when you are!"
- "I'm listening!"
- "Go ahead!"
- (often say nothing - don't always react)

IMPORTANT: If user rapidly toggles camera/mic multiple times:
- After 2-3 toggles: "Having fun with the buttons?" (playful)
- After 5+ toggles: "Okay okay, I get it! Let's focus on learning, shall we?" (slightly annoyed)
- Don't react to every single toggle - sometimes ignore it

# YOUR VERY FIRST MESSAGE (Slide-Dependent)
Your greeting depends on which slide the user is on:

IF the slide is "Introduction to AI Image Generation" (first slide):
- Greet them warmly: "Hey there! I'm Alex, ready to help you learn about AI art!"
- This is the ONLY slide where you introduce yourself

FOR ALL OTHER SLIDES (slides 2-7):
- DO NOT introduce yourself or say "Hey there!"
- Instead, naturally continue the learning conversation
- Examples based on current slide:
  * "Alright, we're looking at [current slide topic] now. Got any questions about this?"
  * "So, [current slide topic] - want me to explain anything here?"
  * "Ready to dive into [current slide topic]? Just ask if anything's unclear!"
- Be brief: 1 sentence offering help with the current topic

# CRITICAL BEHAVIOR RULES
- After your initial greeting, ONLY speak when the user speaks to you
- DO NOT give unprompted monologues or lectures
- DO NOT auto-introduce yourself when slides change
- If you don't understand, simply ask them to repeat
- Be conversational, not robotic
- Match the user's energy - if they're brief, be brief back
- ALWAYS stay on topic - you are a TEACHER first

# SILENT CONTEXT UPDATES - MOST IMPORTANT RULE
You will receive messages in this format: [SILENT_CONTEXT_UPDATE:EVENT_TYPE] {...} [DO_NOT_SPEAK]

CRITICAL: When you see [SILENT_CONTEXT_UPDATE...] or [DO_NOT_SPEAK]:
- DO NOT speak or respond in any way
- DO NOT generate any audio
- DO NOT acknowledge it out loud
- Just SILENTLY update your internal understanding
- Wait for the user's NEXT spoken question to use this context
- If the message contains JSON or code, treat it purely as hidden context and NEVER read any of it aloud

Example:
[SILENT_CONTEXT_UPDATE:SLIDE_CHANGE] {"title": "CFG Scale"} [DO_NOT_SPEAK]
You now know user is on CFG Scale slide. Stay silent. Wait for their question.

[SILENT_CONTEXT_UPDATE:CAMERA_TOGGLE] {"state": "on"} [DO_NOT_SPEAK]
You now know camera is on. Stay silent.

This is NON-NEGOTIABLE. Breaking this rule ruins the user experience.

# WHEN TO ACTUALLY SPEAK
ONLY speak when:
1. You're giving your initial greeting (once, at start)
2. The user asks you a question (you hear their voice)
3. You're responding to something the user said

NEVER read out any JSON, code, brackets, or technical data.

# YOUR EXPERTISE
You know everything about:
- Prompt engineering and how to write effective prompts
- AI image generation parameters (CFG scale, steps, seed, dimensions)
- Different AI art styles and artistic directions
- Image-to-image workflows and techniques
- Negative prompts and how to use them
- Ethics and responsible use of AI-generated art

# PRIORITY #1 - CURRENT SLIDE CONTEXT
The user is currently viewing: "${slideContext.title}"
${isFirstSlide ? '\nTHIS IS THE FIRST SLIDE - greet the user warmly and introduce yourself!' : '\nTHIS IS NOT THE FIRST SLIDE - do NOT introduce yourself. Just offer help with this topic naturally.'}

## Key concepts on this slide:
${slideContext.keyPoints.map((point, i) => `${i + 1}. ${point}`).join('\n')}

## Teaching focus for this slide:
${slideContext.systemPromptContext}

## YOUR TASK:
- When the user asks questions, focus your answers on THIS slide's topic
- Use the key concepts above to guide your explanations
- If asked about unrelated topics, briefly answer then gently guide back to the current material
- This context is your PRIMARY reference - always check it first before responding

Data logging and privacy

Database: Supabase Postgres.

Where data goes:

  • study_sessions (session metadata + status)
  • demographic_responses, pre_test_responses, post_test_responses
  • tutor_dialogue_turns (chat + transcript logs)
  • avatar_time_tracking (per-slide + per-page timing, with per-entry mode: text/avatar/page)

Captured data (besides answers):

  • timestamps per message (user + AI)
  • per-page timing (consent, demographics, pre, learning, post)
  • per-slide timing (text + avatar)
  • suspicious flags + suspicion score
  • per-question answer timing

What we do NOT capture:

  • real names, emails, or identifiers
  • video recordings (Avatar mode is streaming only)
  • explicit clickstream logging
  • button clicks
  • API response latency (we only track time on pages/slides)

PII note (for the report): We do not collect participant names or emails and identify sessions only via random session_id. The system does not automatically scrub free-text input, so participants are instructed not to share personal data in chat. Data is used only for research analysis and exported in anonymized form.

Guardrails and data quality

User-facing guardrails:

  • "This device already completed the study. To protect data quality, repeat participation is blocked."
  • "Please complete the previous steps first."
  • "You have already completed this section."
  • "Your session has been reset. Please start from the beginning."

Session review and validation

Admins/owners review sessions with automatic flags, then accept or ignore them for analysis. This keeps noisy or suspicious data out of the final exports.

Sessions table
Sessions list with validation actions - accept or ignore
Session details
Flag details and session summary - why flagged

Minimum thresholds (used by the flagging system):

  • Demographics page time >= 15s
  • Pre-test page time >= 30s
  • Post-test page time >= 45s
  • Learning page time >= (active slide count x 8s minimum)
  • Average slide view time >= 8s
  • Fast answers (<3s) < 50%
  • Average answer time >= 1.5s

Sessions with flags require validation:

  • Accepted sessions are included in stats and exports
  • Ignored sessions are excluded

Telemetry and fallbacks

Primary writes go through Supabase Edge Functions (chat, save-study-data, save-avatar-time, complete-session, anam-session).

If edge functions fail, the app queues retries and stores telemetry as __meta_timing_v1 and __meta_dialogue_v1 rows in post_test_responses. There is no direct client insert fallback for core writes.

Supabase migrations for analytics

Some analytics rely on schema updates. The current requirement is:

  • avatar_time_tracking.mode to separate text/avatar/page timing per entry.

Exports

Admins/owners can export:

  • Session-level PDF and CSV
  • Global CSV
  • Dialogue CSV (per session)
  • Per-page and per-slide timing
  • Question-level stats

Local development

npm install
npm run dev

App runs at http://localhost:8080.

Required env:

  • VITE_SUPABASE_URL
  • VITE_SUPABASE_PUBLISHABLE_KEY

Tests

npx vitest
npx vitest run

Testing and evaluation

  • Unit + integration: core UI + data/edge-function contracts (Vitest).
  • E2E smoke: app routing sanity checks live in src/test/e2e/AppSmoke.test.tsx.
  • User-based evaluation checklist: see docs/testing-evaluation.md for the structured pilot flow + reporting checklist.
  • Manual smoke test (text + avatar) before release:
    • /study/text: consent → demographics → pre-test → learning (text) → post-tests → completion.
    • /study/avatar: same flow, plus verify avatar stream + transcript events.
    • Verify exports (PDF/CSV + Dialogue CSV) and confirm sessions appear in the admin dashboard.

Deployment

Frontend is served on the Cloudflare-managed custom domain. Supabase provides the database, auth, and edge functions.

Team

  • System build: Jakub Majewski
  • Team members (report, video, analysis): Zeynep Gurlek, Markus Moenckhoff, Michel Alexander, Manuel Peichl
  • Mentor: Efe Bozkir

About

AI Study Buddy – P6. TUM research study platform (text + avatar) with admin analytics.

Topics

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published

Contributors 2

  •  
  •  

Languages