Your AI co‑pilot from raw customer input to actionable product plans.
AxWise Flow transforms user interviews and customer feedback into evidence‑linked Product Requirements Documents (PRDs) through a context‑engineered workflow. Every insight, persona, and requirement traces back to verbatim quotes, interviews, speakers, and timestamps.
Most tools dump all your data into an LLM and hope for the best. AxWise Flow actively assembles, compresses, and evolves context across a multi‑agent pipeline:
```
Research Scope → Synthetic Interviews → Analysis → Themes → Patterns → Personas → Insights → PRD
```
Every step maintains complete evidence traceability:
```
PRD Requirement
  ↓ traces to
Insight ("CFOs need 18-month ROI")
  ↓ traces to
Persona + Pattern ("Enterprise CFO archetype")
  ↓ traces to
Theme ("Budget approval concerns")
  ↓ traces to
Verbatim Quote ("Our board asked for...")
  ↓ traces to
Interview + Speaker + Timestamp
```
AxWise Flow implements context engineering through a unified analysis pipeline that maintains evidence traceability at every step:
- Research Chat: Conversational interface extracts business context (idea, customer, problem, industry, location)
- Synthetic Interviews: AI-generated personas and interviews fill gaps in your evidence
- Direct Upload: Support for real interview transcripts (TXT, DOCX, PDF)
A single Analysis Agent performs 6 stages of progressive compression:
- Theme Extraction → Hierarchical themes with verbatim quotes
- Pattern Recognition → Cross-interview patterns (repeating concepts)
- Stakeholder Intelligence → Multi-stakeholder dynamics and conflicts
- Sentiment Analysis → Emotional tone and confidence levels
- Persona Generation → Evidence-linked personas (only self-identified claims)
- Insight Synthesis → Actionable insights with audit trails
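Each stage emits typed output that carries its evidence forward. As a rough illustration only (not the project's actual schema; every field name here is hypothetical), a stage result with an evidence link might look like this in Pydantic:

```python
from pydantic import BaseModel

class EvidenceRef(BaseModel):
    # Hypothetical link back to the source material
    interview_id: str
    speaker_id: str
    timestamp: str            # e.g. "00:04:32"
    verbatim_quote: str

class Theme(BaseModel):
    # Stage 1 output: a hierarchical theme grounded in quotes
    name: str
    parent: str | None = None
    evidence: list[EvidenceRef]

theme = Theme(
    name="Budget approval concerns",
    evidence=[EvidenceRef(
        interview_id="interview-3",
        speaker_id="sarah-chen",
        timestamp="00:04:32",
        verbatim_quote="Our board asked for 18-month payback...",
    )],
)
```

Later stages consume typed objects like these rather than raw text, which is what keeps quote-level lineage intact through compression.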
Every layer preserves the evidence chain:
- Themes → Raw quotes grouped by topic
- Patterns → Cross-theme insights with source links
- Personas → Pattern synthesis into roles with demographics
- Insights → Actionable findings ranked by priority
- PRD → User stories + acceptance criteria linked to insights
The PRD Agent synthesizes requirements with complete evidence chains:
- Every user story links to insights
- Every insight links to themes
- Every theme links to verbatim quotes
- Every quote links to source interviews
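Because each artifact stores a reference to its parent, the chain can be walked programmatically from a requirement down to the raw quote. A minimal sketch, assuming (purely for illustration) that artifacts are plain dictionaries keyed by ID; the real system persists these in PostgreSQL:

```python
# Hypothetical in-memory stores standing in for the database tables
insights = {"i1": {"text": "CFOs need 18-month ROI", "theme_id": "t1"}}
themes = {"t1": {"name": "Budget approval concerns", "quote_id": "q1"}}
quotes = {"q1": {"text": "Our board asked for...", "interview_id": "interview-3"}}

def trace_story(story: dict) -> dict:
    """Walk a user story back to its source interview."""
    insight = insights[story["insight_id"]]
    theme = themes[insight["theme_id"]]
    quote = quotes[theme["quote_id"]]
    return {
        "insight": insight["text"],
        "theme": theme["name"],
        "quote": quote["text"],
        "interview": quote["interview_id"],
    }

print(trace_story({"insight_id": "i1"}))
# {'insight': 'CFOs need 18-month ROI', 'theme': 'Budget approval concerns', ...}
```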
- ✅ Evidence-Linked PRDs: Every requirement traces to customer quotes
- ✅ Synthetic Scenarios: Explore edge cases and gaps in your research
- ✅ Stakeholder Personas: Built only from self-identified claims, not assumptions
- ✅ Audit Trails: Defend decisions with complete evidence chains
- ✅ Automated Theme Extraction: Hierarchical themes across all interviews
- ✅ Pattern Recognition: Surface cross-interview insights automatically
- ✅ Sentiment Analysis: Track emotional signals across transcripts
- ✅ Multi-Stakeholder Analysis: Analyze different perspectives simultaneously
- ✅ REST API First: Interactive docs at `/docs`; integrate without the UI
- ✅ Self-Hosted: PostgreSQL + FastAPI + optional Next.js frontend
- ✅ OSS Mode: No auth required for local development
- ✅ Production Ready: Enable Clerk auth for production deployments
| Feature | Description |
|---|---|
| Evidence Traceability | Every insight links back to interview + speaker + timestamp |
| Context Engineering | LLM-based context extraction + progressive compression pipeline |
| Unified Analysis Agent | Single PydanticAI agent with 6 typed stages (themes → patterns → stakeholders → sentiment → personas → insights) |
| Synthetic Interviews | AI-generated personas and interviews that fill research gaps |
| Evidence Chain | PRD → Insights → Personas → Patterns → Themes → Quotes → Interviews |
| API-First Design | FastAPI backend with interactive /docs |
| Self-Hosted | PostgreSQL + Python 3.11 + Node.js 18+ |
| OSS Mode | Authentication disabled for simplified local setup |
- Python 3.11 (not 3.13; pandas 2.1.4 requires Python 3.11)
- PostgreSQL 12+ (running and accessible)
- Node.js 18+ and npm (for the frontend)
- Gemini API Key (get one at https://aistudio.google.com/app/api_keys)
1. **Clone the repository**

   ```bash
   git clone https://github.com/AxWise-GmbH/axwise-flow.git
   cd axwise-flow
   ```

2. **Set up the PostgreSQL database**

   ```bash
   createdb axwise
   ```

3. **Configure environment variables**

   Edit `backend/.env.oss` and add your Gemini API key:

   ```bash
   # Get your API key from: https://aistudio.google.com/app/api_keys
   GEMINI_API_KEY=your_gemini_api_key_here
   ```

   The default database configuration is:

   ```bash
   DATABASE_URL=postgresql://postgres:postgres@localhost:5432/axwise
   DB_USER=postgres
   DB_PASSWORD=postgres
   ```

4. **Install Python dependencies**

   ```bash
   cd backend
   python3.11 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install --upgrade pip
   pip install -r requirements.txt
   cd ..
   ```

5. **Run the backend**

   ```bash
   scripts/oss/run_backend_oss.sh
   ```

6. **Verify the backend is running**

   ```bash
   # In another terminal
   curl -s http://localhost:8000/health
   ```

   Expected response:

   ```json
   { "status": "healthy", "timestamp": "2025-10-20T..." }
   ```
The frontend provides a web UI for the AxWise Flow platform.
```bash
# From repository root
cd frontend

# Install dependencies
npm install

# Copy OSS environment configuration
cp .env.local.oss .env.local

# Start the development server
npm run dev
```

Open http://localhost:3000 in your browser to access:
- 📊 Unified Dashboard
- 💬 Research Chat
- 🎭 Interview Simulation
- 📤 Upload & Analyze Interviews
- 📈 Visualizations (Personas, Insights, Themes)
- 📜 Activity History
All configuration is managed through environment files - no per-file edits required.
Backend (`backend/.env.oss`):

```bash
OSS_MODE=true
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/axwise
DB_USER=postgres
DB_PASSWORD=postgres
GEMINI_API_KEY=your_gemini_api_key_here
ENABLE_CLERK_VALIDATION=false
```

Frontend (`frontend/.env.local.oss`):

```bash
NEXT_PUBLIC_API_URL=http://localhost:8000
NEXT_PUBLIC_ENABLE_CLERK_AUTH=false
NEXT_PUBLIC_ENABLE_ANALYTICS=false
NEXT_PUBLIC_OSS_MODE=true
NEXT_PUBLIC_DEV_AUTH_TOKEN=dev_test_token_local
```

Notes:
- In OSS mode, authentication is disabled for simplified local development
- The backend accepts development tokens starting with `dev_test_token_`
- The frontend automatically injects auth tokens via shared helpers
- No changes to individual routes/pages are required
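In practice this means a local script only needs a dev token in the Authorization header. A minimal sketch using `requests`:

```python
import requests

# Any token starting with dev_test_token_ is accepted in OSS mode
headers = {"Authorization": "Bearer dev_test_token_local"}

response = requests.get("http://localhost:8000/health", headers=headers, timeout=10)
print(response.json())  # expect {"status": "healthy", ...}
```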
- Backend Documentation
- OSS Scripts Documentation
- API Documentation (when backend is running)
Export your PRD directly to Jira with rich descriptions and evidence context.
- Creates 1 Epic for the PRD, Stories for each user story/scenario, and Tasks for technical requirements
- Rich Jira formatting (ADF): WHAT / WHY / HOW headings; bullet lists for Acceptance Criteria and Dependencies
- Update Existing mode: matches by summary within the project to update descriptions instead of creating duplicates
- Open a PRD and click “Export to Jira”.
- Enter Jira credentials: Jira URL, Email, API Token, Project Key.
- Click “Test Connection” to validate access.
- Set options:
- Include Technical Requirements as Tasks
- Include Acceptance Criteria in Stories
- Update Existing Issues (match by summary)
- Click “Export to Jira”.
Notes:
- This creates real issues in your Jira project. Consider a test epic name first (e.g., “TEST — Your PRD”).
- Update Existing matches by summary and updates the most recent match if multiple exist.
- `POST /api/export/jira/test-connection`
  - Accepts either flat credentials or a wrapped body: `{ "credentials": { ... } }`
- `POST /api/export/jira`
  - Body: `JiraExportRequest`, including `update_existing`
Example: test connection

```bash
curl -X POST "http://localhost:8000/api/export/jira/test-connection" \
  -H "Authorization: Bearer dev_test_token_local" \
  -H "Content-Type: application/json" \
  -d '{
    "credentials": {
      "jira_url": "https://your-domain.atlassian.net",
      "email": "user@example.com",
      "api_token": "your-api-token",
      "project_key": "PROJ"
    }
  }'
```

Example: export PRD
```bash
curl -X POST "http://localhost:8000/api/export/jira" \
  -H "Authorization: Bearer dev_test_token_local" \
  -H "Content-Type: application/json" \
  -d '{
    "result_id": 123,
    "credentials": {
      "jira_url": "https://your-domain.atlassian.net",
      "email": "user@example.com",
      "api_token": "your-api-token",
      "project_key": "PROJ"
    },
    "epic_name": "Customer Research PRD",
    "include_technical": true,
    "include_acceptance_criteria": true,
    "update_existing": true
  }'
```

Expected success response (example):
```json
{
  "success": true,
  "message": "Export successful: 1 created, 3 updated",
  "total_issues_created": 4,
  "stories_created": 3,
  "tasks_created": 5,
  "errors": []
}
```

- 422 validation: ensure the request body matches the examples above; if using the UI, hard refresh the page and retry
- Auth: in OSS mode, use a dev token (e.g., dev_test_token_local)
- Jira types: project must support Epic, Story, and Task/Sub-task
- If export succeeds but fields aren’t visible, your Jira project screen may hide certain fields (e.g., parent)
- API tokens are never persisted; they are used only for the export call
- All Jira requests use HTTPS; credentials are sent via the Authorization header (Basic auth)
AxWise Flow provides three distinct workflows for different use cases:
For: Product teams conducting user research and generating requirements
The core workflow with conversational research, simulation, and analysis:
```
Research Chat → Context Extraction → Stakeholder Questions → Synthetic Interviews → Analysis → PRD
```
1. **Research Chat** (`/api/research/conversation-routines/chat`)

   - User describes their business idea conversationally
   - LLM extracts context: business idea, target customer, problem, industry, location, stage
   - System generates stakeholder-specific questions

2. **Simulation Bridge** (`/api/research/simulation-bridge/simulate`)

   - Generates AI personas for each stakeholder category
   - Simulates realistic interviews based on business context
   - Creates synthetic evidence to fill research gaps

3. **Analysis Agent** (single agent, 6 stages)

   - Stage 1: Theme extraction with verbatim quotes
   - Stage 2: Pattern detection across themes
   - Stage 3: Stakeholder intelligence analysis
   - Stage 4: Sentiment analysis
   - Stage 5: Persona generation from patterns
   - Stage 6: Insight synthesis with evidence links

4. **PRD Generation** (`/api/prd/{result_id}`)

   - Synthesizes user stories and acceptance criteria
   - Every requirement links back to themes → quotes → interviews
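The same four steps can be driven end to end from a script. A rough sketch with `requests`; the payload shapes and response fields used here (`context`, `stakeholders`, `result_id`) are assumptions, not the documented schema:

```python
import requests

BASE = "http://localhost:8000"
HEADERS = {"Authorization": "Bearer dev_test_token_local"}

# Step 1: describe the idea conversationally
chat = requests.post(
    f"{BASE}/api/research/conversation-routines/chat",
    headers=HEADERS,
    json={"message": "I'm building a B2B SaaS pricing tool for enterprise CFOs",
          "session_id": "abc123"},
).json()

# Steps 2-3: simulate interviews for the extracted stakeholders
# (response field names "context" / "stakeholders" are assumptions)
sim = requests.post(
    f"{BASE}/api/research/simulation-bridge/simulate",
    headers=HEADERS,
    json={"business_context": chat.get("context", {}),
          "stakeholders": chat.get("stakeholders", [])},
).json()

# Step 4: generate the PRD from the analysis result ("result_id" is assumed)
prd = requests.post(f"{BASE}/api/prd/{sim['result_id']}", headers=HEADERS).json()
```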
Alternative: Upload → Analysis → PRD

```
Upload Transcripts → Theme Extraction → Pattern Recognition → Persona Formation → PRD
```

- **Upload** (`/api/upload`): Upload real interview transcripts (TXT, DOCX, PDF)
- **Analysis** (`/api/analyze`): The same 6-stage analysis pipeline
- **PRD Generation** (`/api/prd/{result_id}`): Evidence-linked requirements
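For the upload path, transcripts are sent as multipart form data. A minimal sketch; the form field name (`file`) is an assumption:

```python
import requests

headers = {"Authorization": "Bearer dev_test_token_local"}

# Upload a real transcript (TXT, DOCX, or PDF)
with open("interview_01.txt", "rb") as f:
    upload = requests.post(
        "http://localhost:8000/api/upload",
        headers=headers,
        files={"file": f},  # field name assumed
    )
print(upload.json())
```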
For: Teams building downstream applications (CV matching, recommenders, marketing, training data)
A complete pipeline that generates canonical synthetic persona datasets:
```
Business Context → Questionnaire → Simulation → Analysis → Persona Dataset Export
```
API Endpoints:
- `POST /api/axpersona/v1/pipeline/start` - Start dataset generation pipeline
- `GET /api/axpersona/v1/pipeline/status/{job_id}` - Check pipeline progress
- `GET /api/axpersona/v1/pipeline/result/{job_id}` - Get completed dataset
- `POST /api/axpersona/v1/export-persona-dataset` - Export dataset from analysis
What it produces:
- Personas: Synthetic personas with demographics, archetypes, and evidence-linked traits
- Interviews: Simulated interview transcripts for each persona
- Analysis: Full theme/pattern/insight analysis
- Quality Metrics: Interview count, stakeholder coverage, persona confidence scores
Example Request:

```bash
curl -X POST "http://localhost:8000/api/axpersona/v1/pipeline/start" \
  -H "Authorization: Bearer dev_test_token_local" \
  -H "Content-Type: application/json" \
  -d '{
    "business_idea": "AI-powered meal planning app",
    "target_customer": "Busy professionals who want healthy eating",
    "problem": "No time to plan meals and grocery shop",
    "industry": "Health & Wellness",
    "location": "Berlin, Germany"
  }'
```

Frontend Access: Navigate to `/axpersona/scopes` to manage persona datasets through the UI.
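Since dataset generation runs as a background job, a client typically starts the pipeline and then polls for completion. A hedged sketch; the `job_id` and `status` field names and the `"completed"` value are assumptions:

```python
import time
import requests

BASE = "http://localhost:8000/api/axpersona/v1"
HEADERS = {"Authorization": "Bearer dev_test_token_local"}

job = requests.post(f"{BASE}/pipeline/start", headers=HEADERS, json={
    "business_idea": "AI-powered meal planning app",
    "target_customer": "Busy professionals who want healthy eating",
    "problem": "No time to plan meals and grocery shop",
    "industry": "Health & Wellness",
    "location": "Berlin, Germany",
}).json()

# Poll until the pipeline finishes (field names assumed)
while True:
    status = requests.get(f"{BASE}/pipeline/status/{job['job_id']}",
                          headers=HEADERS).json()
    if status.get("status") in ("completed", "failed"):
        break
    time.sleep(5)

dataset = requests.get(f"{BASE}/pipeline/result/{job['job_id']}",
                       headers=HEADERS).json()
```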
For: Sales professionals preparing for customer calls
Generates comprehensive call intelligence from prospect data (CRM exports, meeting notes, or AxPersona output):
```
Prospect Data → Intelligence Agent → Call Guide + Personas + Objections + Coaching
```
API Endpoints:
- `POST /api/precall/v1/generate` - Generate call intelligence from prospect data
- `POST /api/precall/v1/coach` - Get real-time coaching responses
- `POST /api/precall/v1/generate-persona-image` - Generate persona avatar
- `POST /api/precall/v1/search-local-news` - Search location-specific news for rapport building
What it produces:
- Key Insights: Top 5 actionable insights for the call
- Call Guide: Opening line, discovery questions, value proposition, closing strategy
- Stakeholder Personas: Detailed profiles with communication tips
- Objection Handling: Potential objections with prepared rebuttals
- Visualizations: AI-generated mind map and org chart
Example Request:
```bash
curl -X POST "http://localhost:8000/api/precall/v1/generate" \
  -H "Authorization: Bearer dev_test_token_local" \
  -H "Content-Type: application/json" \
  -d '{
    "prospect_data": {
      "company_name": "Acme Corp",
      "industry": "Manufacturing",
      "stakeholders": [
        {"name": "John Smith", "role": "CFO", "concerns": ["ROI", "budget approval"]}
      ],
      "pain_points": ["Manual processes", "Lack of visibility"]
    }
  }'
```

Coaching Chat: After generating intelligence, use `/api/precall/v1/coach` to get real-time guidance based on the prospect context.
Frontend Access: Navigate to `/precall` to use the Precall Intelligence dashboard.
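Once intelligence has been generated, the coaching endpoint can be called in a loop during call prep. A minimal sketch; the request body shape is an assumption:

```python
import requests

resp = requests.post(
    "http://localhost:8000/api/precall/v1/coach",
    headers={"Authorization": "Bearer dev_test_token_local"},
    json={
        # Body shape assumed: a question plus the previously generated context
        "question": "How should I open the call with the CFO?",
        "prospect_context": {"company_name": "Acme Corp"},
    },
    timeout=30,
)
print(resp.json())
```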
| Principle | Implementation |
|---|---|
| Evidence Traceability | Every insight stores source_interview_id, speaker_id, timestamp, verbatim_quote |
| Context Engineering | LLM-based context extraction in conversation routines; progressive compression in analysis |
| Unified Analysis Agent | One PydanticAI agent with 6 typed sub-agents for all analysis tasks |
| Synthetic Evidence | Simulation Bridge generates personas + interviews that maintain evidence lineage |
| API-First Design | All features accessible via REST API at /docs |
| OSS Mode | ENABLE_CLERK_VALIDATION=false disables auth for local development |
Scenario: User wants to understand "How enterprise CFOs think about pricing"

```
# Step 1: Start research chat
POST /api/research/conversation-routines/chat
{
"message": "I'm building a B2B SaaS pricing tool for enterprise CFOs",
"session_id": "abc123"
}
# → LLM extracts: industry="fintech", target_customer="enterprise CFOs", location="US"
# Step 2: Generate stakeholder questions
# → System creates questions for: CFOs, Finance Teams, Procurement, End Users
# Step 3: Simulate interviews
POST /api/research/simulation-bridge/simulate
{
"business_context": {...},
"stakeholders": [...]
}
# → Generates 12 AI personas (3 per stakeholder category)
# → Simulates 12 realistic interviews
# Step 4: Analyze (automatic after simulation)
# → Analysis agent extracts:
# - Themes: "Budget approval process", "ROI requirements", "Vendor evaluation"
# - Patterns: "18-month ROI threshold", "Board approval needed for >$100k"
# - Personas: "Risk-Averse CFO", "Growth-Focused CFO"
# - Insights: "CFOs need ROI calculators in first demo"
# Step 5: Generate PRD
POST /api/prd/{result_id}
# → Creates user stories:
# "As a CFO, I want to see 18-month ROI projections..."
# Evidence: Interview #3, Speaker "Sarah Chen", 00:04:32
#    Quote: "Our board asked for 18-month payback..."
```

```
axwise-flow-oss/
├── backend/ # FastAPI backend
│ ├── api/ # API routes and endpoints
│ │ ├── research/ # Research chat + simulation bridge
│ │ ├── axpersona/ # AxPersona dataset creation pipeline
│ │ ├── precall/ # Precall intelligence generation
│ │ ├── upload/ # File upload endpoints
│ │ ├── analyze/ # Analysis endpoints
│ │ └── prd/ # PRD generation endpoints
│ ├── services/ # Business logic
│ │ ├── processing/ # Theme, pattern, persona, PRD services
│ │ └── llm/ # LLM integration (Google Gemini)
│ ├── models/ # Data models
│ ├── infrastructure/ # Configuration and utilities
│ └── .env.oss # OSS environment configuration
├── frontend/ # Next.js frontend
│ ├── app/ # Next.js app directory
│ │ ├── axpersona/ # AxPersona scopes UI
│ │ └── precall/ # Precall intelligence dashboard
│ ├── components/ # React components
│ └── lib/ # Utilities and helpers
└── scripts/
    └── oss/                    # OSS-specific scripts
        └── run_backend_oss.sh
```
- AI-Powered Analysis: Leverage Google Gemini for intelligent user research analysis
- Persona Generation: Automatically generate user personas from interview data
- Multi-Stakeholder Analysis: Analyze perspectives from different stakeholder groups
- Evidence Linking: Connect insights to source material with traceability
- AxPersona Dataset Creation: Generate canonical synthetic persona datasets for downstream applications
- Precall Intelligence: AI-powered call preparation with coaching and objection handling
- Export Capabilities: Export results in various formats (JSON, PRD, persona datasets)
- FastAPI: Modern Python web framework
- SQLAlchemy: SQL toolkit and ORM
- PostgreSQL: Relational database
- Google Gemini: LLM for AI capabilities
- Pydantic: Data validation
- Next.js 14: React framework
- TypeScript: Type-safe JavaScript
- Tailwind CSS: Utility-first CSS framework
- Clerk: Authentication (disabled in OSS mode)
OSS mode disables authentication and uses simplified configuration suitable for local development and self-hosting.
Key differences from production mode:
- ✅ No authentication required
- ✅ Simplified CORS settings
- ✅ Local database configuration
- ✅ Development-friendly defaults
See backend/.env.oss for all available configuration options.
Essential variables:
- `OSS_MODE=true` - Enable OSS mode
- `DATABASE_URL` - PostgreSQL connection string
- `GEMINI_API_KEY` - Google Gemini API key
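A quick sanity check before starting the backend; this is a convenience snippet, not part of the codebase:

```python
import os

# Fail fast if the essential OSS-mode variables are missing
required = ["OSS_MODE", "DATABASE_URL", "GEMINI_API_KEY"]
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("Environment looks good.")
```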
Backend tests:

```bash
cd backend
pytest
```

Frontend tests:

```bash
cd frontend
npm test
```

We welcome contributions! Please see our contributing guidelines.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the terms specified in the LICENSE file.
Backend won't start:

- Check PostgreSQL is running: `pg_isready`
- Verify the database exists: `psql -l | grep axwise`
- Check Python dependencies: `pip install -r backend/requirements.txt`

Database connection errors:

- Verify `DATABASE_URL` in `backend/.env.oss`
- Check PostgreSQL is running: `pg_isready`
- Check PostgreSQL credentials (default: postgres/postgres)
- Ensure the database exists: `createdb axwise`

Gemini API errors:

- Verify `GEMINI_API_KEY` is set in `backend/.env.oss`
- Check the API key is valid at Google AI Studio
- 📧 Email: support@axwise.de or vitalijs@axwise.de
- 🐛 Issues: GitHub Issues
- 📖 Documentation: Wiki
Built with ❤️ by the AxWise team and contributors.
Note: This is the open-source version of AxWise Flow. For the hosted version with additional features, visit axwise.de.





