42 commits
1c838b8  chore: add project breakdown doc (ldc861117, Oct 17, 2025)
b9976cf  fix frontend package miss-align problem. (ldc861117, Oct 17, 2025)
a9623cb  fix: Gracefully handle VLM client initialization failure (ldc861117, Oct 17, 2025)
1201df4  some fixes but not working. (ldc861117, Oct 17, 2025)
078c93b  correct AGENTS.md file name. (ldc861117, Oct 17, 2025)
35dc1be  feat: flexible LLM configuration with optional API keys (ldc861117, Oct 17, 2025)
67721e8  Merge pull request #1 from ldc861117/feature/flexible-llm-config (ldc861117, Oct 18, 2025)
9bdafee  enhance: Improve start-dev.sh with better validation and user experience (ldc861117, Oct 18, 2025)
7d1bd7f  Merge pull request #2 from ldc861117/feature/flexible-llm-config (ldc861117, Oct 18, 2025)
c17a447  update gitignore (ldc861117, Oct 18, 2025)
9918977  update gitignore (ldc861117, Oct 18, 2025)
270cdd6  previous config (ldc861117, Oct 18, 2025)
eef6d45  feat: Stop tracking config.yaml and clean up .gitignore (ldc861117, Oct 18, 2025)
63d566d  fix(frontend): Allow empty API keys for local providers (Ollama/LocalAI) (ldc861117, Oct 18, 2025)
e1b9eb9  ignore local env (ldc861117, Oct 19, 2025)
e73dbc7  Merge pull request #3 from ldc861117/feature/flexible-llm-config (ldc861117, Oct 19, 2025)
97d8411  feat: Add .env file support for environment variable management (ldc861117, Oct 19, 2025)
8f76229  Merge remote-tracking branch 'origin/main' into feature/flexible-llm-… (ldc861117, Oct 19, 2025)
92d18be  docs: Add comprehensive configuration directory README (ldc861117, Oct 19, 2025)
89ad70b  Merge pull request #4 from ldc861117/feature/flexible-llm-config (ldc861117, Oct 19, 2025)
56cf5f0  Docs/add GitHub trending (#92) (KashiwaByte101, Oct 17, 2025)
4c48f8d  Docs/add privacy protection (#96) (KashiwaByte101, Oct 17, 2025)
0e86af8  feat: add cross-platform Python build script and update build process… (benhack20, Oct 17, 2025)
322abf8  refact: style (shanmohan-maker, Oct 16, 2025)
49c06b3  chore: Using EditorConfig to Standardize Code (#95) (FoundDream, Oct 17, 2025)
21b5149  feat: copy func and edit change (shanmohan-maker, Oct 17, 2025)
378e8a1  docs: Add comprehensive merge summary from upstream (ldc861117, Oct 19, 2025)
eff4b9a  docs: Add comprehensive verification checklist for merge (ldc861117, Oct 19, 2025)
8d9f041  Merge pull request #5 from ldc861117/merge/upstream-features (ldc861117, Oct 19, 2025)
2ab2d6d  docs: Add comprehensive upstream investigation report (ldc861117, Oct 19, 2025)
787ef06  feat(env): unify backend host/port via .env (WEB_HOST/WEB_PORT) and p… (ldc861117, Oct 19, 2025)
dda09c6  docs(issues): add record for unified env web config (WEB_HOST/WEB_PORT) (ldc861117, Oct 19, 2025)
bd56b38  Merge pull request #6 from ldc861117/feature/unified-env-web-config (ldc861117, Oct 19, 2025)
00f877b  Backend BadRequestError: fixed. (ldc861117, Oct 19, 2025)
5bc2fd7  feat(db/migrations): add Alembic integration and initial DB migration… (cto-new[bot], Oct 20, 2025)
c8d6b22  Merge pull request #7 from ldc861117/feat-db-alembic-initial-core-mig… (ldc861117, Oct 20, 2025)
6a27bc6  feat(observability): add /healthz and /metrics endpoints with Prometh… (cto-new[bot], Oct 20, 2025)
0f0b854  Merge pull request #8 from ldc861117/feat-fastapi-healthz-metrics (cto-new[bot], Oct 20, 2025)
6ee0d28  feat: Recover and apply changes after failed rebase (ldc861117, Oct 17, 2025)
c51e918  Merge upstream 44 commits: preserve Ollama LLM customizations (factorydroid, Oct 30, 2025)
b6ce732  Merge latest main: resolve conflicts while preserving Ollama support (factorydroid, Oct 30, 2025)
f0e172e  docs: Add comprehensive configuration management analysis and optimiz… (factorydroid, Oct 30, 2025)
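Commit 6a27bc6 above adds `/healthz` and `/metrics` endpoints. The actual implementation uses FastAPI and the Prometheus client library; the following is only a dependency-free sketch of what two such endpoints typically expose (the metric names `app_uptime_seconds` and `app_requests_total` are illustrative, not taken from the repository):

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

START_TIME = time.time()
REQUEST_COUNT = {"/healthz": 0, "/metrics": 0}

class ObservabilityHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Liveness probe: cheap, no dependencies, always-fast JSON reply.
            REQUEST_COUNT["/healthz"] += 1
            self._reply(200, "application/json", json.dumps({"status": "ok"}).encode())
        elif self.path == "/metrics":
            REQUEST_COUNT["/metrics"] += 1
            # Prometheus text exposition format: "# TYPE" hints plus one sample per line.
            lines = [
                "# TYPE app_uptime_seconds gauge",
                f"app_uptime_seconds {time.time() - START_TIME:.3f}",
                "# TYPE app_requests_total counter",
            ]
            for path, count in REQUEST_COUNT.items():
                lines.append(f'app_requests_total{{path="{path}"}} {count}')
            self._reply(200, "text/plain; version=0.0.4", "\n".join(lines).encode())
        else:
            self._reply(404, "text/plain", b"not found")

    def _reply(self, code, ctype, body):
        self.send_response(code)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and exercise both endpoints once.
server = HTTPServer(("127.0.0.1", 0), ObservabilityHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
health = urlopen(f"http://127.0.0.1:{port}/healthz").read().decode()
metrics = urlopen(f"http://127.0.0.1:{port}/metrics").read().decode()
server.shutdown()
```

In the FastAPI version these would be two route handlers, with `prometheus_client.generate_latest()` producing the `/metrics` body in the same text format.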
121 changes: 121 additions & 0 deletions .env.example
@@ -0,0 +1,121 @@
# MineContext Environment Configuration
# Copy this file to .env and fill in your actual values
# DO NOT commit .env file to version control!

# ============================================
# LLM Configuration (Vision Language Model)
# ============================================

# Provider: openai, doubao, ollama, localai, llamacpp, custom
LLM_PROVIDER=ollama

# Model ID - the specific model you want to use
# Examples:
# Ollama: qwen2.5:14b, llama3.1, mistral, gemma2:9b
# OpenAI: gpt-4o, gpt-4-turbo, gpt-3.5-turbo
# Doubao: doubao-seed-1-6-flash-250828
LLM_MODEL=qwen2.5:14b

# Base URL for the LLM API
# Examples:
# Ollama: http://localhost:11434/v1
# OpenAI: https://api.openai.com/v1
# Doubao: https://ark.cn-beijing.volces.com/api/v3
# LocalAI: http://localhost:8080/v1
LLM_BASE_URL=http://localhost:11434/v1

# API Key (leave empty for local providers like Ollama)
# For OpenAI: sk-...
# For Doubao: your-doubao-api-key
# For Ollama/LocalAI/LlamaCPP: leave empty or omit
LLM_API_KEY=

# ============================================
# Embedding Model Configuration
# ============================================

# Provider for embedding (can be different from LLM provider)
EMBEDDING_PROVIDER=ollama

# Embedding Model ID
# Examples:
# Ollama: nomic-embed-text, mxbai-embed-large, bge-m3
# OpenAI: text-embedding-3-large, text-embedding-3-small
# Doubao: doubao-embedding
EMBEDDING_MODEL=nomic-embed-text

# Base URL for embedding API (if different from LLM)
# Leave empty to use the same as LLM_BASE_URL
EMBEDDING_BASE_URL=http://localhost:11434/v1

# API Key for embedding (if different from LLM)
# Leave empty to use the same as LLM_API_KEY
EMBEDDING_API_KEY=

# ============================================
# Optional: Advanced Settings
# ============================================

# Context path - where to store data
# CONTEXT_PATH=.

# API authentication key (for production deployments)
# CONTEXT_API_KEY=your-secure-api-key

# ============================================
# Web Server (Backend) Configuration
# Used by both backend (FastAPI/Uvicorn) and frontend (via Vite env)
# ============================================

# Host to bind the backend web server
WEB_HOST=127.0.0.1

# Port to bind the backend web server (frontend will default to this port in dev)
WEB_PORT=8000

# Note: Frontend reads Vite-prefixed vars. electron-vite is configured to map WEB_HOST/WEB_PORT
# into VITE_WEB_HOST/VITE_WEB_PORT automatically. Do NOT duplicate values.

# ============================================
# Example Configurations
# ============================================

# --- Ollama (Local, No API Key) ---
# LLM_PROVIDER=ollama
# LLM_MODEL=qwen2.5:14b
# LLM_BASE_URL=http://localhost:11434/v1
# LLM_API_KEY=
# EMBEDDING_PROVIDER=ollama
# EMBEDDING_MODEL=nomic-embed-text
# EMBEDDING_BASE_URL=http://localhost:11434/v1
# EMBEDDING_API_KEY=

# --- OpenAI ---
# LLM_PROVIDER=openai
# LLM_MODEL=gpt-4o
# LLM_BASE_URL=https://api.openai.com/v1
# LLM_API_KEY=sk-your-openai-api-key-here
# EMBEDDING_PROVIDER=openai
# EMBEDDING_MODEL=text-embedding-3-large
# EMBEDDING_BASE_URL=https://api.openai.com/v1
# EMBEDDING_API_KEY=sk-your-openai-api-key-here

# --- Mixed: OpenAI for LLM, Ollama for Embedding ---
# LLM_PROVIDER=openai
# LLM_MODEL=gpt-4o
# LLM_BASE_URL=https://api.openai.com/v1
# LLM_API_KEY=sk-your-openai-api-key-here
# EMBEDDING_PROVIDER=ollama
# EMBEDDING_MODEL=nomic-embed-text
# EMBEDDING_BASE_URL=http://localhost:11434/v1
# EMBEDDING_API_KEY=

# --- Doubao (Volcengine) ---
# LLM_PROVIDER=doubao
# LLM_MODEL=doubao-seed-1-6-flash-250828
# LLM_BASE_URL=https://ark.cn-beijing.volces.com/api/v3
# LLM_API_KEY=your-doubao-api-key
# EMBEDDING_PROVIDER=doubao
# EMBEDDING_MODEL=doubao-embedding
# EMBEDDING_BASE_URL=https://ark.cn-beijing.volces.com/api/v3
# EMBEDDING_API_KEY=your-doubao-api-key
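The fallback rules in the comments above (embedding settings default to the LLM values when left empty; local providers may omit the API key) can be sketched as a small stdlib-only loader. The function name `load_llm_config` and the exact defaults are assumptions for illustration, not MineContext's actual API:

```python
import os

# Providers that may run without an API key, per the comments in .env.example.
LOCAL_PROVIDERS = {"ollama", "localai", "llamacpp"}

def load_llm_config(env=os.environ):
    """Resolve LLM and embedding settings with empty-value fallbacks."""
    provider = env.get("LLM_PROVIDER", "ollama")
    cfg = {
        "provider": provider,
        "model": env.get("LLM_MODEL", "qwen2.5:14b"),
        "base_url": env.get("LLM_BASE_URL", "http://localhost:11434/v1"),
        "api_key": env.get("LLM_API_KEY", ""),
        # Embedding settings fall back to the LLM values when left empty.
        "embedding_provider": env.get("EMBEDDING_PROVIDER") or provider,
        "embedding_model": env.get("EMBEDDING_MODEL", "nomic-embed-text"),
        "embedding_base_url": env.get("EMBEDDING_BASE_URL") or env.get("LLM_BASE_URL", ""),
        "embedding_api_key": env.get("EMBEDDING_API_KEY") or env.get("LLM_API_KEY", ""),
    }
    # Remote providers must supply a key; local ones may leave it empty.
    if provider not in LOCAL_PROVIDERS and not cfg["api_key"]:
        raise ValueError(f"LLM_API_KEY is required for provider {provider!r}")
    return cfg

# Ollama example: no API key, embedding URL inherited from LLM_BASE_URL.
cfg = load_llm_config({"LLM_PROVIDER": "ollama", "LLM_BASE_URL": "http://localhost:11434/v1"})
```

In the real application these variables would be read from `os.environ` after the `.env` file is loaded (commit 97d8411 adds that support).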
14 changes: 10 additions & 4 deletions .gitignore
Expand Up @@ -5,16 +5,22 @@ __pycache__
/screenshots
/logs
/persist
/debug
/dist/*
/venv/*
/test_storage
/config/runtime
/test_script
/config/user*
/config/user_setting.yaml
/config/config.yaml
.DS_Store

.venv/
uv.lock
uv.toml
.idea

# Environment variables
.env
.env.local

⚙️ work_phases/
🗄️ archive/
🧠 knowledge_base/
4 changes: 4 additions & 0 deletions AGENTS.md
@@ -0,0 +1,4 @@

**Project Mission:** To build the premier open-source, edge-native context engine for the next generation of AI companions. Our mission is to create the foundational layer that securely captures, processes, and understands a user's digital life directly on their device. By providing this rich, real-time context with a privacy-first guarantee, MineContext serves as the essential sensory system that empowers AI models to deliver truly proactive, personalized, and intelligent assistance, transforming the human-AI partnership from reactive commands to intuitive collaboration.

***