A Python tool for processing and enriching your Obsidian notes, providing automatic summarization, task extraction, and semantic knowledge base features.
Automatic Capability Detection (Latest)
- Ollama models now automatically detect tool-calling and reasoning capabilities via programmatic inspection (see the sketch after this list)
- No more hardcoded model lists: capabilities are detected by examining model templates through Ollama's API
- Reasoning mode support automatically detected for models like Qwen3, DeepSeek-R1, QwQ, GPT-OSS, and others
- Tool calling support automatically detected for compatible models
- Falls back to known model lists if API detection fails
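For illustration, the template-inspection approach can be sketched in a few lines of Python, assuming Ollama's standard `/api/show` endpoint; the marker strings below are illustrative assumptions, not the tool's exact detection logic.

```python
# Sketch: infer tool-calling and reasoning support from a model's chat
# template via Ollama's /api/show endpoint. Marker strings are examples.
import requests

def detect_capabilities(model: str, base_url: str = "http://localhost:11434") -> dict:
    resp = requests.post(f"{base_url}/api/show", json={"model": model}, timeout=10)
    resp.raise_for_status()
    template = resp.json().get("template", "")
    return {
        # Tool-capable templates reference the .Tools template variable
        "tools": ".Tools" in template,
        # Reasoning templates typically emit <think> blocks
        "reasoning": "<think>" in template or "Thinking" in template,
    }

print(detect_capabilities("llama3.2"))  # e.g. {'tools': True, 'reasoning': False}
```

If the request fails (Ollama not running, or an older version), the tool falls back to its known-model lists, as noted above.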
Note Processing
- Generate daily and weekly summaries
- Extract and structure meeting notes
- Identify action items and add them to Apple Reminders
- Extract learnings with auto-generated tags
Knowledge Base
- Build semantic connections between notes
- Support for OpenAI or local Ollama embeddings
- Auto-generate wiki-style links
- Create backlinks automatically
- Visualize note connections in graph format
Flexible AI Backend
- OpenAI API support for text generation and embeddings
- Ollama support for local AI processing (both text generation and embeddings)
- Configurable base URLs for OpenAI-compatible APIs
You can use the clipboard to process meeting transcripts directly:
- Copy the full meeting transcript to your clipboard
- Run the command:
python main.py notes --from-clipboard
This will:
- Take the transcript from your clipboard
- Process it with the meeting notes prompt
- Intelligently infer a descriptive topic name from the content
- Save a formatted summary to your configured meeting notes directory as `YYYY-MM-DD_inferred_topic.md`
You can also specify a custom prompt file:
python main.py notes --from-clipboard --prompt-file /path/to/custom_prompt.md
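Conceptually, the flow is simple. Here is a rough sketch; `summarize` and `infer_topic` are hypothetical stand-ins for the tool's LLM calls (the real implementation lives in main.py and may differ), and `pyperclip` is just one way to read the clipboard.

```python
# Sketch of the clipboard flow: transcript in, dated meeting note out.
from datetime import date
from pathlib import Path

import pyperclip  # pip install pyperclip

def summarize(text: str) -> str:
    """Hypothetical stand-in for the meeting-notes LLM call."""
    return text[:500]  # placeholder

def infer_topic(text: str) -> str:
    """Hypothetical stand-in for the topic-inference LLM call."""
    return "meeting"  # placeholder

def process_clipboard(output_dir: str) -> Path:
    transcript = pyperclip.paste()      # 1. take the transcript from the clipboard
    summary = summarize(transcript)     # 2. run the meeting-notes prompt
    topic = infer_topic(transcript)     # 3. infer a descriptive topic
    # 4. save as YYYY-MM-DD_inferred_topic.md in the configured directory
    filename = f"{date.today():%Y-%m-%d}_{topic.lower().replace(' ', '_')}.md"
    path = Path(output_dir).expanduser() / filename
    path.write_text(summary)
    return path
```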
- Python 3.x
- OpenAI API key (if using OpenAI embeddings)
- Apple Reminders (optional, for task integration)
Clone the repository:

git clone https://github.com/ivo-toby/gpt-notes-to-tasks.git
cd gpt-notes-to-tasks

Create and activate a virtual environment:

python -m venv .venv
source .venv/bin/activate  # On Unix/macOS
# or
.venv\Scripts\activate  # On Windows

Install dependencies:

pip install -r requirements.txt

Set up configuration:

cp config.template.yaml config.yaml
# Edit config.yaml with your settings
For complete local AI processing without external API calls:
Install Ollama: follow the instructions at ollama.ai

Pull required models:

# For text generation (summaries, meeting notes, etc.)
ollama pull llama3.2

# For embeddings (knowledge base)
ollama pull mxbai-embed-large

Configure for Ollama:

# LLM Inference Configuration
inference:
  provider: "ollama"
  ollama:
    base_url: "http://localhost:11434"
    model: "llama3.2" # or qwen3, deepseek-r1, mistral, etc.
    temperature: 0.7
    num_ctx: 8192
    num_thread: 4
    timeout: 120

    # Reasoning mode (automatically detected for compatible models)
    reasoning:
      enabled: false # Enable for reasoning-capable models
      save_thinking: false # Include thinking process in outputs
      log_thinking: false # Log thinking content for debugging

# Embedding settings
embeddings:
  model_type: "ollama"
  model_name: "mxbai-embed-large"
  ollama_config:
    base_url: "http://localhost:11434"

Start Ollama service:
ollama serve
This setup provides complete privacy and offline functionality for all AI operations.
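Before the first run, you can verify that the service is reachable and both models are pulled. A minimal check against Ollama's `/api/tags` endpoint, which lists locally available models:

```python
# Sketch: confirm Ollama is running and the required models are pulled.
import requests

REQUIRED = {"llama3.2", "mxbai-embed-large"}

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
present = {m["name"].split(":")[0] for m in resp.json().get("models", [])}

missing = REQUIRED - present
print("All required models present" if not missing else f"Missing: {missing}")
```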
Some Ollama models support reasoning mode, which enables chain-of-thought processing:
Supported models (automatically detected):
- Qwen3, Qwen2.5
- DeepSeek-R1, DeepSeek-V3
- QwQ, Smallthinker
- GPT-OSS, Magistral
- And any other model with reasoning template markers
Enable reasoning mode:

inference:
  ollama:
    reasoning:
      enabled: true # Enable reasoning mode
      save_thinking: true # Include thinking process in saved outputs
      log_thinking: false # Enable for debugging thinking content
How it works:
- Models with reasoning support automatically use extended thinking
- Thinking tokens can be saved or suppressed based on configuration (see the sketch after this list)
- Tool calling works seamlessly with reasoning mode
- Capabilities are auto-detected via Ollama's API
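To make "saved or suppressed" concrete: reasoning models typically wrap their chain of thought in markers, so output handling reduces to separating the two parts. A sketch assuming `<think>…</think>` markers (as emitted by e.g. DeepSeek-R1 and Qwen3; the tool's actual parsing may differ):

```python
# Sketch: split a reasoning model's raw output into thinking and answer,
# assuming <think>...</think> markers.
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_thinking(raw: str) -> tuple[str, str]:
    match = THINK_RE.search(raw)
    thinking = match.group(1).strip() if match else ""
    answer = THINK_RE.sub("", raw).strip()
    return thinking, answer

raw = "<think>User wants one line. Keep it short.</think>Met with the team; shipped v1.2."
thinking, answer = split_thinking(raw)
print(answer)  # the answer is always kept
# `thinking` is included in saved outputs only when save_thinking is true,
# and logged only when log_thinking is true.
```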
# Process today's notes
python main.py notes
# Process notes for a specific date
python main.py notes --date 2024-02-16
# Process yesterday's notes
python main.py notes --date yesterday
# Generate weekly summary
python main.py notes --weekly
# Process meeting notes only
python main.py notes --meetingnotes
# Process learnings only
python main.py notes --process-learnings
Options:
- `--date`: Specify date (YYYY-MM-DD, "today", or "yesterday")
- `--dry-run`: Preview changes without making modifications
- `--skip-reminders`: Don't create Apple Reminders tasks
- `--replace-summary`: Replace existing summaries instead of updating
# Initialize knowledge base
python main.py kb --reindex
# Update with recent changes
python main.py kb --update
# Search knowledge base
python main.py kb --query "your search query"
# Search by tag
python main.py kb --find-by-tag "tag-name"
# Search by date
python main.py kb --find-by-date "2024-02-16"
# Analyze single note relationships
python main.py kb --analyze-links "path/to/note.md"
# Show note connections
python main.py kb --show-connections "path/to/note.md"
# Analyze and auto-link all notes
python main.py kb --analyze-all
# Analyze only recently modified notes
python main.py kb --analyze-updated
# View note's semantic structure
python main.py kb --note-structure "path/to/note.md"
Options:
- `--limit`: Maximum number of results (default: 5)
- `--dry-run`: Preview changes without modifications
- `--note-type`: Filter by note type (daily, weekly, meeting, learning, note)
- `--graph`: Display connections in graph format (if configured)
- `--auto-link`: Automatically add suggested links without prompting
Start your day by processing yesterday's notes:
python main.py notes --date yesterday
Throughout the day, take notes in your daily notes file.

End of day processing:

# Process today's notes and create summaries
python main.py notes

# Update knowledge base with new content
python main.py kb --update

# Analyze new connections
python main.py kb --analyze-updated --auto-link
Generate weekly summary:

python main.py notes --weekly

Process learnings:

python main.py notes --process-learnings

Update knowledge connections:

python main.py kb --analyze-all --auto-link
Regular updates (daily or after changes):

python main.py kb --update
python main.py kb --analyze-updated

Full reindex (monthly or after major changes):

rm -rf ~/Documents/notes/.vector_store
python main.py kb --reindex
python main.py kb --analyze-all --auto-link

Finding related content:

# Search by content
python main.py kb --query "project planning" --limit 10

# Find notes with specific tag
python main.py kb --find-by-tag "project" --note-type meeting

# Show connections for a note
python main.py kb --show-connections "path/to/note.md" --graph
Analyze note structure to understand how it's being processed:
python main.py kb --note-structure "path/to/note.md"
Copy `config.template.yaml` to `config.yaml` and configure your settings:
# Essential paths
notes_base_dir: "~/Documents/notes"
daily_notes_file: "~/Documents/notes/daily/daily.md"
daily_output_dir: "~/Documents/notes/daily"
weekly_output_dir: "~/Documents/notes/weekly"
meeting_notes_output_dir: "~/Documents/notes/meetingnotes"
learnings_file: "~/Documents/notes/learnings/learnings.md"
learnings_output_dir: "~/Documents/notes/learnings"

# LLM Inference Configuration
inference:
  provider: "openai" # Options: "openai" | "ollama"

  # OpenAI-specific settings
  openai:
    api_key: "your-api-key"
    model: "gpt-4o"
    base_url: "https://api.openai.com/v1" # Optional: for OpenAI-compatible APIs

  # Ollama-specific settings
  ollama:
    base_url: "http://localhost:11434"
    model: "llama3.2" # or qwen3, deepseek-r1, mistral, etc.
    temperature: 0.7
    num_ctx: 8192
    num_thread: 4
    timeout: 120

    # Reasoning mode (auto-detected for compatible models)
    reasoning:
      enabled: false # Enable for reasoning-capable models
      save_thinking: false # Include thinking process in outputs
      log_thinking: false # Log thinking content for debugging

# Embedding configuration (recommended local setup)
embeddings:
  model_type: "ollama"
  model_name: "mxbai-embed-large"
  ollama_config:
    base_url: "http://localhost:11434"
    num_ctx: 512
    num_thread: 4

vector_store:
  path: "~/Documents/notes/.vector_store"
  similarity_threshold: 0.60 # For normalized embeddings

  # HNSW index settings - adjust if you get "ef or M is too small" errors.
  # If you're processing a large number of notes or seeing HNSW errors,
  # try increasing these values.
  hnsw_config:
    ef_construction: 800 # Higher = better index quality, slower build (default: 400)
    ef_search: 400 # Higher = more accurate search, slower queries (default: 200)
    m: 256 # Higher = better accuracy, more memory usage (default: 128)

search:
  thresholds:
    default: 0.60
    tag_search: 0.50
    date_search: 0.50
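For intuition about what the HNSW knobs do, here is how they would map onto hnswlib, a common HNSW implementation. This is an assumption for illustration only; the tool's actual vector store backend may differ.

```python
# Sketch: ef_construction / ef_search / m correspond to the standard HNSW
# parameters of an index library such as hnswlib.
import hnswlib
import numpy as np

dim = 1024  # mxbai-embed-large produces 1024-dimensional vectors
index = hnswlib.Index(space="cosine", dim=dim)

# ef_construction and M are fixed at build time: higher values build a
# denser, more accurate graph at the cost of build time and memory.
index.init_index(max_elements=10_000, ef_construction=800, M=256)
index.add_items(np.random.rand(1_000, dim).astype(np.float32))

# ef_search is a query-time knob: higher values explore more of the graph.
index.set_ef(400)
labels, distances = index.knn_query(np.random.rand(1, dim).astype(np.float32), k=5)
```

Raising these values is the standard remedy for "ef or M is too small" errors on large note collections.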
Create basic structure:
mkdir -p ~/Documents/notes/{daily,weekly,meetingnotes,learnings}
Configure `.gitignore`:

.vector_store/
.obsidian/
.trash/
*.excalidraw.md
.DS_Store
- Write notes throughout the day
- Process daily notes:
python main.py notes
- Update knowledge base:
python main.py kb --update
- Generate weekly summary:
python main.py notes --weekly
- Review and process learnings:
python main.py notes --process-learnings
Two primary options are available:
- OpenAI (text-embedding-3-small)
  - Normalized embeddings (0-1 range)
  - Typical scores: 0.35-0.45
  - Requires API key and internet connection
- Ollama (mxbai-embed-large)
  - Local processing
  - Better handling of technical content
  - Typical scores: 0.60-0.65
  - Recommended for code snippets and documentation
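The typical scores above are cosine similarities between embedding vectors. You can probe your own content against the configured thresholds with a few lines, using Ollama's `/api/embeddings` endpoint (the same one referenced in the troubleshooting section):

```python
# Sketch: embed two snippets with Ollama and compare their cosine similarity.
import requests

def embed(text: str, model: str = "mxbai-embed-large") -> list[float]:
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": model, "prompt": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

a = embed("weekly planning meeting for the API migration")
b = embed("notes from the migration planning session")
print(f"similarity: {cosine(a, b):.2f}")  # compare against your threshold (e.g. 0.60)
```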
Monthly vector store optimization:

rm -rf ~/Documents/notes/.vector_store
python main.py kb --reindex
Monitor similarity scores:
- OpenAI: Use thresholds 0.60-0.85
- Ollama: Use thresholds 0.50-0.70
Enable debug logging:

# In config.yaml
logging:
  level: "DEBUG"
Common fixes:

# Clear vector store
rm -rf ~/Documents/notes/.vector_store

# Rebuild index
python main.py kb --reindex

# Check Ollama service
curl http://localhost:11434/api/embeddings
This tool was created using:
- Code Llama
- GPT-4o
- Claude 3.5 Sonnet
- Gemini 2 Flash
- Anthropic MCP
- Cursor
- aider.chat