Local GPT plugin for Obsidian

Demo (no speedup): MacBook Pro 13, M1, 16GB, Ollama, orca-mini.

Local GPT assistance for maximum privacy and offline access.
The plugin allows you to open a context menu on selected text to pick an AI-assistant's action.
The most casual AI-assistant for Obsidian.

Also works with images

Demo (no speedup): MacBook Pro 13, M1, 16GB, Ollama, bakllava.

It can also use context from links, backlinks, and even PDF files (Enhanced Actions).

How to use (Ollama)

1. Install Embedding model:

  • For English: ollama pull nomic-embed-text (fastest)
  • For other languages: ollama pull bge-m3 (slower, but more accurate)
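Once an embedding model is pulled, you can sanity-check that it responds. A minimal sketch, assuming Ollama is running locally on its default port 11434:

```shell
# Request an embedding from the local Ollama server;
# the response should be JSON containing an "embedding" array.
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "test sentence"}'
```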

2. Select the embedding model in the plugin's settings, and prefer the largest model with the largest context window that your machine can run.

Default actions

  • Continue writing
  • Summarize text
  • Fix spelling and grammar
  • Find action items in text
  • General help (just use selected text as a prompt for any purpose)
  • New System Prompt to create actions for your needs

You can also add your own actions, share your best ones, or pick one up from the community.

Supported AI Providers

  • Ollama
  • OpenAI compatible server (also OpenAI)
Settings

Installation

1. Install Plugin

Obsidian plugin store (recommended)

This plugin is available in the Obsidian community plugin store: https://obsidian.md/plugins?id=local-gpt

BRAT

You can also install this plugin via BRAT: pfrankov/obsidian-local-gpt

2. Install LLM

Ollama (recommended)

  1. Install Ollama.
  2. Install Gemma 2 (the default) with ollama pull gemma2, or any preferred model from the library.
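After pulling, a quick way to confirm that the model was downloaded and is usable:

```shell
ollama list                 # the pulled model should appear in this list
ollama run gemma2 "Hello"   # one-off prompt to verify the model loads and answers
```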

Additionally, if you want to enable streaming completions with Ollama, set the environment variable OLLAMA_ORIGINS to *:

  • For macOS, run launchctl setenv OLLAMA_ORIGINS "*".
  • For Linux and Windows, check the docs.
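For reference, on Linux distributions where Ollama runs as a systemd service, the variable is typically added through a service override (a sketch; check the Ollama docs for your exact setup):

```shell
# Open an override file for the ollama service (assumes systemd-managed Ollama)
sudo systemctl edit ollama.service

# In the editor, add:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"

# Then apply the change:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```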

OpenAI compatible server

There are several options for running a local OpenAI-compatible server:
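Whichever server you pick, you can check that it speaks the OpenAI-style chat API before pointing the plugin at it. A sketch assuming a server listening on localhost:8080 (the port and model name here are placeholders for your own setup):

```shell
# Send a minimal chat request; an OpenAI-compatible server
# should answer with a JSON "choices" array.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local-model",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```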

Configure Obsidian hotkey

  1. Open Obsidian Settings
  2. Go to Hotkeys
  3. Filter "Local" and you should see "Local GPT: Show context menu"
  4. Click the + icon and press a hotkey (e.g. ⌘ + M)

"Use fallback" option

It is also possible to specify a fallback to handle requests — this allows you to use larger models when you are online and smaller ones when offline.

Example video: Kapture.2024-01-11.at.22.16.52.mp4

Using with OpenAI

Since you can point the plugin at any OpenAI-like server, it is also possible to use OpenAI's own servers.
Despite the ease of configuration, I do not recommend this method, since the main purpose of the plugin is to work with private LLMs.

  1. Select OpenAI compatible server in Selected AI provider
  2. Set OpenAI compatible server URL to https://api.openai.com
  3. Retrieve and paste your API key from the API keys page
  4. Click the "refresh" button and select the model that suits your needs (e.g. gpt-3.5-turbo)
Example screenshot
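Before configuring the plugin, you can verify that the API key is valid by listing the models available to your account (this assumes the key is stored in the OPENAI_API_KEY environment variable):

```shell
# A valid key returns a JSON "data" array of model objects;
# an invalid key returns an "error" object instead.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```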

My other Obsidian plugins

  • Colored Tags, which colorizes tags in distinguishable colors.

Inspired by
