
Format code (#521)
logancyang authored Aug 21, 2024
1 parent 0191f66 commit 3211939
Showing 48 changed files with 1,069 additions and 1,426 deletions.
2 changes: 1 addition & 1 deletion .github/FUNDING.yml
@@ -10,4 +10,4 @@ liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: ['https://www.buymeacoffee.com/logancyang']
custom: ["https://www.buymeacoffee.com/logancyang"]
9 changes: 4 additions & 5 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -1,10 +1,9 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

title: ""
labels: ""
assignees: ""
---

- [ ] Screenshot of note + Copilot chat pane + dev console added **(required)**
@@ -16,7 +15,7 @@ A clear and concise description of what the bug is. Clear steps to reproduce the
A clear and concise description of what you expected to happen.

**Screenshots**
Add screenshots to help explain your problem. Please turn on debug mode in Copilot settings, turn off other plugins to leave only Copilot dev messages as necessary.
Add screenshots to help explain your problem. Please turn on debug mode in Copilot settings, turn off other plugins to leave only Copilot dev messages as necessary.

**Additional context**
Add any other context about the problem here.
7 changes: 3 additions & 4 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -1,10 +1,9 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''

title: ""
labels: ""
assignees: ""
---

**Is your feature request related to a problem? Please describe.**
56 changes: 31 additions & 25 deletions README.md
@@ -1,6 +1,6 @@
# 🔍 Copilot for Obsidian
![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/logancyang/obsidian-copilot?style=for-the-badge&sort=semver) ![Obsidian Downloads](https://img.shields.io/badge/dynamic/json?logo=obsidian&color=%23483699&label=downloads&query=%24%5B%22copilot%22%5D.downloads&url=https%3A%2F%2Fraw.githubusercontent.com%2Fobsidianmd%2Fobsidian-releases%2Fmaster%2Fcommunity-plugin-stats.json&style=for-the-badge)

![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/logancyang/obsidian-copilot?style=for-the-badge&sort=semver) ![Obsidian Downloads](https://img.shields.io/badge/dynamic/json?logo=obsidian&color=%23483699&label=downloads&query=%24%5B%22copilot%22%5D.downloads&url=https%3A%2F%2Fraw.githubusercontent.com%2Fobsidianmd%2Fobsidian-releases%2Fmaster%2Fcommunity-plugin-stats.json&style=for-the-badge)

Copilot for Obsidian is a **free** and **open-source** ChatGPT interface right inside Obsidian. It has a minimalistic design and is straightforward to use.

@@ -41,6 +41,7 @@ Since Claude 3 models are announced today (3/4/2024), I managed to add them in t
**LM Studio** and **Ollama** are the 2 best choices for running local models on your own machine. Please check out the super simple setup guide [here](local_copilot.md). Don't forget to flex your creativity in custom prompts using local models!

## 🛠️ Features

- Chat with ChatGPT right inside Obsidian in the Copilot Chat window.
- No repetitive login. Use your own API key (stored locally).
- No monthly fee. Pay only for what you use.
@@ -54,10 +55,11 @@ Since Claude 3 models are announced today (3/4/2024), I managed to add them in t
- All QA modes are powered by retrieval augmentation with a **local vector store**. No sending your data to a cloud-based vector search service!
- Easy commands to **simplify, emojify, summarize, translate, change tone, fix grammar, rewrite into a tweet/thread, count tokens** and more.
- Set your own parameters like LLM temperature, max tokens, conversation context based on your need (**pls be mindful of the API cost**).
- **User custom prompt**! You can *add, apply, edit, delete* your custom prompts, persisted in your local Obsidian environment! Be creative with your own prompt templates, sky is the limit!
- **User custom prompt**! You can _add, apply, edit, delete_ your custom prompts, persisted in your local Obsidian environment! Be creative with your own prompt templates, sky is the limit!
- **Local model** support for **offline chat** using LM Studio and Ollama.

## 🎬 Demos

#### 🤗 New to Copilot? Quick Guide for Beginners:

<a href="https://www.youtube.com/watch?v=jRCDAg2sck8" target="_blank"><img src="./images/thumbnail.png" width="700" /></a>
@@ -106,11 +108,13 @@ Copilot for Obsidian is now available in **Obsidian Community Plugin**!
Now you can see the chat icon in your left-side ribbon; clicking it will open the chat panel on the right! Don't forget to check out the Copilot commands available in the command palette!

#### ⛓️ Manual Installation

- Go to the latest release
- Download `main.js`, `manifest.json`, `styles.css` and put them under `.obsidian/plugins/obsidian-copilot/` in your vault
- Open your Obsidian settings > Community plugins, and turn on `Copilot`.

## 🔔 Note

- The chat history is not saved by default. Please use "**Save as Note**" to save it. The note will have the title `Chat-Year_Month_Day-Hour_Minute_Second`; you can rename it as needed.
- "**New Chat**" clears all previous chat history. Again, please use "**Save as Note**" if you would like to save the chat.
- "**Use Long Note as Context**" creates a local vector index for the active long note so that you can chat with a note longer than the model's context window! To start the QA, please switch from "Chat" to "QA" in the Mode Selection dropdown.
@@ -123,61 +127,63 @@ Now you can see the chat icon in your leftside ribbon, clicking on it will open
<details>
<summary>"You do not have access to this model"</summary>

- You need to have access to some of the models like GPT-4 or Azure ones to use them. If you don't, sign up on their waitlist!
- A common misunderstanding I see is that some think they have access to GPT-4 API when they get ChatGPT Plus subscription. It was not always true. *You need to have access to GPT-4 API to use the GPT-4 model in this plugin*. Please check if you can successfully use your model in the OpenAI playground first https://platform.openai.com/playground?mode=chat. If not, you can apply for GPT-4 API access here https://openai.com/waitlist/gpt-4-api. Once you have access to the API, you can use GPT-4 with this plugin without the ChatGPT Plus subscription!
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/3#issuecomment-1544583676
- You need to have access to some of the models like GPT-4 or Azure ones to use them. If you don't, sign up on their waitlist!
- A common misunderstanding I see is that some think they have access to GPT-4 API when they get ChatGPT Plus subscription. It was not always true. _You need to have access to GPT-4 API to use the GPT-4 model in this plugin_. Please check if you can successfully use your model in the OpenAI playground first https://platform.openai.com/playground?mode=chat. If not, you can apply for GPT-4 API access here https://openai.com/waitlist/gpt-4-api. Once you have access to the API, you can use GPT-4 with this plugin without the ChatGPT Plus subscription!
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/3#issuecomment-1544583676
</details>
<details>
<summary>It's not using my note as context</summary>

- Please don't forget to switch to "**QA**" in the Mode Selection dropdown in order to start the QA. Copilot does not have your note as context in "Chat" mode.
<img src="./images/faq-mode-switch.png" alt="Settings" width="500">
- In fact, you don't have to click the button on the right before starting the QA. Switching to QA mode in the dropdown directly is enough for Copilot to read the note as context. The button on the right is only for when you'd like to manually rebuild the index for the active note, like, when you'd like to switch context to another note, or you think the current index is corrupted because you switched the embedding provider, etc.
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/51
- Please don't forget to switch to "**QA**" in the Mode Selection dropdown in order to start the QA. Copilot does not have your note as context in "Chat" mode.
<img src="./images/faq-mode-switch.png" alt="Settings" width="500">
- In fact, you don't have to click the button on the right before starting the QA. Switching to QA mode in the dropdown directly is enough for Copilot to read the note as context. The button on the right is only for when you'd like to manually rebuild the index for the active note, like, when you'd like to switch context to another note, or you think the current index is corrupted because you switched the embedding provider, etc.
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/51
</details>
<details>
<summary>Unresponsive QA when using Huggingface as the Embedding Provider</summary>

- Huggingface Inference API is free to use. It can give errors such as 503 or 504 frequently at times because their server has issues. If it's an issue for you, please consider using OpenAI or CohereAI as the embedding provider. Just keep in mind that OpenAI costs more, especially with very long notes as context.
- Huggingface Inference API is free to use. It can give errors such as 503 or 504 frequently at times because their server has issues. If it's an issue for you, please consider using OpenAI or CohereAI as the embedding provider. Just keep in mind that OpenAI costs more, especially with very long notes as context.
</details>
<details>
<summary>"insufficient_quota"</summary>

- It might be because you haven't set up payment for your OpenAI account, or you exceeded your max monthly limit. OpenAI has a cap on how much you can use their API, usually $120 for individual users.
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/11
- It might be because you haven't set up payment for your OpenAI account, or you exceeded your max monthly limit. OpenAI has a cap on how much you can use their API, usually $120 for individual users.
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/11
</details>
<details>
<summary>"context_length_exceeded"</summary>

- GPT-3.5 has a 4096 context token limit, GPT-4 has 8K (there is a 32K one available to the public soon per OpenAI). **So if you set a big token limit in your Copilot setting, you could get this error.** Note that the prompts behind the scenes for Copilot commands can also take up tokens, so please limit your message length and max tokens to avoid this error. (For QA with Unlimited Context, use the "QA" mode in the dropdown! Requires Copilot v2.1.0.)
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/1#issuecomment-1542934569
- GPT-3.5 has a 4096 context token limit, GPT-4 has 8K (there is a 32K one available to the public soon per OpenAI). **So if you set a big token limit in your Copilot setting, you could get this error.** Note that the prompts behind the scenes for Copilot commands can also take up tokens, so please limit your message length and max tokens to avoid this error. (For QA with Unlimited Context, use the "QA" mode in the dropdown! Requires Copilot v2.1.0.)
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/1#issuecomment-1542934569
</details>
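The budget arithmetic behind this error can be sketched in a few lines (the token counts below are illustrative assumptions, not real tokenizer output):

```javascript
// Rough sketch: a request fails when prompt tokens plus the reserved
// completion budget exceed the model's context window.
// Limits and counts here are illustrative, not exact tokenizer output.
function fitsContext(promptTokens, maxTokens, contextLimit = 4096) {
  return promptTokens + maxTokens <= contextLimit;
}

console.log(fitsContext(3500, 1000)); // false: 3500 + 1000 = 4500 > 4096
console.log(fitsContext(3500, 500)); // true: 4000 <= 4096
```

This is why lowering the max tokens setting (or shortening the message) resolves the error without changing models.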
<details>
<summary>Azure issue</summary>

- It's a bit tricky to get all the Azure credentials right on the first try. My suggestion is to use `curl` to test in your terminal first, make sure you get a response back, and then set the correct params in Copilot settings. Example:
```
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=VERSION \
-H "Content-Type: application/json" \
-H "api-key: YOUR_API_KEY" \
-d "{
\"prompt\": \"Once upon a time\",
\"max_tokens\": 5
}"
```
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/98
- It's a bit tricky to get all the Azure credentials right on the first try. My suggestion is to use `curl` to test in your terminal first, make sure you get a response back, and then set the correct params in Copilot settings. Example:
```
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=VERSION \
-H "Content-Type: application/json" \
-H "api-key: YOUR_API_KEY" \
-d "{
\"prompt\": \"Once upon a time\",
\"max_tokens\": 5
}"
```
- Reference issue: https://github.com/logancyang/obsidian-copilot/issues/98
</details>
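For readers scripting the same check, the endpoint the `curl` command targets can be assembled like this (the resource name, deployment name, and API version are placeholders you must replace with your own values):

```javascript
// Build the Azure OpenAI completions endpoint used in the curl example above.
// All three inputs are placeholders, not real values.
function azureCompletionsUrl(resource, deployment, apiVersion) {
  return (
    `https://${resource}.openai.azure.com` +
    `/openai/deployments/${deployment}/completions` +
    `?api-version=${apiVersion}`
  );
}

console.log(azureCompletionsUrl("YOUR_RESOURCE_NAME", "YOUR_DEPLOYMENT_NAME", "VERSION"));
```

If the URL your script builds doesn't match the one that worked in `curl`, the mismatched segment is usually the misconfigured setting.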

When opening an issue, please include relevant console logs. You can go to Copilot's settings and turn on "Debug mode" at the bottom for more console messages!

## 📝 Planned features (feedback welcome)

- New modes
- **Chat mode** (originally Conversation mode): You can now provide multiple notes at once as context in conversations, for LLMs with an extended context window.
- **QA mode**: You can **index any folder** and perform question and answer sessions using a **local** search index and Retrieval-Augmented Generation (RAG) system.
- Support **embedded PDFs** as context
- Interact with a **powerful AI agent** that knows your vault who can search, filter and use your notes as context to work with. Explore, brainstorm and research like never before!

## 🙏 Thank You

Did you know that [even the timer on Alexa needs internet access](https://twitter.com/logancyang/status/1720929870635802738)? In this era of corporate-dominated internet, I still believe there's room for powerful tech that's focused on privacy. A great **local** AI agent in Obsidian is the ultimate form of this plugin. If you share my vision, please consider [sponsoring this project](https://github.com/sponsors/logancyang) or buying me coffees!

<a href="https://www.buymeacoffee.com/logancyang" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 40px !important;width: 150px !important;" ></a>
20 changes: 10 additions & 10 deletions __mocks__/obsidian.js
@@ -1,26 +1,26 @@
// __mocks__/obsidian.js
/* eslint-disable no-undef */
import yaml from 'js-yaml';
import yaml from "js-yaml";

module.exports = {
Vault: jest.fn().mockImplementation(() => {
return {
getMarkdownFiles: jest.fn().mockImplementation(() => {
// Return an array of mock markdown file objects
return [
{ path: 'test/test2/note1.md' },
{ path: 'test/note2.md' },
{ path: 'test2/note3.md' },
{ path: 'note4.md' },
{ path: "test/test2/note1.md" },
{ path: "test/note2.md" },
{ path: "test2/note3.md" },
{ path: "note4.md" },
];
}),
cachedRead: jest.fn().mockImplementation((file) => {
// Simulate reading file contents. You can adjust the content as needed for your tests.
const fileContents = {
'test/test2/note1.md': '---\ntags: [Tag1, tag2]\n---\nContent of note1',
'test/note2.md': '---\ntags: [tag2, tag3]\n---\nContent of note2',
'test2/note3.md': 'something else ---\ntags: [false_tag]\n---\nContent of note3',
'note4.md': '---\ntags: [tag1, Tag4]\n---\nContent of note4',
"test/test2/note1.md": "---\ntags: [Tag1, tag2]\n---\nContent of note1",
"test/note2.md": "---\ntags: [tag2, tag3]\n---\nContent of note2",
"test2/note3.md": "something else ---\ntags: [false_tag]\n---\nContent of note3",
"note4.md": "---\ntags: [tag1, Tag4]\n---\nContent of note4",
};
return Promise.resolve(fileContents[file.path]);
}),
@@ -32,4 +32,4 @@ module.exports = {
parseYaml: jest.fn().mockImplementation((content) => {
return yaml.load(content);
}),
};
};
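The mocked `cachedRead` above returns raw frontmatter strings; a test consuming this mock might extract the tags with a sketch like the following (a deliberately simplified parser for the mock's fixed format, not the plugin's actual implementation):

```javascript
// Minimal frontmatter tag extractor matching the mock file contents above.
// Simplified sketch for the mock's fixed "tags: [...]" format only.
function extractTags(content) {
  // Frontmatter must start at the very beginning of the file, so
  // "something else ---..." (note3 in the mock) yields no tags.
  const match = content.match(/^---\ntags: \[([^\]]*)\]\n---/);
  if (!match) return [];
  return match[1].split(",").map((tag) => tag.trim());
}

console.log(extractTags("---\ntags: [Tag1, tag2]\n---\nContent of note1")); // ["Tag1", "tag2"]
console.log(extractTags("something else ---\ntags: [false_tag]\n---\nContent of note3")); // []
```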
10 changes: 5 additions & 5 deletions esbuild.config.mjs
@@ -4,14 +4,13 @@ import svgPlugin from "esbuild-plugin-svg";
import process from "process";
import wasmPlugin from "./wasmPlugin.mjs";

const banner =
`/*
const banner = `/*
THIS IS A GENERATED/BUNDLED FILE BY ESBUILD
if you want to view the source, please visit the github repository of this plugin
*/
`;

const prod = (process.argv[2] === "production");
const prod = process.argv[2] === "production";

const context = await esbuild.context({
banner: {
@@ -33,7 +32,8 @@ const context = await esbuild.context({
"@lezer/common",
"@lezer/highlight",
"@lezer/lr",
...builtins],
...builtins,
],
format: "cjs",
target: "es2018",
logLevel: "info",
@@ -48,4 +48,4 @@ if (prod) {
process.exit(0);
} else {
await context.watch();
}
}
24 changes: 12 additions & 12 deletions jest.config.js
@@ -1,17 +1,17 @@
module.exports = {
preset: 'ts-jest',
testEnvironment: 'jsdom',
roots: ['<rootDir>/src', '<rootDir>/tests'],
preset: "ts-jest",
testEnvironment: "jsdom",
roots: ["<rootDir>/src", "<rootDir>/tests"],
transform: {
'^.+\\.(js|jsx|ts|tsx)$': 'ts-jest',
"^.+\\.(js|jsx|ts|tsx)$": "ts-jest",
},
moduleNameMapper: {
'\\.(css|less|scss|sass)$': 'identity-obj-proxy',
'^@/(.*)$': '<rootDir>/src/$1',
'^obsidian$': '<rootDir>/__mocks__/obsidian.js'
"\\.(css|less|scss|sass)$": "identity-obj-proxy",
"^@/(.*)$": "<rootDir>/src/$1",
"^obsidian$": "<rootDir>/__mocks__/obsidian.js",
},
testRegex: '(/tests/.*|(\\.|/)(test|spec))\\.(jsx?|tsx?)$',
moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
testPathIgnorePatterns: ['/node_modules/'],
setupFiles: ['<rootDir>/jest.setup.js'],
};
testRegex: "(/tests/.*|(\\.|/)(test|spec))\\.(jsx?|tsx?)$",
moduleFileExtensions: ["ts", "tsx", "js", "jsx", "json", "node"],
testPathIgnorePatterns: ["/node_modules/"],
setupFiles: ["<rootDir>/jest.setup.js"],
};
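The `moduleNameMapper` entries in this config follow Jest's regex-to-path convention; the `@/` alias rewrite can be sanity-checked directly (the example request paths are illustrative, not files from the repo):

```javascript
// Replicate the "^@/(.*)$" -> "<rootDir>/src/$1" mapping from the Jest
// config above. Requests without the "@/" prefix pass through this rule.
function mapModule(request) {
  return request.replace(/^@\/(.*)$/, "<rootDir>/src/$1");
}

console.log(mapModule("@/utils/chat")); // "<rootDir>/src/utils/chat"
console.log(mapModule("obsidian")); // unchanged by this rule (handled by "^obsidian$")
```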
2 changes: 1 addition & 1 deletion jest.setup.js
@@ -1 +1 @@
import 'web-streams-polyfill/dist/polyfill.min.js';
import "web-streams-polyfill/dist/polyfill.min.js";
2 changes: 2 additions & 0 deletions local_copilot.md
@@ -5,6 +5,7 @@
[LM Studio](https://lmstudio.ai/) has the best UI for running local models; it supports Apple Silicon, Windows, and Linux (in beta). After you download the correct version of LM Studio to your machine, the first thing is to download a model. Find something small to start with, such as Mistral 7B, and work your way up if you have a beefy machine.

A rule of thumb to determine how large a model you can run:

- If you are on an Apple Silicon Mac, look at your RAM
- If you are on a Windows PC with a GPU, look at your VRAM.

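The rule of thumb above can be made concrete with a rough memory estimate (the bytes-per-parameter figures are approximations for common quantization levels, not exact numbers):

```javascript
// Rough memory estimate for running a local model:
// parameters (in billions) x bytes per parameter ~= GB needed.
// Bytes-per-parameter figures are approximate rules of thumb.
function estimateModelGB(paramsBillions, bytesPerParam) {
  return paramsBillions * bytesPerParam;
}

// A 7B model at ~4-bit quantization (~0.5 bytes/param) needs roughly
// 3.5 GB plus overhead, so it fits on an 8 GB RAM/VRAM machine.
console.log(estimateModelGB(7, 0.5)); // 3.5
// The same model unquantized at fp16 (2 bytes/param) needs about 14 GB.
console.log(estimateModelGB(7, 2)); // 14
```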
@@ -77,6 +78,7 @@ ollama serve
```

## Ollama for Local Embeddings

Ollama has added support for local embeddings for RAG since v0.1.26! It's super easy to set up; just run

```
26 changes: 13 additions & 13 deletions manifest.json
@@ -1,14 +1,14 @@
{
"id": "copilot",
"name": "Copilot",
"version": "2.5.4",
"minAppVersion": "0.15.0",
"description": "A ChatGPT Copilot in Obsidian.",
"author": "Logan Yang",
"authorUrl": "https://twitter.com/logancyang",
"fundingUrl": {
"Buy Me a Coffee": "https://www.buymeacoffee.com/logancyang",
"GitHub Sponsor": "https://github.com/sponsors/logancyang"
},
"isDesktopOnly": true
}
"id": "copilot",
"name": "Copilot",
"version": "2.5.4",
"minAppVersion": "0.15.0",
"description": "A ChatGPT Copilot in Obsidian.",
"author": "Logan Yang",
"authorUrl": "https://twitter.com/logancyang",
"fundingUrl": {
"Buy Me a Coffee": "https://www.buymeacoffee.com/logancyang",
"GitHub Sponsor": "https://github.com/sponsors/logancyang"
},
"isDesktopOnly": true
}
4 changes: 2 additions & 2 deletions package.json
@@ -8,8 +8,8 @@
"build": "tsc -noEmit -skipLibCheck && node esbuild.config.mjs production",
"lint": "eslint . --ext .js,.jsx,.ts,.tsx",
"lint:fix": "eslint . --ext .js,.jsx,.ts,.tsx --fix",
"format": "prettier --write .",
"format:check": "prettier --check .",
"format": "prettier --write 'src/**/*.{js,ts,tsx,md}'",
"format:check": "prettier --check 'src/**/*.{js,ts,tsx,md}'",
"version": "node version-bump.mjs && git add manifest.json versions.json",
"test": "jest",
"prepare": "husky"