Based on [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp), and built with De…

- **Completely free and open-source**: Purely native. Bloat-free & uses pretty much **0% of your CPU**.
- **Chat Mode**: Seamlessly switch between context-processing mode and chat mode.
- **Customization**: All commands are fully customizable.
- **Markdown/Math Rendering**: Beautiful and elegant.

## ✨ Features
Invoke Writing Tools with no text selected to enter quick chat mode.

1. Configure your profile.

   Copy `profile.json.in` to `profile.json`. Fill in the path of the quantized model file and other options (see [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp)):

   ```js
   {
       //...
       "chatllm": {
           "default": [
               "-m",
               "path of the quantized model file",
               "-ngl", "all",      // for GPU acceleration
               "+detect-thoughts"  // detect thoughts
           ]
       },
       //...
   }
   ```
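Note that the profile allows JavaScript-style `//` comments, so it is not strictly valid JSON. As an illustration only (not part of Writing Tools), a small Python sketch of how such a file could be parsed, stripping comments outside string literals before handing the text to a regular JSON parser:

```python
import json

def strip_comments(text: str) -> str:
    """Remove // line comments, but leave // inside string literals alone."""
    out = []
    for line in text.splitlines():
        in_str = False
        i = 0
        while i < len(line):
            c = line[i]
            if c == '"' and (i == 0 or line[i - 1] != '\\'):
                in_str = not in_str          # toggle on unescaped quotes
            elif c == '/' and not in_str and line[i:i + 2] == '//':
                line = line[:i]              # truncate at the comment
                break
            i += 1
        out.append(line)
    return "\n".join(out)

# A commented profile fragment modeled on the snippet above.
profile_text = '''
{
    "chatllm": {
        "default": [
            "-m",
            "path of the quantized model file", // model path
            "-ngl", "all" // GPU acceleration
        ]
    }
}
'''

args = json.loads(strip_comments(profile_text))["chatllm"]["default"]
```

The string scan keeps URLs such as `https://…` inside values intact, which a naive regex over `//` would destroy.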
_Actions_ are represented to users as a collection of buttons. Each action is de…

```js
{
    //...
    "llm": "another_one",   // optional
    "ai_prefix": "...",     // optional
    "ai_suffix": "...",     // optional
    "feels_lucky": true,    // optional (true or false)
    "web_app": "...",       // optional
    "action": "show"        // optional
}
```
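Writing Tools itself is written in Delphi; purely as an illustration, a Python sketch of how a loader might fill in the optional fields of an action. The defaults for `llm` and `feels_lucky` follow the field descriptions; the other defaults are assumptions made for this sketch:

```python
def normalize_action(action: dict) -> dict:
    """Merge an action definition with defaults for the optional fields."""
    defaults = {
        "llm": "default",      # "default" is selected when omitted
        "ai_prefix": "",       # assumption: empty means no steering
        "ai_suffix": "",       # assumption: empty means never abort
        "feels_lucky": False,  # defaults to false
        "web_app": None,       # assumption: no extra post-processing
        "action": "show",      # assumption: show the output in a box
    }
    merged = {**defaults, **action}
    if not isinstance(merged["feels_lucky"], bool):
        raise TypeError("feels_lucky must be true or false")
    return merged

a = normalize_action({"llm": "another_one", "feels_lucky": True})
```

Merging with `{**defaults, **action}` keeps unknown keys from the user's file, so extra fields pass through untouched.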
* `llm` is the LLM selected to serve this action (when omitted, "default" is selected).
* `ai_prefix` is used for [generation steering](https://github.com/foldl/chatllm.cpp/blob/master/docs/fun.md#generation-steering).
* `ai_suffix` is used to abort generation: once this suffix is found in the LLM's output, generation is aborted.
* `feels_lucky` is a flag to accept the first round of the LLM's output. When set to `false` (the default), users can click the _Redo_ button to try again.
* `web_app` is a special field providing additional functionality on the LLM's output (when `feels_lucky` is `false`). Possible values:

    - `diff`: compare the LLM's suggestion with the original text (useful for _proofreading_-like actions).

* `action` is the post-action to handle the output of the LLM. Possible values:

    - `show`: show the output in a box.
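The `ai_suffix` abort rule described above can be sketched as a streaming check (illustrative Python, not the actual Delphi implementation): accumulate tokens as they arrive and stop as soon as the suffix appears, keeping only the text before it.

```python
def generate_with_suffix_abort(tokens, ai_suffix: str) -> str:
    """Collect streamed tokens; abort once ai_suffix shows up in the output."""
    out = ""
    for tok in tokens:
        out += tok
        if ai_suffix and ai_suffix in out:
            # Drop the suffix itself and anything generated after it.
            return out.split(ai_suffix, 1)[0]
    return out

# "<END>" is a made-up suffix for this example.
text = generate_with_suffix_abort(["Hello", " world", "<END>", " ignored"], "<END>")
```

Checking the accumulated buffer rather than individual tokens matters, because the suffix may be split across token boundaries.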
Add a shortcut to `WritingTools.exe` in the Windows Start-Up folder.

## 👨‍💻 To compile the application yourself:

Precondition: build `libchatllm` or get the `*.dll` files from the releases.

### Delphi

1. Install [Delphi Community Edition](https://www.embarcadero.com/products/delphi/starter/free-download/);
1. Build this project (Target: Win64);
1. Copy all `*.dll` files to the output directory (such as _Win64/Debug_).