Releases: Michael-F-Ellis/ficta
Improved CLI help and README
The command-line help now includes the -j option to log requests. The README has been edited and reformatted for greater clarity.
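For example, assuming -j is a simple boolean flag that can be combined with a document file (the exact form is shown by ficta's own help output):
ficta -j mydoc.ait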
Fix cache_prompt in requests
Correctly sends "cache_prompt": true to the llama.cpp server endpoint.
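For illustration, the request body might look like the sketch below. Only "cache_prompt": true comes from this fix; the other fields are assumptions modeled on a typical chat-completion payload using the AI: line parameters shown later in these notes:
{
  "messages": [{"role": "user", "content": "..."}],
  "max_tokens": 100,
  "temperature": 0.7,
  "cache_prompt": true
}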
Alternate endpoint support
ficta now supports alternate endpoints. If you have, say, a llama.cpp server instance running at 192.168.13.14:8080 somewhere on your network, you can launch ficta with the -u flag to make its v1/chat/completions endpoint available. For example,
ficta -u http://192.168.13.14:8080/v1/chat/completions mydoc.ait
Then, in the AI: line at the end of your working document, specify url (the literal word 'url', not the actual URL) instead of the name of an OpenAI model. For example,
AI: url, 100, 0.700, 1
instead of
AI: gpt-4, 100, 0.700, 1
Note that you can freely edit the AI: line to alternate between local and OpenAI models while composing your document.
Multiple completions
Adds support for requesting more than one completion via an integer parameter in the AI: line.
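For example, assuming the final integer in the AI: line is the completion count (as in the examples above, where it is 1), the following requests three completions:
AI: gpt-4, 100, 0.700, 3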
Full Changelog: v1.1.0...v1.2.0
Add line and block comments
Full Changelog: v1.0.1...v1.1.0
v1.0.1
Version v1.0.0
Adds -b option to automatically back up files before rewriting them.
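For example (assuming -b is a boolean flag combined with a document file, as in the examples above):
ficta -b mydoc.ait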
Improves README.
v0.9.0
First release.