Releases: Michael-F-Ellis/ficta

Improved CLI help and README

07 Dec 16:54

The command-line help now documents the -j option for logging requests. The README has been edited and reformatted for clarity.

Fix cache_prompt in requests

06 Dec 18:32

Correctly sends "cache_prompt": true to the llama.cpp server endpoint.
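The fix amounts to including the flag in the JSON body of each request, so llama.cpp reuses its prompt cache between completions. A minimal sketch of such a payload; the fields other than cache_prompt are typical chat-completions fields and are assumptions here, not necessarily the exact body ficta sends:

```python
import json

def build_payload(prompt, max_tokens, temperature, n):
    """Build a chat-completions request body that asks the
    llama.cpp server to reuse its prompt cache."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
        "n": n,
        # The field this release fixes: it must serialize as the
        # JSON literal true for llama.cpp to cache the prompt.
        "cache_prompt": True,
    }

body = json.dumps(build_payload("Once upon a time", 100, 0.7, 1))
```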

Alternate endpoint support

01 Dec 23:00

ficta now supports alternate endpoints. If you have, say, a llama.cpp server instance running at 192.168.13.14:8080 somewhere on your network, you can launch ficta with the -u flag to point it at that server's v1/chat/completions endpoint. For example,

ficta -u http://192.168.13.14:8080/v1/chat/completions mydoc.ait

Then, in the AI line at the end of your working document, specify url (the literal word 'url', not the actual URL) instead of the name of an OpenAI model. For example,

AI: url, 100, 0.700, 1

instead of

AI: gpt-4, 100, 0.700, 1

Note that you can freely edit the AI line to alternate between local and OpenAI LLM models while composing your document.
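The AI line's field order can be read off the examples above: a model name (or the literal word 'url'), a maximum token count, a temperature, and a completion count. A small sketch of parsing such a line under that assumption; ficta's actual parser may differ:

```python
def parse_ai_line(line):
    """Parse an 'AI:' control line of the form shown above:
    model (or the literal word 'url'), max tokens, temperature,
    and number of completions."""
    if not line.startswith("AI:"):
        raise ValueError("not an AI line")
    model, tokens, temp, n = [f.strip() for f in line[3:].split(",")]
    return model, int(tokens), float(temp), int(n)
```

Parsing "AI: url, 100, 0.700, 1" this way yields the model keyword "url" plus the three numeric settings, which is what lets a single line switch between local and OpenAI models.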

Multiple completions

08 Jul 11:35

Adds support for requesting more than one completion via an integer parameter in the AI: line.

Full Changelog: v1.1.0...v1.2.0

Add line and block comments

19 Jun 18:16

v1.0.1

11 May 18:58

Fixes the need for two saves on macOS. See issue #1.

Version v1.0.0

11 May 02:09

Adds the -b option to automatically back up files before rewriting them.

Improves README.

v0.9.0

10 May 02:16

First release.