I like your minimalistic approach a lot! But the lack of a few configurable parameters made me switch to https://github.com/j178/chatgpt. If you could add the following parameters to the yaml file:
"prompts": {
"default": "You are a helpful assistant"
"pirate": "You are pirate Blackbeard. Arr matey!"},
"conversation": {
"prompt": "default",
"stream": true,
"max_tokens": 1024,
"temperature": 0
}
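For reference, the same settings in YAML block style (equivalent to the JSON-ish snippet above):

```yaml
prompts:
  default: "You are a helpful assistant"
  pirate: "You are pirate Blackbeard. Arr matey!"
conversation:
  prompt: default
  stream: true
  max_tokens: 1024
  temperature: 0
```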
I would be happy to switch back! Basically, this is adding the following functionalities:
- the possibility to write down one or more contexts in the yaml file. This is a bit more convenient than having to carry around a separate file for each context and pass it via --context <FILE PATH>
- stream allows the tokens to be sent as they become available, instead of all at once at the end of the reply. This makes quite a difference with long responses and slower models such as GPT-4
- max_tokens is self-explanatory 🙂 and it also makes quite a difference when using GPT-4
- temperature set to 0 allows deterministic responses (fundamental for reproducibility); from 0 to 2, higher values allow increasingly creative but less focused responses
These are very simple modifications: you just need to read the values from the yaml file and add them as extra parameters when posting the request. Thanks!
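Something along these lines should do it (just a rough sketch, not your actual code; the openai Python package >= 1.0, the config.yaml path, and the key names are assumptions for illustration):

```python
# Rough sketch: read the proposed keys from the YAML config and forward them
# to the Chat Completions API. File name, key names and model are assumptions.
import yaml
from openai import OpenAI

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

conv = cfg["conversation"]
system_prompt = cfg["prompts"][conv["prompt"]]  # e.g. "default" -> prompt text

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hello!"},
    ],
    stream=conv.get("stream", False),
    max_tokens=conv.get("max_tokens", 1024),
    temperature=conv.get("temperature", 0),
)

if conv.get("stream", False):
    # With stream=True the API yields chunks as tokens become available.
    for chunk in response:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()
else:
    print(response.choices[0].message.content)
```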
Great! Looking forward to the implementation of 1 and 2. Regarding the last one, I understand it's a bit more complicated, but it would really enhance usability a lot. As for rendering, since you use rich (good choice 👍), this could help
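Just to illustrate the idea (a minimal sketch using rich.live.Live and rich.markdown.Markdown; the function name and the fake stream are placeholders), streamed output could be re-rendered as Markdown like this:

```python
# Minimal sketch: re-render a growing Markdown buffer while chunks arrive.
from rich.console import Console
from rich.live import Live
from rich.markdown import Markdown

console = Console()

def render_stream(chunks):
    """chunks: any iterable of text fragments, e.g. streamed API deltas."""
    text = ""
    with Live(Markdown(text), console=console, refresh_per_second=8) as live:
        for piece in chunks:
            text += piece
            live.update(Markdown(text))

# Example with a fake stream:
render_stream(["Hello", ", ", "**world**", "!"])
```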