forked from npiv/chatblade

Commit 45cd630: initial commit for porting to github
Showing 19 changed files with 757 additions and 0 deletions.
.gitignore
@@ -0,0 +1,7 @@
```
**/__pycache__
build
**.egg-info
venv
.DS_Store
.idea
.vscode
```
Large diffs are not rendered by default.
Makefile
@@ -0,0 +1,25 @@
```makefile
venv/bin/activate: requirements.txt
	python3 -m venv venv
	./venv/bin/pip install -r requirements.txt

setup: venv/bin/activate

clean: clean-build clean-pyc

sanitize: clean clean-venv

clean-venv:
	rm -rf venv/

clean-build:
	rm -fr build/
	rm -fr dist/
	rm -fr .eggs/
	find . -name '*.egg-info' -exec rm -fr {} +
	find . -name '*.egg' -exec rm -f {} +

clean-pyc:
	find . -name '*.pyc' -exec rm -f {} +
	find . -name '*.pyo' -exec rm -f {} +
	find . -name '*~' -exec rm -f {} +
	find . -name '__pycache__' -exec rm -fr {} +
```
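With the targets defined above, a typical workflow would be:

```bash
make setup     # create venv/ and install requirements.txt into it
make clean     # remove build artifacts and pyc/__pycache__ files
make sanitize  # clean everything, including the venv itself
```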
README.md
@@ -0,0 +1,132 @@

# Chatblade
## A CLI Swiss Army Knife for ChatGPT

Chatblade is a versatile command-line interface (CLI) tool designed to interact with OpenAI's ChatGPT. It accepts piped input, arguments, or both, and allows you to save common prompt preambles for quick usage. Additionally, Chatblade provides utility methods to extract JSON or Markdown from ChatGPT responses.

**Note**: You'll need to set up your OpenAI API key to use Chatblade.

You can do that either by passing `--openai-api-key KEY` or by setting the env variable `OPENAI_API_KEY` (recommended). The examples below all assume the env variable is set.
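For example, in a POSIX shell (the key value below is just a placeholder for your real key):

```bash
export OPENAI_API_KEY="sk-your-key-here"
```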

### Install
On Linux-like systems, you should be able to just check out the project and run `pip install .`
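For instance, roughly along these lines (the clone URL assumes the upstream repository named in the fork notice above):

```bash
git clone https://github.com/npiv/chatblade.git
cd chatblade
pip install .
```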

## Some Examples

### Basic
In its simplest form, Chatblade can perform a straightforward query:

```bash
chatblade how can I extract a still frame from a video at 22:01 with ffmpeg
```

<img src="assets/example1.png">

### Continue a conversation and extract
To continue the conversation and ask for a change, you can use the `-l` flag for "last".

```bash
chatblade -l can we make a gif instead from 00:22:01 to 00:22:04
```

<img src="assets/example2.png">

You can also use `-l` without a query to redisplay the last thread at any point.

If you want to extract the last suggested command, you can use the `-e` flag. For example:

```bash
chatblade -e | pbcopy
```

This command places the `ffmpeg` suggestion in your clipboard on macOS.
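`pbcopy` is macOS-specific; on most Linux desktops you could pipe to `xclip` instead (assuming it is installed):

```bash
chatblade -e | xclip -selection clipboard
```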

### Piping into Chatblade
You can pipe input to Chatblade:

```bash
curl https://news.ycombinator.com/rss | chatblade given the above rss can you show me the top 3 articles about AI and their links -c 4
```

The piped input is placed above the query and sent to ChatGPT. In this example, we also use the `-c 4` flag to select ChatGPT-4 (the default is ChatGPT-3.5).

<img src="assets/example3.png">

### Check token count and estimated costs
If you want to check the approximate cost and token usage of a previous query, you can use the `-t` flag for "tokens".

```bash
curl https://news.ycombinator.com/rss | chatblade given the above rss can you show me the top 3 articles about AI and their links -t
```

<img src="assets/example4.png">

This won't perform any action over the wire; it just calculates the tokens locally.

### Make custom prompts

We can also save common prompt configs for easy reuse. Any YAML file we place under `~/.config/chatblade/` will be picked up by the command.

So, for example, given the following YAML file called `etymology.yaml`, which contains:
```yaml
system: |-
  I want you to act as a professional Etymologist and Quiz Generator. You have a deep knowledge of etymology and will be provided with a word.
  The goal is to create cards that quiz on both the etymology and finding the word by its definition.
  The following is what a perfect answer would look like for the word "disparage":
  [{
    "question": "A verb used to indicate the act of speaking about someone or something in a negative or belittling way.<br/> <i>E.g He would often _______ his coworkers behind their backs.</i>",
    "answer": "disparage"
  },
  {
    "question": "What is the etymological root of the word disparage?",
    "answer": "From the Old French word <i>'desparagier'</i>, meaning 'marry someone of unequal rank', which comes from <i>'des-'</i> (dis-) and <i>'parage'</i> (equal rank)"
  }]
  You will return answers in JSON only. Answer truthfully and if you don't know then say so. Keep questions as close as possible to the
  provided examples. Make sure to include an example in the definition question. Use HTML within the strings to nicely format your answers.
  If multiple words are provided, create questions and answers for each of them in one list.
  Only answer in JSON, don't provide any more text. Valid JSON uses "" quotes to wrap its items.
```
we can now run a command and refer to this prompt with `-p etymology`:

```bash
chatblade -p etymology gregarious
```

<img src="assets/example5.png">

And since we asked for JSON, we can pipe our result to something else, e.g.:

```bash
chatblade -l -e > toanki
```
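Since the prompt asks for a JSON list of question/answer pairs, the extracted output can also be post-processed directly, for example with `jq` (assuming it is installed and the model returned valid JSON):

```bash
chatblade -l -e | jq -r '.[].question'
```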

### Help

```bash
usage: chatblade [-h] [--last] [--prompt-config PROMPT_CONFIG] [--openai-api-key OPENAI_API_KEY] [--temperature TEMPERATURE] [--chat-gpt {3.5,4}] [--extract] [--raw] [--tokens]
                 [query ...]

Chatblade

positional arguments:
  query                 Query to send to chat GPT

options:
  -h, --help            show this help message and exit
  --last, -l            Display the last result. If a query is given the conversation is continued
  --prompt-config PROMPT_CONFIG, -p PROMPT_CONFIG
                        Prompt config name, or file containing a prompt config
  --openai-api-key OPENAI_API_KEY
                        OpenAI API key can also be set as env variable OPENAI_API_KEY
  --temperature TEMPERATURE
                        Temperature (openai setting)
  --chat-gpt {3.5,4}, -c {3.5,4}
                        Chat GPT model (default 3.5)
  --extract, -e         Extract content from response if possible (either json or code block)
  --raw, -r             print the last response as pure text, don't pretty print or format
  --tokens, -t          Display what *would* be sent, how many tokens, and estimated costs
```
Binary files (the five example screenshots under assets/ referenced in the README above) are not rendered in the diff view.
chatblade/__init__.py: empty file.
chatblade/__main__.py
@@ -0,0 +1,9 @@
```python
from . import cli


def main():
    cli.cli()


if __name__ == "__main__":
    main()
```
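Because the package ships this entry module, the CLI can also be run through the interpreter (this invocation assumes the package directory is indeed named `chatblade`):

```bash
python -m chatblade how can I list open ports on linux
```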
chatblade/chat.py
@@ -0,0 +1,51 @@
```python
import collections
import tiktoken
import openai

from . import utils

Message = collections.namedtuple("Message", ["role", "content"])


def num_tokens_in_messages(messages, model="gpt-3.5-turbo-0301"):
    """Returns the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        encoding = tiktoken.get_encoding("cl100k_base")
    num_tokens = 0
    for message in messages:
        num_tokens += 4  # every message follows <im_start>{role/name}\n{content}<im_end>\n
        num_tokens += len(encoding.encode(message.role))
        num_tokens += len(encoding.encode(message.content))
    num_tokens += 2  # every reply is primed with <im_start>assistant
    return num_tokens


def init_conversation(user_msg, system_msg=None):
    system = [Message("system", system_msg)] if system_msg else []
    return system + [Message("user", user_msg)]


DEFAULT_OPENAI_SETTINGS = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.1,
    "n": 1,
}


def query_chat_gpt(messages, config):
    """Queries the chat GPT API with the given messages and config."""
    openai.api_key = config["openai_api_key"]
    config = utils.merge_dicts(DEFAULT_OPENAI_SETTINGS, config)
    dict_messages = [msg._asdict() for msg in messages]
    result = openai.ChatCompletion.create(messages=dict_messages, **config)
    if not isinstance(result, dict):
        raise ValueError(
            "OpenAI Result is not a dict got %s: %s" % (type(result), result)
        )
    response_message = [choice["message"] for choice in result["choices"]][0]
    message = Message(response_message["role"], response_message["content"])
    return message, result
```
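`utils.merge_dicts` is not among the files shown in this commit. Given how it is called here (settings on the right overriding the defaults on the left), a minimal sketch could look like the following; this is an assumption, not the committed implementation:

```python
def merge_dicts(*dicts):
    """Merge dicts left to right; keys in later dicts override earlier ones."""
    # hypothetical sketch of chatblade.utils.merge_dicts
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged
```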
chatblade/cli.py
@@ -0,0 +1,155 @@
```python
import pickle
import sys
import os
import argparse
import rich
import yaml

from . import printer, chat, utils


def get_piped_input():
    # read stdin only when something is actually piped in
    if not sys.stdin.isatty():
        return sys.stdin.read()
    return None


def get_openai_key(params):
    if params.openai_api_key:
        return params.openai_api_key
    elif "OPENAI_API_KEY" in os.environ:
        return os.environ["OPENAI_API_KEY"]
    else:
        return None


def parse_input():
    parser = argparse.ArgumentParser(description="Chatblade")
    parser.add_argument("query", type=str, nargs="*", help="Query to send to chat GPT")
    parser.add_argument(
        "--last",
        "-l",
        action="store_true",
        help="Display the last result. If a query is given the conversation is continued",
    )
    parser.add_argument(
        "--prompt-config",
        "-p",
        type=str,
        help="Prompt config name, or file containing a prompt config",
    )
    parser.add_argument(
        "--openai-api-key",
        type=str,
        help="OpenAI API key can also be set as env variable OPENAI_API_KEY",
    )
    parser.add_argument(
        "--temperature", type=float, help="Temperature (openai setting)"
    )
    parser.add_argument(
        "--chat-gpt", "-c", choices=["3.5", "4"], help="Chat GPT model (default 3.5)"
    )
    parser.add_argument(
        "--extract",
        "-e",
        help="Extract content from response if possible (either json or code block)",
        action="store_true",
    )
    parser.add_argument(
        "--raw",
        "-r",
        help="print the last response as pure text, don't pretty print or format",
        action="store_true",
    )
    parser.add_argument(
        "--tokens",
        "-t",
        help="Display what *would* be sent, how many tokens, and estimated costs",
        action="store_true",
    )

    args = parser.parse_args()

    params = vars(args)
    params = {k: v for k, v in params.items() if v is not None}

    openai_api_key = get_openai_key(args)
    if not openai_api_key:
        print("expecting openai API Key")
        exit(parser.print_help())
    else:
        params["openai_api_key"] = openai_api_key

    if "chat_gpt" in params:
        if params["chat_gpt"] == "3.5":
            params["model"] = "gpt-3.5-turbo"
        elif params["chat_gpt"] == "4":
            params["model"] = "gpt-4"
        else:
            raise ValueError(f"Unknown chat GPT version {params['chat_gpt']}")

    query = " ".join(args.query)
    piped_input = get_piped_input()
    if piped_input:
        query = piped_input + "\n----------------\n" + query

    return query, params


MAX_TOKEN_COUNT = 4096
# the entire conversation is pickled into a single cache file
CACHE_PATH = "~/.cache/chatblade"
PROMPT_PATH = "~/.config/chatblade/"


def to_cache(messages):
    path = os.path.expanduser(CACHE_PATH)
    with open(path, "wb") as f:
        pickle.dump(messages, f)


def messages_from_cache():
    path = os.path.expanduser(CACHE_PATH)
    with open(path, "rb") as f:
        return pickle.load(f)


def load_prompt_config(prompt_name):
    path = os.path.expanduser(PROMPT_PATH + prompt_name + ".yaml")
    try:
        with open(path, "r") as f:
            return yaml.load(f, Loader=yaml.FullLoader)
    except FileNotFoundError:
        raise ValueError(f"Prompt {prompt_name} not found in {path}")


def fetch_and_cache(messages, params):
    response_msg, _ = chat.query_chat_gpt(messages, params)
    messages.append(response_msg)
    to_cache(messages)
    return messages


def cli():
    query, params = parse_input()

    if params["last"] or params["extract"] or params["raw"]:
        # operate on the previous conversation; a new query continues it
        messages = messages_from_cache()
        if query:
            messages.append(chat.Message("user", query))
    elif "prompt_config" in params:
        # start a fresh conversation seeded with a saved prompt config
        prompt_config = load_prompt_config(params["prompt_config"])
        messages = chat.init_conversation(query, prompt_config["system"])
        params = utils.merge_dicts(params, prompt_config)
    elif query:
        messages = chat.init_conversation(query)
    else:
        rich.print("[red]no query or option given. nothing to do...[/red]")
        exit()

    if "tokens" in params and params["tokens"]:
        # -t: only count tokens locally, don't hit the API
        num_tokens = chat.num_tokens_in_messages(messages)
        printer.print_tokens(messages, num_tokens, params)
    else:
        # only send the conversation if the last message still needs an answer
        if messages[-1].role == "user":
            messages = fetch_and_cache(messages, params)
        printer.print_messages(messages, params)
```
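The conversation cache is a single pickle file at `~/.cache/chatblade`. For debugging, it can be inspected from a Python shell (a quick sketch; unpickling the `Message` namedtuples assumes the `chatblade` package is importable):

```python
import os
import pickle

# load the cached conversation written by to_cache()
with open(os.path.expanduser("~/.cache/chatblade"), "rb") as f:
    messages = pickle.load(f)

for msg in messages:
    print(f"{msg.role}: {msg.content[:60]}")
```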