8 changes: 8 additions & 0 deletions README.md
@@ -1,5 +1,9 @@
# gpt3-cli
Streaming command-line interface for OpenAI's GPT-3

Also supports ChatGPT (non-streaming)

Use the `ask-gpt3` and `ask-chatgpt` scripts for prettier formatting
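
For a quick try (assuming `OPENAI_API_KEY` or `OPENAI_KEY` is exported):

```sh
./ask-chatgpt "Explain what a shell pipeline is"
./ask-gpt3 "What does the fold command do?"
```
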
### Use cases
* Use GPT-3 distraction-free: Just you and the terminal
* Command-line remembers your past prompts and output history
@@ -83,6 +87,10 @@ This concatenates the input and the prompt together, input first, prompt second,
## File upload via CLI
Try [OpenAI's Python tool](https://github.com/openai/openai-python#openai-python-library).

## See also

- https://github.com/TheR1D/shell_gpt (Python)

## License
**N.B.** You have the full right to base any closed-source or open-source program on this software if it remains on your own, your company's, or your company's contractors' computers and cloud servers. (In other words, you can use this to build a closed-source SaaS.) If you distribute a program to a third-party end-user, you are required to give them the source code. The requirement to share source code only applies when the program is distributed to other people. This is the GPLv3's private use clause.
The intent is to ensure GPT-3 developer tools remain [open-source](https://www.gnu.org/philosophy/free-sw.en.html). OpenAI asks that end-users of products that use GPT-3 be shielded from direct API access, so this license should not impose any restriction in practice. The private use exception is copied below:
43 changes: 43 additions & 0 deletions ask-chatgpt
@@ -0,0 +1,43 @@
#!/usr/bin/env sh
set -e

cd "$(dirname "$(realpath "$0")")"

[ -z "$COLUMNS" ] && COLUMNS="$(tput cols)"

exec ./gpt3 --engine 'gpt-3.5-turbo' "$*" |

# Trim all empty lines
#trimempty |
# Trim empty lines at the start of the stream
#sed -e '/./,$!d' |
# Trim empty lines at the start and end of the stream
# (It also squeezes repeated empty lines in the middle of the stream. To prevent that, we could use a counter.)
awk '
  BEGIN { started = 0; empty = 0 }
  {
    if (NF) {
      if (started && empty) {
        print "";
      }
      print
      started = 1;
      empty = 0;
    } else {
      empty = 1;
    }
  }
' |
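# A hedged sketch of the counter variant mentioned above (not enabled): count
# the blank lines instead of flagging them, then reprint the same number before
# the next non-empty line, so blank runs in the middle of the stream survive:
#
#   awk '
#     BEGIN { started = 0; empty = 0 }
#     {
#       if (NF) {
#         if (started) { for (i = 0; i < empty; i++) print "" }
#         print; started = 1; empty = 0
#       } else {
#         empty++
#       }
#     }
#   '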

# When reaching the width of the window, flow words onto the next line instead of breaking them
if command -v fold >/dev/null 2>&1
then fold -s -w "$COLUMNS"
else cat
fi |

# Assume the output is markdown, and colourise it
if command -v bat >/dev/null 2>&1
then bat --style=plain --force-colorization --language=markdown
else cat
fi

10 changes: 10 additions & 0 deletions ask-gpt3
@@ -0,0 +1,10 @@
#!/usr/bin/env sh
set -e

cd "$(dirname "$(realpath "$0")")"

exec ./gpt3 --engine 'text-davinci-003' "Q: $* A: " 256 |
# Trim empty lines at the start of the stream
# Disabled, because it stops the response from streaming
#sed -u -e '/./,$!d'
cat
96 changes: 84 additions & 12 deletions gpt3
@@ -2,6 +2,11 @@
set -ef -o pipefail
# Inherit defaults from env variables for easy scripting

# Available engines (models) are listed here: https://platform.openai.com/docs/models
# Apparently text-davinci-003 costs a lot more, and is less accurate, but I found it much faster than gpt-3.5-turbo!
[ -z "$ENGINE" ] && ENGINE="text-davinci-003"
#[ -z "$ENGINE" ] && ENGINE="gpt-3.5-turbo"
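# These defaults, like the others below, can be overridden from the environment
# for one-off runs, e.g.:  ENGINE=gpt-3.5-turbo ./gpt3 "Hello"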

[ -z "$ENGINE" ] && ENGINE=davinci
[ -z "$TEMPERATURE" ] && TEMPERATURE=0.5
[ -z "$FREQ_PENALTY" ] && FREQ_PENALTY=0
@@ -38,18 +43,85 @@ case $key in
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameter
[ -z "$2" ] && MAX_TOKENS=64 || MAX_TOKENS="$2"
[ -z "$2" ] && MAX_TOKENS=256 || MAX_TOKENS="$2"
[ -z "$OPENAI_KEY" ] && KEY="$OPENAI_API_KEY" || KEY="$OPENAI_KEY"

PROMPT="$1"

if [ -z "$KEY" ]; then
echo "You must export OPENAI_KEY or OPENAI_API_KEY. Get one here: https://platform.openai.com/account/api-keys"
exit 1
fi

if ! command -v jq >/dev/null 2>&1; then
echo "Please install jq (Command-line JSON processor) - instructions in README"
exit 1
fi

# FIXME: Improve error handling
curl -sSL -N \
-H "gpt3-cli/0.1.1 (https://github.com/CrazyPython/gpt3-cli)" \
-G https://api.openai.com/v1/engines/${ENGINE}/completions/browser_stream \
--data-urlencode prompt="$1" \
--data-urlencode temperature="$TEMPERATURE" \
--data-urlencode max_tokens="$MAX_TOKENS" \
--data-urlencode frequency_penalty="$FREQ_PENALTY" \
--data-urlencode presence_penalty="$PRES_PENALTY" \
-H "Authorization: Bearer $KEY" | sed -l 's/^data: //' | grep --line-buffer -v '^\[DONE\]$' | jq -j --unbuffered '.choices[0].text'
# Add trailing newline
echo

call_chat_api() {
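# Non-streaming call to the chat completions endpoint (used for the gpt-* chat
# models): builds a JSON body with a single user message and prints either the
# reply or, if the reply cannot be extracted, the raw API response.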
request_data=$(
cat << !!!
{
"model": "${ENGINE}",
"messages": [{"role": "user", "content": "${PROMPT//\"/\\\"}"}],
"temperature": ${TEMPERATURE}
}
!!!
)

response="$(
# Not a GET request
curl -sSL -N \
https://api.openai.com/v1/chat/completions \
-H "User-Agent: gpt3-cli-joeytwiddle/0.2.0 (https://github.com/CrazyPython/gpt3-cli)" \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/json" \
-d "$request_data"
)"

result="$(
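# On an error response .choices is absent, so jq prints "null" here and the
# check below falls back to printing the raw response.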
printf "%s\n" "$response" |
jq -j --unbuffered '.choices[0].message.content'
)"

if [ -n "$result" ] && ! [ "$result" = "null" ]
then
printf "%s\n" "$result"
else #[[ "$result" =~ '"error": {' ]]
printf "%s\n" "$response"
fi

# Example error response:
# {
# "error": {
# "message": "you must provide a model parameter",
# "type": "invalid_request_error",
# "param": null,
# "code": null
# }
# }
}

call_old_api() {
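# Streaming call to the legacy completions endpoint: unwrap the server-sent
# "data:" lines with sed, drop the [DONE] marker, and print each chunk's text
# as it arrives.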
curl -sSL -N \
-G https://api.openai.com/v1/engines/${ENGINE}/completions/browser_stream \
-H "User-Agent: gpt3-cli-joeytwiddle/0.2.0 (https://github.com/CrazyPython/gpt3-cli)" \
-H "Authorization: Bearer $KEY" \
--data-urlencode model="$ENGINE" \
--data-urlencode prompt="$PROMPT" \
--data-urlencode temperature="$TEMPERATURE" \
--data-urlencode max_tokens="$MAX_TOKENS" \
--data-urlencode frequency_penalty="$FREQ_PENALTY" \
--data-urlencode presence_penalty="$PRES_PENALTY" |
sed -u 's/^data: //' | grep --line-buffered -v '^\[DONE\]$' | jq -j --unbuffered '.choices[0].text'
# Add trailing newline
echo
}

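# Engine names starting with "gpt-" (or containing "gpt-3.5-turbo") go through
# the chat API; everything else uses the legacy streaming completions endpoint.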
if [[ "$ENGINE" =~ "gpt-3.5-turbo" ]] || [[ "$ENGINE" == gpt-* ]]; then
call_chat_api
else
call_old_api
fi
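
# Example invocations, for reference (assumes an API key is exported):
#   ./gpt3 "Write a haiku about pipes" 128
#   ./gpt3 --engine gpt-3.5-turbo "Explain xargs in one sentence"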