A WIP collection of refined, value-dense, novel and/or exceptional prompts for instruction-tuned large language models, especially GPT-4 and ChatGPT's legacy model.
Many are projects in and of themselves — natural language programs for which ChatGPT serves as a decent frontend. Prompts are generally structured in a to-be-standardized pseudocode-like format and inserted as a single user message (a minimal sketch follows below); more on this later.
Gradio frontend
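The sketch below illustrates how a pseudocode-style prompt can be sent as a plain user message via the OpenAI Python package (0.x-era `ChatCompletion` interface). The prompt shown is an illustrative stand-in, not one of the library's prompts, and the API key is assumed to be in the environment.

```python
# Minimal sketch: a pseudocode-style prompt delivered as a single user message.
# Assumes the openai Python package (0.x ChatCompletion interface) and an
# OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical pseudocode-like prompt: named task, explicit constraints, typed input.
PROMPT = """\
TASK: summarize(document) -> bullet_points
CONSTRAINTS:
  - at most 5 bullets
  - preserve all numbers exactly
INPUT:
  document: <paste text here>
"""

response = openai.ChatCompletion.create(
    model="gpt-4",  # or another available chat model
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0.2,  # lower temperature for more predictable output
)

print(response["choices"][0]["message"]["content"])
```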
New users operating LLMs through interfaces like ChatGPT may see the model do a decent-to-excellent job on basic or complex tasks when given simple questions or instructions, but outputs are often lackluster, and not through any fault of the model.
This project seeks to demonstrate that the quality and presentation of natural (& sometimes not-so-natural) language fed to a model substantially influence the quality of its outputs.
Collectively, we have only barely tapped into GPT-3's potential, and we're incredibly far from reaching GPT-4's.
Prompts in this library are natural-language programs (both in their literal, written structure and in the scale & value of their outputs) which act upon concepts & data, represented as text.
The project serves as a base for tools & utilities to be built out over time for explorers, developers, knowledge workers and eventually the general public.
(coming soon)