Welcome to the PromptlyTech repository, dedicated to optimizing Large Language Models (LLMs) through advanced prompt engineering and Retrieval-Augmented Generation (RAG) techniques. This toolkit focuses on streamlining prompt services for enhanced AI capabilities.
- **Automatic Prompt Generation Service:** simplify the creation of effective prompts to harness the power of LLMs efficiently.
- **Automatic Test Case Generation Service:** automate the generation of diverse test cases for comprehensive coverage and improved reliability.
- **Prompt Testing and Ranking Service:** evaluate and rank prompts based on effectiveness, ensuring optimal outcomes from LLMs.
- Efficient prompt engineering for business contexts.
- Seamless integration with state-of-the-art LLMs like GPT-3.5 and GPT-4.
- Automated testing and ranking to enhance user engagement and satisfaction.
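As a sketch of how prompt testing and ranking might work, candidate prompts can be scored by the fraction of test cases they pass. The function names and data shapes below are illustrative assumptions, not the repository's actual API, and the toy `fake_llm` stands in for a real GPT-3.5/GPT-4 call:

```python
from dataclasses import dataclass

@dataclass
class PromptResult:
    prompt: str
    score: float  # fraction of test cases passed

def rank_prompts(prompts, test_cases, run_llm):
    """Score each candidate prompt against every test case and rank by pass rate.

    `run_llm(prompt, case_input)` is a placeholder for a real LLM call;
    here any callable returning a string works.
    """
    results = []
    for prompt in prompts:
        passed = sum(
            run_llm(prompt, case["input"]) == case["expected"]
            for case in test_cases
        )
        results.append(PromptResult(prompt, passed / len(test_cases)))
    return sorted(results, key=lambda r: r.score, reverse=True)

# toy stand-in for an LLM: uppercases the input only when asked to
fake_llm = lambda prompt, text: text.upper() if "uppercase" in prompt else text
cases = [{"input": "abc", "expected": "ABC"}, {"input": "xy", "expected": "XY"}]
ranking = rank_prompts(["Please uppercase the text.", "Repeat the text."], cases, fake_llm)
print(ranking[0].prompt)  # the uppercase prompt wins
```

Swapping `fake_llm` for a real API call turns this into an automated evaluation loop.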
- Clone this repository:

```shell
git clone https://github.com/Keriii/RAG_system.git
cd RAG_system
```

- Set up environment variables in a `.env` file (create it in the root directory):
```
#################
# Development env
#################
OPENAI_API_KEY=""
VECTORDB_MODEL="gpt-3.5-turbo"
```
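The `.env` values above need to be loaded into the process environment at startup. Below is a minimal sketch using only the standard library; the `load_env` helper is hypothetical (the repository may instead use a library such as python-dotenv):

```python
import os
import tempfile

def load_env(path: str) -> None:
    """Parse simple KEY="value" lines from a .env file into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# demo with a throwaway .env file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write('OPENAI_API_KEY="sk-test"\nVECTORDB_MODEL="gpt-3.5-turbo"\n')
load_env(fh.name)
print(os.environ["VECTORDB_MODEL"])  # gpt-3.5-turbo
```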
```shell
# create a virtual environment
python3 -m venv venv
# activate it
source venv/bin/activate
# install requirements
pip install -r requirements.txt
# generate test data
make data_generate
# evaluate user input data (prob., accur., confid.)
make data_evaluate
```

We welcome contributions from the community. Feel free to open issues, submit pull requests, and collaborate with us to improve the toolkit.
This project is licensed under the MIT License.
Let's optimize language models together! 🚀