PromptlyTech RAG LLM Optimization

Welcome to the PromptlyTech repository, dedicated to optimizing Large Language Models (LLMs) through advanced prompt engineering and Retrieval-Augmented Generation (RAG) techniques. The toolkit streamlines prompt services for enhanced AI capabilities: it implements a naive RAG pipeline that assists the user by generating relevant queries from a given document to achieve a user-stated objective, and it uses RAGAS evaluation metrics to evaluate and rank each generated query.

Key Services

  1. Automatic Prompt Generation Service: simplifies the creation of effective prompts so the power of LLMs can be harnessed efficiently.
  2. Automatic Test Case Generation Service: automates the generation of diverse test cases for comprehensive coverage and improved reliability.
  3. Prompt Testing and Ranking Service: evaluates and ranks prompts based on effectiveness, ensuring optimal outcomes from LLMs (a sketch of this flow follows the list).
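
A minimal sketch of how the prompt generation and ranking services could be wired to an LLM, assuming the OpenAI Python SDK (v1.x) with OPENAI_API_KEY set in the environment; the function names generate_prompts and rank_prompts are illustrative, not the repository's actual API:

# prompt_services_sketch.py -- illustrative only
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_prompts(objective: str, document: str, n: int = 3) -> list[str]:
    """Ask the model for n candidate prompts that pursue the stated objective."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write concise, effective prompts."},
            {"role": "user", "content": f"Objective: {objective}\n\nDocument:\n{document}\n\nWrite {n} candidate prompts, one per line."},
        ],
    )
    return [line.strip() for line in response.choices[0].message.content.splitlines() if line.strip()]

def rank_prompts(objective: str, prompts: list[str]) -> list[str]:
    """Ask the model to reorder candidate prompts from most to least effective."""
    numbered = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(prompts))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Objective: {objective}\n\nRank these prompts from most to least effective, best first:\n{numbered}"}],
    )
    return [line.strip() for line in response.choices[0].message.content.splitlines() if line.strip()]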

Features

  • Efficient prompt engineering for business contexts.
  • Seamless integration with state-of-the-art LLMs like GPT-3.5 and GPT-4.
  • Automated testing and ranking to enhance user engagement and satisfaction.

Getting Started

  1. Clone this repository:
git clone https://github.com/Keriii/RAG_system.git
cd RAG_system
  2. Set up environment variables in .env:

(create a .env file in the project root directory; a sketch for loading these variables follows the example below)

#################
# Development env
#################

OPENAI_API_KEY=""
VECTORDB_MODEL="gpt-3.5-turbo"
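
A minimal sketch of how these variables could be loaded at runtime, assuming the python-dotenv package is installed; the variable names match the example above, and the file name load_settings.py is illustrative:

# load_settings.py -- illustrative only
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
VECTORDB_MODEL = os.getenv("VECTORDB_MODEL", "gpt-3.5-turbo")  # fall back to the documented default

if not OPENAI_API_KEY:
    raise RuntimeError("OPENAI_API_KEY is missing; add it to the .env file")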

Installation

Run

# create virtual environment
python3 -m venv venv

# activate
source venv/bin/activate

# install requirements
pip install -r requirements.txt

# to generate test data
make data_generate

# to evaluate user input data (probability, accuracy, confidence); see the evaluation sketch below
make data_evaluate
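
A minimal sketch of the kind of RAGAS scoring the evaluation step performs, assuming the ragas package (v0.1-style evaluate API) and the Hugging Face datasets library; the sample rows and metric choice are illustrative, not the repository's actual data or configuration:

# evaluate_sketch.py -- illustrative only
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# Made-up sample: one generated query with its answer and retrieved context.
samples = Dataset.from_dict({
    "question": ["What objective does the document support?"],
    "answer": ["It explains how to optimize prompts for business use cases."],
    "contexts": [["Prompt engineering improves LLM output quality in business contexts."]],
})

# Each metric is judged by an LLM, so OPENAI_API_KEY must be set in the environment.
result = evaluate(samples, metrics=[faithfulness, answer_relevancy])
print(result)  # per-metric scores that can be used to rank generated queries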

Contribution Guidelines

We welcome contributions from the community. Feel free to open issues, submit pull requests, and collaborate with us to improve the toolkit.

License

This project is licensed under the MIT License.

Let's optimize language models together! 🚀
