This project helps software developers generate and debug code efficiently by combining context retention with AI-driven large language models (LLMs). The system extracts function/method details, refines user prompts, and queries LLMs through Groq's API.
- Scrapes function and method details such as:
  - Input/output parameters
  - Docstrings
  - Dependencies
  - File paths
- All extracted details are stored in a centralized JSON file for efficient access and future use.
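The actual extraction logic lives in `function_extrcator_v2.py`, which is not shown here; a minimal sketch of the idea, using Python's standard `ast` module (the function name `extract_functions` and the exact JSON schema are assumptions for illustration), could look like this:

```python
import ast
import json

def extract_functions(source: str, file_path: str) -> dict:
    """Collect name, parameters, and docstring for each function in `source`."""
    details = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            details[node.name] = {
                "params": [arg.arg for arg in node.args.args],
                "docstring": ast.get_docstring(node),
                "file_path": file_path,
            }
    return details

source = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''

# Serialize to JSON, mirroring the project's centralized details store.
print(json.dumps(extract_functions(source, "example.py"), indent=2))
```

Storing the results as JSON keeps them language-agnostic and easy to look up later when refining prompts.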
- Allows users to define an initial prompt that specifies:
  - Objective: the task or goal the user wants to achieve.
  - Dependencies: relevant methods or functions, mentioned in double quotes (e.g., the `"generate"` method of the `"GenerateGoal"` class).
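An initial prompt following these conventions might look like the following (illustrative only; the objective text is invented for the example):

```
Objective: Add input validation to the code generation step.
Dependencies: "generate" method of "GenerateGoal" class
```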
- The initial prompt is processed by the `PromptGenerator` class, which:
  - Adds detailed information about the methods/functions referenced in the user's prompt (e.g., docstrings, parameter details).
  - Refines the prompt to provide complete context for the LLM.
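The refinement step can be sketched as follows. This is not the project's actual `PromptGenerator` implementation (its method names and internals are unknown); it is a minimal illustration that matches quoted identifiers in the prompt against stored function details, which are passed in directly here rather than loaded from the centralized JSON file:

```python
import re

class PromptGenerator:
    """Sketch: expand quoted identifiers in a prompt with stored details."""

    def __init__(self, details: dict):
        # In the real project these details come from the centralized JSON file.
        self.details = details

    def refine(self, prompt: str) -> str:
        # Identifiers referenced in double quotes, e.g. "generate".
        context = []
        for name in re.findall(r'"([^"]+)"', prompt):
            info = self.details.get(name)
            if info:
                context.append(
                    f'- "{name}" (params: {", ".join(info["params"])}): {info["docstring"]}'
                )
        if context:
            prompt += "\n\nReferenced definitions:\n" + "\n".join(context)
        return prompt

details = {
    "generate": {"params": ["self", "goal"], "docstring": "Generate code for a goal."}
}
refined = PromptGenerator(details).refine(
    'Add caching to the "generate" method of the "GenerateGoal" class.'
)
print(refined)
```

Appending the referenced definitions gives the LLM the context it needs without the user having to paste docstrings by hand.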
- The refined prompt is passed to the `call_llm` method, which interacts with Groq's API to query openly available LLMs.
  - Supported models can be found on Groq's Model Page.
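A hedged sketch of such a helper is shown below. It targets Groq's OpenAI-compatible chat-completions endpoint using only the standard library; the project's actual `call_llm` in `call_llm.py` may use the official `groq` SDK instead, and the default model id and `GROQ_API_KEY` environment variable are assumptions:

```python
import json
import os
import urllib.request

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(prompt: str, model: str = "llama-3.1-8b-instant") -> dict:
    """Assemble the chat-completion payload Groq's OpenAI-compatible API expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def call_llm(prompt: str, model: str = "llama-3.1-8b-instant") -> str:
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(build_request(prompt, model)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The generated text sits in the first choice's message content.
    return body["choices"][0]["message"]["content"]
```

Separating payload construction from the network call keeps the request format testable without an API key.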
- The entire process, from prompt creation to result inspection, is demonstrated in the `exp.ipynb` notebook.
  - Users can test the system end-to-end and verify results.
- `function_extrcator_v2.py` # Script for extracting function/method details
- `prompt_generator.py` # Class to refine user-defined prompts
- `call_llm.py` # Function to interact with LLM via Groq
- `exp.ipynb` # End-to-end example notebook
- `README.md` # Project documentation
- `requirements.txt` # Dependencies