For anyone who wants to contribute (add features, report bugs, or simply discuss and learn), join our Discord!
Or just comment here for open discussion!
## RAG Module for Code Indexing
- Complete a PoC with LanceDB by implementing a `memory.py` file. ([MRG] add lancedb as memory #262) @leeeizhang
- Build a pipeline to generate embeddings for local code RAG. (Enhance Local Code Generation with RAG Module #259, [MRG] Code RAG for Chatbot #265) @huangyz0918
- Enable storing "successful" projects (generated code, plans, suggestions, and settings) for RAG and future enhancement.
- Add a CLI entry for managing memory (list embedded files, allow CRUD). (Add manage API for the memory #266, Add a set of new CLI commands, called `mle memory`, which provides memory CRUD #270) @leeeizhang
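The memory CRUD above could take roughly the following shape (a minimal in-memory sketch for illustration only — the class and method names are hypothetical, not the actual `memory.py` or LanceDB API):

```python
import math


class Memory:
    """Hypothetical sketch of a memory store: CRUD plus similarity search."""

    def __init__(self):
        self._records = {}  # record id -> (embedding, text)
        self._next_id = 0

    def add(self, embedding, text):
        """Store an embedded chunk and return its id."""
        rid = self._next_id
        self._records[rid] = (embedding, text)
        self._next_id += 1
        return rid

    def list(self):
        """List all stored records as (id, text) pairs."""
        return [(rid, text) for rid, (_, text) in self._records.items()]

    def delete(self, rid):
        """Remove a record by id (no-op if absent)."""
        self._records.pop(rid, None)

    def search(self, query, k=3):
        """Return the k texts whose embeddings are closest to the query."""
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self._records.items(),
                        key=lambda item: cos(query, item[1][0]),
                        reverse=True)
        return [text for _, (_, text) in ranked[:k]]
```

A real implementation would delegate storage and vector search to LanceDB rather than holding everything in a Python dict.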
## Research Topic
FYI @HuaizhengZhang
- PoC: How to keep the knowledge in sync as it updates (e.g., code that changes frequently)?
- PoC: How to efficiently scan files into memory? (Embedding takes time for a large codebase with many files.)
- PoC: How to chunk code and different types of data (textual, image/audio, and other modalities)?
- PoC: Use GraphRAG for overall (code) information summarization.
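For the chunking PoC, a simple line-based sliding-window chunker shows the basic shape (a sketch only — real code chunking would more likely split on syntactic boundaries such as functions and classes, e.g. via an AST or tree-sitter):

```python
def chunk_lines(text, chunk_size=50, overlap=10):
    """Split text into overlapping line-based chunks for embedding.

    Overlap preserves context across chunk boundaries; the window
    advances by (chunk_size - overlap) lines each step.
    """
    lines = text.splitlines()
    if not lines:
        return []
    step = max(chunk_size - overlap, 1)
    chunks = []
    for start in range(0, len(lines), step):
        chunks.append("\n".join(lines[start:start + chunk_size]))
        if start + chunk_size >= len(lines):
            break
    return chunks
```

The overlap parameter trades embedding cost for retrieval quality: larger overlap means more redundant chunks but fewer cases where a relevant snippet straddles a boundary.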
## Enhance `mle chat`
- Improve `mle chat`'s agent calling, allowing agents to be called directly (currently we can only call functions). (Enable unordered agent interactions in `mle chat` #260) @YuanmingLeee
## Prompting
- Integrate reference code search into the Advisor/Debugger/Coder.
- Add a tracking tool, with integration to Langfuse or other tools. Set tracking to `False` by default, or notify users about usage tracking the first time they run `mle` in the terminal. ([Prompt Ops] It seems that we are continuously improving our prompts. now its time to use some prompt tracking tools to help us do some A/B testings #255)
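The off-by-default tracking with a first-run notice could be gated by a check along these lines (a sketch — the config location and key names are made up for illustration, not MLE-agent's actual settings):

```python
import json
from pathlib import Path


def tracking_enabled(config_dir):
    """Return whether usage tracking is on; default to False on first run.

    Hypothetical sketch: the config filename and the "tracking" key
    are illustrative, not the project's real configuration schema.
    """
    config_path = Path(config_dir) / "config.json"
    if not config_path.exists():
        # First run: inform the user and persist the off-by-default choice.
        print("Usage tracking is disabled by default. "
              "Enable it in config.json to help improve prompts.")
        config_path.parent.mkdir(parents=True, exist_ok=True)
        config_path.write_text(json.dumps({"tracking": False}))
        return False
    return json.loads(config_path.read_text()).get("tracking", False)
```

Persisting the choice on first run means the notice is shown exactly once, and users who later opt in simply flip the flag in the config file.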
## Other LLM Framework Support
- Integrate with the Ollama backend (Ollama is already added; test it and make it better) -- local
- Integrate with the vLLM backend -> vLLM cloud backend -- on-prem
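Since both Ollama and vLLM expose OpenAI-compatible HTTP endpoints, one client configuration can cover the local and on-prem cases. A minimal sketch of backend selection (ports are the common defaults, not project settings):

```python
def backend_config(name):
    """Map a backend name to an OpenAI-compatible endpoint config.

    Sketch only: both Ollama and vLLM serve OpenAI-compatible APIs,
    so a single client can target either by swapping the base URL.
    The ports below are the servers' usual defaults.
    """
    backends = {
        # Ollama's OpenAI-compatible endpoint (local default port).
        "ollama": {"base_url": "http://localhost:11434/v1", "api_key": "ollama"},
        # vLLM's OpenAI-compatible server (default port).
        "vllm": {"base_url": "http://localhost:8000/v1", "api_key": "EMPTY"},
    }
    if name not in backends:
        raise ValueError(f"unsupported backend: {name}")
    return backends[name]
```

With this shape, supporting a new OpenAI-compatible backend is a one-line addition to the registry rather than a new code path.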
## Function Calls
- Add local logging of tool/function calls.
- Add functions to preview multimodal data, starting with image data.
- Function store: ([Function Store] shall we build a function store to avoid change function code everytime? #268)
- Add support for local LLMs (APIs tested with vLLM); disable function calling without errors if the LLM doesn't support it. ([llama3.2] test new llama3.2 1b & 3b models with MLE-agent #227)
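Local logging of tool/function calls could be done with a small decorator that appends one JSON line per call (a sketch — the log format and decorator name are illustrative, not the project's implementation):

```python
import functools
import json
import time


def log_call(log_path):
    """Decorator sketch: append each tool/function call to a JSONL file."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            record = {
                "time": time.time(),
                "function": func.__name__,
                "args": repr(args),
                "kwargs": repr(kwargs),
                "result": repr(result),
            }
            # Append-only JSONL keeps logging cheap and crash-safe.
            with open(log_path, "a") as f:
                f.write(json.dumps(record) + "\n")
            return result
        return wrapper
    return decorator
```

JSONL is convenient here because each call is an independent record: the log can be tailed live or loaded line-by-line for later inspection without parsing the whole file.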
## Documentation
- Update the doc site with the new releases/features. @huangyz0918 @leeeizhang
- New demo video in the README.md @HuaizhengZhang