Releases: BaranziniLab/KG_RAG
Now run KG-RAG with flexible command line args!
This release has the following features:
- Added the option to load Llama models with 'LlamaTokenizer' and legacy=False. We call this 'method-2' in this repo.
- Run KG-RAG with flexible command line args:
  (A) To run in interactive mode: -i True (default: False)
  (B) To select GPT models: -g gpt-4 (default: gpt-35-turbo)
  (C) To select method-2 to run Llama: -m method-2 (default: method-1)
- The demo videos in the README have been updated to use these command line args
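The flag handling described above can be sketched with Python's argparse; only the flag names and default values come from these release notes, while the parser wiring and help strings below are illustrative, not KG-RAG's actual code.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flags and defaults as stated in the release notes; everything else is a sketch.
    parser = argparse.ArgumentParser(description="Run KG-RAG (illustrative sketch)")
    parser.add_argument("-i", "--interactive", type=str, default="False",
                        help="run in interactive mode (default: False)")
    parser.add_argument("-g", "--gpt_model", type=str, default="gpt-35-turbo",
                        help="GPT model to use (default: gpt-35-turbo)")
    parser.add_argument("-m", "--method", type=str, default="method-1",
                        help="Llama loading method (default: method-1)")
    return parser

# Parse an explicit argument list rather than sys.argv, for demonstration.
args = build_parser().parse_args(["-g", "gpt-4", "-m", "method-2"])
print(args.interactive, args.gpt_model, args.method)
```

Omitting a flag falls back to the defaults listed above, so a bare invocation behaves like the pre-release setup.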
KG-RAG now provides provenance for the generated biomedical text!
This release has two main additions:
- KG-RAG now extracts node context by making API calls to the SPOKE knowledge graph.
- KG-RAG now provides provenance information for the generated biomedical text. The provenance comes directly from the underlying SPOKE KG, so text generated using KG-RAG is not only grounded in established knowledge but also offers insight into the source of that knowledge.
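The provenance flow above can be sketched as follows: each retrieved node-context record carries source attribution from the KG, which is rendered alongside the statement before prompting the LLM. The record shape, field names, and the helper `format_context_with_provenance` are all hypothetical; KG-RAG's actual SPOKE API response format may differ.

```python
def format_context_with_provenance(records):
    """Render retrieved KG statements together with their source attribution.

    `records` is an illustrative shape: a list of dicts with a 'statement'
    string and an optional 'provenance' list of source names.
    """
    lines = []
    for rec in records:
        sources = ", ".join(rec.get("provenance", [])) or "unknown source"
        lines.append(f"{rec['statement']} (Provenance: {sources})")
    return "\n".join(lines)

# Hypothetical node context as it might be returned from the KG:
context = [
    {"statement": "Disease X associates Gene Y", "provenance": ["GWAS Catalog"]},
    {"statement": "Compound Z treats Disease X", "provenance": ["ChEMBL", "DrugCentral"]},
]
print(format_context_with_provenance(context))
```

Keeping provenance attached at this stage is what lets the generated answer cite where each supporting fact came from.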
First release
This release has all the functionality needed to run KG-RAG locally. It allows users to ask biomedical questions of an LLM (Llama or GPT) and get answers grounded in the established knowledge of a biomedical knowledge graph called SPOKE.