
Releases: BaranziniLab/KG_RAG

Now run KG-RAG with flexible command line args!

06 Dec 08:50
4cdd301

This release has the following features:

  1. Added the option to install Llama models using 'LlamaTokenizer' with 'legacy'=False. We refer to this as 'method-2' in this repo.

  2. Run KG-RAG with flexible command line args (see the sketch after this list):
    (A) To run in interactive mode: -i True (default: False)
    (B) To select the GPT model: -g gpt-4 (default: gpt-35-turbo)
    (C) To select method-2 for running Llama: -m method-2 (default: method-1)

  3. Demo videos in the README are updated to use these command line args.
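
A minimal sketch of how these flags could be wired up with argparse. Only the flag names (-i, -g, -m) and their defaults come from this release; the script name and help strings are illustrative, not the repo's actual entry point.

```python
# Illustrative only: run_kg_rag.py is a hypothetical script name.
# Example invocation: python run_kg_rag.py -i True -g gpt-4 -m method-2
import argparse

parser = argparse.ArgumentParser(description="Run KG-RAG from the command line")
parser.add_argument("-i", "--interactive", type=str, default="False",
                    help="Run in interactive mode (default: False)")
parser.add_argument("-g", "--gpt_model", type=str, default="gpt-35-turbo",
                    help="GPT model to use (default: gpt-35-turbo)")
parser.add_argument("-m", "--method", type=str, default="method-1",
                    help="Method used to load Llama (default: method-1)")
args = parser.parse_args()

print(args.interactive, args.gpt_model, args.method)
```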

KG-RAG now provides provenance for the generated biomedical text!

04 Dec 09:02

This release has two main additions:

  1. KG-RAG now extracts node context by making API calls to SPOKE-KG (see the sketch after this list).

  2. KG-RAG now provides provenance information for the generated biomedical text. The provenance comes directly from the underlying SPOKE KG, so text generated using KG-RAG is not only grounded in established knowledge but also offers insight into the source of that knowledge.
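
A minimal sketch of fetching node context over HTTP, assuming a SPOKE-style neighborhood endpoint. The base URL, path, and function name below are illustrative and may differ from what KG-RAG actually calls; only the idea of retrieving node context via an API comes from this release.

```python
# Illustrative only: the base URL and endpoint path are assumptions, not the
# repo's actual configuration.
import requests

def get_node_context(node_type: str, attribute: str, value: str) -> list:
    """Return the neighborhood (context) of a node from the knowledge graph API."""
    base_url = "https://spoke.example.org"  # placeholder base URL
    endpoint = f"/api/v1/neighborhood/{node_type}/{attribute}/{value}"
    response = requests.get(base_url + endpoint, timeout=30)
    response.raise_for_status()
    return response.json()  # edges/nodes surrounding the query node

# Example: fetch the context of a Disease node identified by its name
context = get_node_context("Disease", "name", "multiple sclerosis")
```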

First release

03 Dec 23:40
b15479c

This release has all the functionality needed to run KG-RAG locally. It allows users to ask biomedical questions to an LLM (Llama or GPT) and get answers grounded in the established knowledge of SPOKE, a biomedical knowledge graph.