@SuperEver commented Mar 17, 2025

Below is an example of how to use GraphDocument:

import lazyllm
from lazyllm import GraphDocument  # NOTE: adjust this import to wherever GraphDocument is defined in this PR

# How raw documents are split into chunks before graph extraction
reader_conf = {
    "chunk_size": 1200,
    "chunk_overlap": 100,
}

# Where chunks, entity/relation embeddings, and the graph itself are persisted
store_conf = {
    "root_path": "/home/mnt/zhangyongchao/workspace/gitlab/LightRAG",
    "name_space": "dickens",
    "chunk_store_type": "JsonChunkStore",
    "chunk_store_config": {},
    "er_store_type": "NanoDBGraphERStore",
    "er_store_config": {"embedding_dim": 1024},
    "network_store_type": "NetworkXStore",
    "network_store_config": {},
}

# Chat model used for extraction and answering, plus the embedding model for the ER store
llm = lazyllm.OnlineChatModule(source="qwen")
embed_module = lazyllm.OnlineEmbeddingModule(source="openai", embed_url="http://***")

md_dir = "/home/mnt/zhangyongchao/workspace/gitlab/lazyllm-jinan/LazyLLM/books"
documents = GraphDocument(dataset_path=md_dir, llm=llm, embed=embed_module,
                          reader_conf=reader_conf, store_conf=store_conf)
retriever = lazyllm.Retriever(doc=documents, topk=50)

query = "Who is Bob Cratchit"
doc_node_list = retriever(query=query)  # run retrieval; returns the matched chunk nodes
content_str = "".join([node.get_content() for node in doc_node_list])  # concatenate chunk text as context

llm_answer = llm.share()  # share the base model as a separate answering module
prompt = ('You will act as an AI question-answering assistant and complete a dialogue task. '
          'In this task, you need to provide your answers based on the given context and questions.')
llm_answer.prompt(lazyllm.ChatPrompter(instruction=prompt, extra_keys=['context_str']))  # attach the RAG prompt

res = llm_answer({"query": query, "context_str": content_str})  # get answer from llm
print("RES:\n", res)
