This repository has been archived by the owner on Mar 1, 2024. It is now read-only.

Simple short-form Self-RAG Pack #907

Merged: 4 commits into run-llama:main on Feb 5, 2024

Conversation

MarouaneMaatouk
Contributor

@MarouaneMaatouk MarouaneMaatouk commented Jan 28, 2024

Description

A simple short-form Self-RAG implementation using llama_cpp, adapted from the author's original code.
Paper: https://arxiv.org/abs/2310.11511
Model: https://huggingface.co/m4r1/selfrag_llama2_7b-GGUF

Fixes #8502 (run-llama/llama_index#8502)
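For context on what the pack does: a Self-RAG model interleaves its answer text with special reflection tokens (e.g. `[Retrieval]`, `[Relevant]`, `[Utility:5]`, per the paper) that a consumer must strip from the answer and score. A minimal, illustrative parser for that post-processing step (the function name and exact token set are assumptions based on the paper, not the pack's actual code):

```python
import re

# Reflection tokens emitted by Self-RAG (per the paper); the utility
# token carries a 1-5 rating usable for ranking candidate answers.
_REFLECTION_TOKENS = re.compile(
    r"\[Retrieval\]|\[No Retrieval\]|\[Relevant\]|\[Irrelevant\]"
    r"|\[Fully supported\]|\[Partially supported\]"
    r"|\[No support / Contradictory\]|\[Utility:[1-5]\]"
)


def parse_self_rag_output(text: str):
    """Strip reflection tokens and extract the utility rating, if any."""
    match = re.search(r"\[Utility:([1-5])\]", text)
    utility = int(match.group(1)) if match else None
    answer = _REFLECTION_TOKENS.sub("", text).strip()
    return answer, utility


answer, utility = parse_self_rag_output(
    "[Relevant]Paris is the capital of France.[Fully supported][Utility:5]"
)
```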

Type of Change

Please delete options that are not relevant.

  • New Loader/Tool
  • Bug fix / Smaller change
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so that we can reproduce. Please also list any relevant details of your test configuration.

  • Added new unit/integration tests
  • Added new notebook (that tests end-to-end)
  • I stared at the code and made sure it makes sense

Suggested Checklist:

  • I have added a library.json file if a new loader/tool was added
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I ran `make format; make lint` to appease the lint gods


@MarouaneMaatouk
Contributor Author

@anoopshrma could you please review this?

@anoopshrma
Collaborator

> @anoopshrma could you please review this?

Hey! Sure, will do by tomorrow. Currently away from my machine.

Collaborator

@jerryjliu jerryjliu left a comment


This is fantastic! Some comments, but afterwards excited to land and promote this.

Review comments (all resolved): llama_hub/llama_packs/self_rag/README.md (4), llama_hub/llama_packs/self_rag/base.py (2)
```python
try:
    from llama_cpp import Llama  # noqa: F401
except ImportError:
    raise ImportError(_IMPORT_ERROR_MSG)
self.llm = Llama(model_path=model_path, verbose=verbose, **model_kwargs)
```
Collaborator


We do have a LlamaCPP wrapper in LlamaIndex. Out of curiosity, is there a reason you are using the llama_cpp lib directly? (It's fine either way, just curious.)

Collaborator


Oh, never mind; it seems like it's for the logprobs.
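For readers wondering why raw log-probabilities matter here: Self-RAG scores candidate generations by comparing the probabilities the model assigns to competing critique tokens (e.g. `[Relevant]` vs `[Irrelevant]`), which the llama_cpp library exposes via the `logprobs` option on completions. A hedged sketch of that scoring step with made-up logprob values (the function name and inputs are illustrative, not the pack's API):

```python
import math


def critique_score(token_logprobs: dict, positive: str) -> float:
    """Softmax-normalize log-probabilities over a set of critique tokens
    and return the probability mass on the 'positive' token."""
    total = sum(math.exp(lp) for lp in token_logprobs.values())
    return math.exp(token_logprobs[positive]) / total


# Hypothetical logprobs for the relevance critique of one candidate passage.
score = critique_score(
    {"[Relevant]": -0.2, "[Irrelevant]": -2.3},
    positive="[Relevant]",
)
```

Candidates can then be ranked by this score, which is something the higher-level LlamaIndex LlamaCPP wrapper does not surface directly.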

Collaborator

@anoopshrma anoopshrma left a comment


Awesome @MarouaneMaatouk!

@anoopshrma
Collaborator

Hey @jerryjliu, since all the comments have been resolved, can I go ahead and merge it, or do you want to take another look?

@jerryjliu jerryjliu merged commit 20c2f59 into run-llama:main Feb 5, 2024
3 checks passed