Complete prompt is appended at the start of my response generated by llama3 #24437
Labels: 🤖:bug, 🔌: huggingface, investigate, stale
Example Code
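A minimal sketch of a pipeline matching the description and the log keys (`question`, `chat_history`, `answer`), assuming a `HuggingFaceEndpoint` llama3 model behind a `ConversationalRetrievalChain`; the repo ID, index path, embedding model, and prompt text are illustrative, not the reporter's actual code:

```python
# Hypothetical reconstruction of the pipeline described below -- not the
# reporter's actual code. Repo ID, index path, and prompt text are assumed.
# Assumes HUGGINGFACEHUB_API_TOKEN is set in the environment.
from langchain_community.llms import HuggingFaceEndpoint
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain_core.prompts import PromptTemplate

llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed llama3 endpoint
    temperature=0.1,
    max_new_tokens=512,
)

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
vectorstore = FAISS.load_local(
    "faiss_index", embeddings, allow_dangerous_deserialization=True
)

# Prompt in the llama3 chat format, matching the template visible in the log.
prompt = PromptTemplate.from_template(
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>"
    "You are an assistant for question-answering tasks. "
    "Use the following pieces of retrieved context to answer the question."
    "<|eot_id|><|start_header_id|>user<|end_header_id|>"
    "Question: {question}\nContext: {context}"
    "<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    combine_docs_chain_kwargs={"prompt": prompt},
)

llm_response = chain.invoke(
    {"question": "how many F grade a student can have in bachelor",
     "chat_history": []}
)
print(llm_response["answer"])  # on 0.2.9 the rendered prompt is echoed here
```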
Error Message and Stack Trace (if applicable)
llm_reponse before guardrails {'question': 'how many F grade a student can have in bachelor', 'chat_history': [], 'answer': "<|begin_of_text|><|start_header_id|>system<|end_header_id|> You are an assistant for question-answering tasks.\n Use the following pieces of retrieved context to answer the question and give response from the context given to you as truthfully as you can.\n Do not add anything from you and If you don't know the answer, just say that you don't know.\n <|eot_id|>\n <|start_header_id|>user<|end_header_id|>\n Question: how many F grade a student can have in bachelor\n Context:
Description
I am building a RAG pipeline. It worked fine in my local environment, but after deploying it to a server, the prompt template was appended at the start of the LLM response. Comparing the two environments, the only difference I found was the LangChain version: the server runs langchain 0.2.9 together with langchain-community, while my local setup runs langchain 0.2.6. Has anyone faced the same issue or found a solution? Two workarounds worth trying are sketched below.
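If the cause is the endpoint returning the rendered prompt together with the generation (an assumption based on the log above, not a confirmed diagnosis), the sketch below may help; `return_full_text` is a field on `HuggingFaceEndpoint` in langchain-community 0.2.x, and `strip_prompt` is a hypothetical helper:

```python
# Sketch of possible workarounds, not a confirmed fix for this regression.
from langchain_community.llms import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model
    return_full_text=False,  # explicitly ask text-generation not to echo the prompt
)

# Fallback: strip the rendered prompt if it still leads the answer.
def strip_prompt(answer: str, rendered_prompt: str) -> str:
    if answer.startswith(rendered_prompt):
        return answer[len(rendered_prompt):].lstrip()
    return answer
```

Pinning the server back to langchain==0.2.6, the version that worked locally, is another way to isolate whether the regression was introduced in 0.2.9.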
System Info
langchain==0.2.9
langchain-cohere==0.1.9
langchain-community==0.2.7
langchain-core==0.2.21
langchain-experimental==0.0.62
langchain-text-splitters==0.2.2