[Question]: Generic summary in response to local LLM prompts #740
Comments
I have what seems to be the same issue. In #743 it was concluded that the cause is context length, but I do not agree. I am using `llama3.1:latest` (8b) and getting generic or odd output, whereas before I was getting great output. I am not sure what changed, and I would also appreciate it if anyone can point me to logging that would help with troubleshooting.

I am pulling a YouTube transcript (`yt -t`, aliased to `ytt`) and using a custom (smaller) modified `extract_wisdom` prompt. I often get "As a {Assigned_Role} I will {Follow_Instructions}" even when I explicitly instruct it not to talk about itself that way. I also often get "It [seems|looks] like you've provided a transcript of...". What is truly odd is that I got a strange reference to there not being a product, yet "product" appears nowhere in the prompt or the transcript. I can tell that my prompt is being delivered, because the output mentions the role I assigned it (Wisdom Automaton), and yet the output is still this odd. I am using a local Ollama server on my desktop, but running Fabric on my laptop.

Command:
Output:
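For reference, the pipeline looks roughly like this; the flag names and the pattern name below are assumptions based on my description above, not the exact command I ran:

```bash
# Rough sketch of the pipeline described above (not the exact command).
# The transcript is pulled with the yt helper (aliased to ytt) and piped into a
# custom extract_wisdom-style pattern running against a local llama3.1 model.
ytt "https://www.youtube.com/watch?v=VIDEO_ID" \
  | fabric --pattern extract_wisdom_custom --model llama3.1:latest

# fabric runs on the laptop and points at the Ollama server on the desktop;
# how that remote host is configured depends on the fabric version, so it is
# not shown here.
```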
What is your question?
Similar to #663, #514
I'm getting inconsistent responses to various prompts I send to local LLMs. For example, I use `extract_wisdom` often; however, sometimes I get a generic summary instead of the much more thorough response that's typical of this prompt. Some days it works great, other days it's incredibly frustrating to work with. It has been suggested that the prompt formatting is different for the llama models, but the same thing happens with gemma models in my testing, albeit intermittently (for both models), so I'm not certain this is the root cause.

Here's an example of output from `llama3`:

To contrast, here's an output from the exact same prompt against the `claude-3-5-sonnet-20240620` model:

Is there logging available to see what exactly is happening, so I can track down what is potentially causing this issue?
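In the meantime, one check I can imagine (the endpoint and fields below are just the standard Ollama HTTP API, and the prompt text is a placeholder rather than my actual pattern) would be to send the rendered prompt straight to the Ollama server and compare its answer with what fabric returns:

```bash
# Send the same system section + input directly to the local Ollama server
# and compare the result with fabric's output.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "SYSTEM SECTION OF THE PATTERN\n\nINPUT TRANSCRIPT",
  "stream": false
}'

# Ollama's own server logs can also be made more verbose, which should show
# incoming requests; OLLAMA_DEBUG=1 comes from Ollama's troubleshooting docs.
OLLAMA_DEBUG=1 ollama serve
```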