Description
Solution/Feature
Context:
docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/prefill-claudes-response
anthropic.com/news/claude-2-1-prompting
I would love to be able to manually inject assistant responses during eval to perform some ablations. Ideally, I would be able to modify the messages like so:
messages = [
    {
        "role": "user",
        "content": prompt,
    },
    {
        "role": "assistant",
        "content": "According to ",
    },
]
The harness should then render the prompt up through the assistant turn without closing it, so the model continues generating from the prefilled text.
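For context, this is roughly what I do by hand today with Hugging Face transformers: recent releases expose a continue_final_message flag on apply_chat_template that renders a trailing assistant turn without an end-of-turn token. A minimal sketch (the model name is just a placeholder, and this assumes a recent transformers version):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model

messages = [
    {"role": "user", "content": "Who wrote The Origin of Species?"},
    {"role": "assistant", "content": "According to "},
]

# continue_final_message=True leaves the assistant turn open, so the
# rendered prompt ends with "According to " and no end-of-turn token.
prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    continue_final_message=True,
)
print(prompt_text)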
Possible alternatives
Adopting a custom chat template could work (a rough sketch follows), but it would be nice to see this feature integrated, as I anticipate many researchers will want easy access to it.
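For completeness, here is a sketch of that workaround under my assumptions: a ChatML-style Jinja template (illustrative only, not any model's official template) assigned to a placeholder tokenizer, which closes every turn except a trailing assistant turn:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model

# Close every message with <|im_end|> except a final assistant message,
# which is left open so the model continues from the prefilled text.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] }}"
    "{% if not (loop.last and message['role'] == 'assistant') %}"
    "{{ '<|im_end|>\n' }}"
    "{% endif %}"
    "{% endfor %}"
)

messages = [
    {"role": "user", "content": "Who wrote The Origin of Species?"},
    {"role": "assistant", "content": "According to "},
]
print(tokenizer.apply_chat_template(messages, tokenize=False))
# ...<|im_start|>assistant
# According to 

This works, but every user has to maintain their own template per model family, which is why built-in support for assistant prefill seems worth having.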