Generation of select fails on OpenAI chat mode, depending on possible options. #232

Comments
@jprafael Did you find a solution for this? Thank you!
I think you may want to test this new syntax format: {{select 'name' options=["John", "John Doe"]}}
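For reference, a minimal sketch of how that options= form might sit inside a full chat-mode program. The model name, prompt text, and variable names here are illustrative and not taken from the thread; per the next comment, this form still hits the same error when the options share a prefix.

```python
import guidance

# Assumed setup: an OpenAI chat model through guidance 0.0.x.
llm = guidance.llms.OpenAI("gpt-3.5-turbo")

# The select call uses the keyword `options=` form suggested above.
program = guidance('''
{{#system~}}
Answer with one of the allowed names only.
{{~/system}}
{{#user~}}
Who signed the letter?
{{~/user}}
{{#assistant~}}
{{select 'name' options=candidates}}
{{~/assistant}}
''', llm=llm)

out = program(candidates=["John", "John Doe"])
print(out["name"])
```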
The issue is present regardless of the method.
The problem is indeed related to prefixes. I did not find a good solution for this. The issue is that, to decide between options that share a common prefix, guidance has to query the model again after the shared prefix has already been placed in the assistant message, and that follow-up call needs partial assistant prompting, which the OpenAI chat API does not support. This means that for the example in the issue, the prefix is the text the two options have in common. The workaround that I found was to create the prompt in a way that avoids prefixes, but still gives enough context to the LLM so that the choices are valid:
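The original workaround snippet is not preserved in this copy of the thread; the following is a rough sketch of the kind of prompt restructuring described above, with all names and prompt text being illustrative. The idea is to select over labels that share no prefix and map them back to the real choices in Python.

```python
import guidance

llm = guidance.llms.OpenAI("gpt-3.5-turbo")

# Illustrative candidate answers that would otherwise share the prefix "John".
candidates = {"A": "John", "B": "John Doe"}

program = guidance('''
{{#system~}}
You will be given two candidates. Reply with the letter of the correct one.
{{~/system}}
{{#user~}}
Candidate A: {{a}}
Candidate B: {{b}}
Which candidate signed the letter?
{{~/user}}
{{#assistant~}}
{{#select 'label'}}A{{or}}B{{/select}}
{{~/assistant}}
''', llm=llm)

out = program(a=candidates["A"], b=candidates["B"])
print(candidates[out["label"]])  # map the prefix-free label back to the real answer
```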
The bug
Generation of select fails on OpenAI chat mode, depending on the possible options. It seems to be related to common prefixes between two options.
To Reproduce
This works:
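The original code block was lost from this copy of the issue; what follows is an illustrative sketch of the kind of program that works, assuming guidance 0.0.x chat syntax and hypothetical options that share no common prefix.

```python
import guidance

llm = guidance.llms.OpenAI("gpt-3.5-turbo")

# Options "Yes" and "No" have no common prefix, so a single call suffices.
program = guidance('''
{{#system~}}
Answer with one word.
{{~/system}}
{{#user~}}
Is the sky blue?
{{~/user}}
{{#assistant~}}
{{#select 'answer'}}Yes{{or}}No{{/select}}
{{~/assistant}}
''', llm=llm)

print(program()["answer"])
```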
But this doesn't:
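Again, the original snippet is missing; this is a sketch of the failing case, using the "John"/"John Doe" pair mentioned in the comments, where one option is a complete prefix of the other.

```python
import guidance

llm = guidance.llms.OpenAI("gpt-3.5-turbo")

# "John" is a complete prefix of "John Doe", which triggers the error below.
program = guidance('''
{{#system~}}
Answer with one of the allowed names only.
{{~/system}}
{{#user~}}
Who signed the letter?
{{~/user}}
{{#assistant~}}
{{#select 'name'}}John{{or}}John Doe{{/select}}
{{~/assistant}}
''', llm=llm)

print(program()["name"])
```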
It fails with:

"When calling OpenAI chat models you must generate only directly inside the assistant role! The OpenAI API does not currently support partial assistant prompting."
This also fails, even though one answer is not a complete prefix of the other:
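A sketch of this case as well, with hypothetical option values: the program is the same as the previous one except that the two options share the leading text "John " while neither is a complete prefix of the other.

```python
import guidance

llm = guidance.llms.OpenAI("gpt-3.5-turbo")

# "John Doe" and "John Smith" share the prefix "John " but neither
# is a complete prefix of the other; generation still fails.
program = guidance('''
{{#user~}}
Who signed the letter?
{{~/user}}
{{#assistant~}}
{{#select 'name'}}John Doe{{or}}John Smith{{/select}}
{{~/assistant}}
''', llm=llm)

print(program()["name"])
```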
System info (please complete the following information):
0.0.62
(from git main branch at d6b855a)