Description
Request a validator that determines whether an LLM refuses a prompt by generating an output that starts with text such as "I cannot", "I can't", or "It is illegal". A rough sketch of this check is shown below.
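A minimal sketch of the prefix-based check, assuming a hard-coded phrase list; the prefix tuple and helper name are illustrative, not an existing API:

```python
# Illustrative prefix list; a real validator would likely need a broader,
# configurable set of refusal phrases.
REFUSAL_PREFIXES = (
    "I cannot",
    "I can't",
    "It is illegal",
    "It is not legal",
)


def starts_with_refusal(response: str) -> bool:
    """Return True if the LLM response begins with a known refusal phrase."""
    normalized = response.strip().lower()
    return normalized.startswith(tuple(p.lower() for p in REFUSAL_PREFIXES))
```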
Why is this needed
If the response is a refusal, it should not be returned to the client application for display. The validator should raise a validation error that the application can handle appropriately.
Implementation details
I suppose Hugging Face provides models that could be used for refusal detection. A possible sketch follows.
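A sketch of what a model-based check might look like with the `transformers` text-classification pipeline. The model id and label name below are placeholders, not verified checkpoints:

```python
from transformers import pipeline

# Placeholder model id; any text-classification model trained to label
# refusals/rejections could be substituted here.
refusal_classifier = pipeline(
    "text-classification",
    model="some-org/refusal-detection-model",  # assumption, not a real checkpoint
)


def is_refusal(response: str, threshold: float = 0.5) -> bool:
    """Treat a predicted refusal label above the threshold as a refusal.

    The label string "REFUSAL" is an assumption and depends on the chosen model.
    """
    result = refusal_classifier(response, truncation=True)[0]
    return result["label"] == "REFUSAL" and result["score"] >= threshold
```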
End result
After an LLM generates a response, the validator is used to check whether the response is a refusal, i.e. whether it contains phrases including but not limited to "I cannot", "I can't", "It is not legal", etc.
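A sketch of the end-to-end flow, reusing the hypothetical helpers above; the exception type and fallback message are illustrative only:

```python
class RefusalValidationError(Exception):
    """Hypothetical error raised when the LLM output is detected as a refusal."""


def validate_response(response: str) -> str:
    """Combine the prefix check and the classifier sketched above; raise on refusal."""
    if starts_with_refusal(response) or is_refusal(response):
        raise RefusalValidationError("LLM refused to answer the prompt")
    return response


# Example client-side handling: a refused output is never shown to the end user.
llm_output = "I can't help with that request."
try:
    display_text = validate_response(llm_output)
except RefusalValidationError:
    display_text = "Sorry, no answer could be generated for this request."
```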