I am interested in using your library for detecting prompt injections and jailbreaks in my LLM project. Could you please let me know if it supports languages other than English, such as German? Specifically, will it detect jailbreaks or prompt injections if my prompts are in German?
Thank you in advance!
Looks like the utility does not work very well with other languages. For example, the benign German prompt "Wie kann ich mich vor Legionellen schützen?" ("How can I protect myself from Legionella?") is flagged as high-risk:

`RiskModel(query='*', markers={'ExploitClassifier': '0.985232'}, score=2.0, passed=False, risk='high')`
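As a possible stopgap until multilingual support lands, one could detect non-English input and translate it to English before running the check. Below is a minimal sketch, assuming `langdetect` and `deep-translator` are installed; `scan_prompt` is a hypothetical placeholder for whatever detection entry point the library actually exposes, not its real API:

```python
# Workaround sketch: translate non-English prompts to English before scanning.
# Assumptions: `pip install langdetect deep-translator`; `scan_prompt` is a
# placeholder for the library's real detection call (hypothetical name).
from langdetect import detect
from deep_translator import GoogleTranslator

def scan_prompt(prompt: str):
    """Placeholder for the library's injection/jailbreak check."""
    raise NotImplementedError("replace with the actual detection call")

def scan_multilingual(prompt: str):
    # Detect the input language; translate to English first, since the
    # classifier appears to be trained primarily on English data.
    if detect(prompt) != "en":
        prompt = GoogleTranslator(source="auto", target="en").translate(prompt)
    return scan_prompt(prompt)

# Example: the German prompt from above would be translated before scanning.
# scan_multilingual("Wie kann ich mich vor Legionellen schützen?")
```

Note that machine translation can distort adversarial payloads, so this is best treated as a mitigation for false positives on benign prompts rather than a complete multilingual solution.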