
Conversation

zachschillaci27
Contributor

As referenced in other issues (see #814, #1026), the current implementation of LLMMathChain allows prompt injection attacks that can execute arbitrary code via Python's built-in exec function. As a quick patch, I have modified the chain to use the slightly safer built-in eval function and adjusted the prompt template accordingly. In my quick testing this doesn't appear to hurt the chain's mathematical ability, and it prevents the exploit outlined in #814.
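
For illustration, here is a minimal sketch of the difference (the helper names are hypothetical, not the actual diff): exec runs arbitrary statements, while eval accepts only a single expression, so injected imports or system calls fail at parse time. eval is still not a true sandbox, hence "slightly safer".

```python
def run_with_exec(code: str) -> str:
    # exec executes arbitrary statements, e.g. "import os; os.system(...)",
    # so an injected prompt can run anything.
    local_vars: dict = {}
    exec(code, {}, local_vars)
    return str(local_vars.get("answer"))


def run_with_eval(expression: str) -> str:
    # eval accepts only a single expression; statements such as imports
    # raise SyntaxError. Emptying __builtins__ also blocks __import__,
    # though attribute-chain escapes remain possible.
    return str(eval(expression, {"__builtins__": {}}, {}))


print(run_with_eval("37593 * 67"))  # 2518731
```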

NB: This is intended as a quick patch to mitigate the current security risks. Future developments, as in #1055, should further help with solving this vulnerability and others.

@bborn
Contributor

bborn commented Feb 18, 2023

Here's another attempt using RestrictedPython:

#1134

Please review and let me know what you think.
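
For anyone skimming, a minimal sketch of the RestrictedPython approach (the details in #1134 may differ): compile the model's expression with compile_restricted, which rejects disallowed syntax at compile time, then evaluate it against a vetted set of builtins.

```python
from RestrictedPython import compile_restricted, safe_globals


def restricted_eval(expression: str):
    # compile_restricted rejects disallowed constructs (imports, exec,
    # direct dunder access, ...) before any code runs.
    byte_code = compile_restricted(expression, filename="<llm_math>", mode="eval")
    # safe_globals exposes only a restricted set of builtins.
    return eval(byte_code, safe_globals, {})


print(restricted_eval("37593 * 67"))  # 2518731
```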

@dev2049
Contributor

dev2049 commented May 9, 2023

Stale; the chain uses numexpr now.
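
For reference, numexpr parses a restricted arithmetic grammar instead of executing Python code, which sidesteps the injection problem for math expressions. A minimal sketch of that pattern (the constants allowed in local_dict are illustrative):

```python
import math

import numexpr


def evaluate_math(expression: str) -> str:
    # numexpr compiles a small arithmetic language; imports, attribute
    # access, and functions outside its whitelist are rejected.
    result = numexpr.evaluate(
        expression.strip(),
        global_dict={},  # expose no outside names
        local_dict={"pi": math.pi, "e": math.e},  # illustrative constants
    )
    return str(result)


print(evaluate_math("37593 * 67"))  # 2518731
```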

dev2049 closed this May 9, 2023

