Merge pull request #52 from morpheuslord/morpheuslord-patch-17
Update README.md
morpheuslord authored Jul 29, 2023
2 parents 0bebcb5 + 45815a4 commit ff5f20b
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -314,10 +314,10 @@ Once the API is acquired just add it to the `.env` file and you are good to go.
Using Llama2 is one of the best offline and free options out there. It is currently under improvement; I am working on a prompt that better incorporates a cybersecurity perspective into the AI.
I have to thank **@thisserand** for his [llama2_local](https://github.com/thisserand/llama2_local) repo and his YT video [YT_Video](https://youtu.be/WzCS8z9GqHw). They were great resources. To be frank, the llama2 code is 95% his; I just yanked the code and added Flask API functionality to it.
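
As a rough illustration of that "Flask API around llama2" idea, here is a minimal sketch of what such a wrapper could look like. It is not the repository's actual code: the `/api/chat` route, the use of the `ctransformers` loader, and the generation parameters are assumptions made for this example.

```python
# Hypothetical sketch: a local Llama 2 GGML chat model behind a small Flask API.
# Requires `pip install flask ctransformers`; route name and settings are illustrative.
from ctransformers import AutoModelForCausalLM
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the quantized chat model once at startup (downloads from Hugging Face on first run).
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GGML", model_type="llama"
)


@app.route("/api/chat", methods=["POST"])
def chat():
    # Expect a JSON body like {"prompt": "..."} and return the model's completion.
    prompt = request.get_json(force=True).get("prompt", "")
    answer = llm(prompt, max_new_tokens=512, temperature=0.7)
    return jsonify({"answer": answer})


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```

A client could then POST `{"prompt": "..."}` to `http://127.0.0.1:5000/api/chat` and read the `answer` field from the JSON response.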

- The Accuracy of the AI in offline and outside the codes test was great and had equal accuracy to openai or bard but while in code it was facing a few issues may be because of the prompting and all. I will try and fix it.
+ The Accuracy of the AI offline and outside the codes test was great and had equal accuracy to openai or bard but while in code it was facing a few issues may be because of the prompting and all. I will try and fix it.
The speed depends on your system and the GPU and CPU configs you have. Currently, it uses the `TheBloke/Llama-2-7B-Chat-GGML` model, which can be changed via the `portscanner` and `dnsrecon` files.
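
Swapping the model would likely come down to changing the identifier passed to the loader in those files; the constant name below is hypothetical and only meant to show the shape of such a change, not the real contents of `portscanner` or `dnsrecon`.

```python
# Hypothetical illustration of where the model choice could live in a scanner module;
# the variable name is made up, and the real files may be organized differently.
from ctransformers import AutoModelForCausalLM

MODEL_REPO = "TheBloke/Llama-2-7B-Chat-GGML"  # replace with another GGML chat model repo id

llm = AutoModelForCausalLM.from_pretrained(MODEL_REPO, model_type="llama")
```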

- For now the llama code and scans are handeled differently. After few tests I found out llama needs to be trained a little to opparate like how I intended it to work so it needs some time. Any suggestions on how I can do that can be added in the discussions of this repo [Discussions Link](https://github.com/morpheuslord/GPT_Vuln-analyzer/discussions). For now the output wont be a devided list of all the data instead will be an explaination of the vulnerability or issues discovered by the AI.
+ For now, the llama code and scans are handled differently. After a few tests, I found out llama needs to be trained a little to operate like how I intended it to work so it needs some time. Any suggestions on how I can do that can be added to the discussions of this repo [Discussions Link](https://github.com/morpheuslord/GPT_Vuln-analyzer/discussions). For now, the output won't be a divided list of all the data instead will be an explanation of the vulnerability or issues discovered by the AI.

### Output

