AI-powered cybersecurity chatbot designed to provide helpful and accurate answers to your cybersecurity-related queries and to perform code analysis and scan analysis.
A generalized framework for subspace tuning methods in parameter-efficient fine-tuning.
The official implementation of InstructERC.
Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and a minimal learning curve.
A tool for testing different large language models without writing code.
Fine-grained Emotion Classification (FEC) using structured feature fusion from LLaMA-2-7B-Chat, BERT-large, and RoBERTa-large.
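As a rough illustration of structured feature fusion, here is a minimal PyTorch sketch that projects and concatenates pooled features from several pretrained encoders before a classification head. The input sizes match the usual hidden dimensions of LLaMA-2-7B (4096), BERT-large (1024), and RoBERTa-large (1024), but the projection size, fusion strategy, and number of emotion classes are illustrative assumptions, not the project's exact design.

```python
# Sketch of structured feature fusion: pooled features from several pretrained
# encoders are projected to a common size and concatenated before a classifier.
# Dimensions and the fusion head below are illustrative assumptions only.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dims=(4096, 1024, 1024), proj_dim=256, num_classes=7):
        # dims: feature sizes for LLaMA-2-7B, BERT-large, RoBERTa-large
        super().__init__()
        self.projs = nn.ModuleList(nn.Linear(d, proj_dim) for d in dims)
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(proj_dim * len(dims), num_classes),
        )

    def forward(self, features):
        # features: list of pooled [batch, dim] tensors, one per encoder
        fused = torch.cat([p(f) for p, f in zip(self.projs, features)], dim=-1)
        return self.head(fused)

# Example with random stand-in features for a batch of 2 utterances.
model = FusionClassifier()
feats = [torch.randn(2, 4096), torch.randn(2, 1024), torch.randn(2, 1024)]
print(model(feats).shape)  # torch.Size([2, 7])
```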
🦖 X—LLM: Simple & Cutting Edge LLM Finetuning
This project implements RAG on Jetson and supports TXT and PDF document formats. It uses MLC for 4-bit quantization of the Llama2-7b model, uses ChromaDB as the vector database, and ties these components together with Llama_Index. I hope you like this project.
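For readers unfamiliar with that stack, here is a minimal sketch of this kind of pipeline, assuming a recent llama-index release with its Chroma integration installed. The document folder, collection name, and default embedding/LLM backends are placeholders; the actual project swaps in a 4-bit MLC-quantized Llama2-7b on Jetson.

```python
# Minimal RAG sketch: index local TXT/PDF files into ChromaDB and query them.
# Assumes `llama-index`, `llama-index-vector-stores-chroma`, and `chromadb` are
# installed. "./docs" and the collection name are placeholders. Note that
# llama-index falls back to OpenAI models unless a local LLM and embedding
# model are configured via Settings.
import chromadb
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.chroma import ChromaVectorStore

# Load TXT/PDF documents from a local folder.
documents = SimpleDirectoryReader("./docs").load_data()

# Use a persistent Chroma collection as the vector store.
chroma_client = chromadb.PersistentClient(path="./chroma_db")
collection = chroma_client.get_or_create_collection("rag_demo")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build the index and answer a question from the retrieved context.
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
query_engine = index.as_query_engine()
print(query_engine.query("Summarize the key points of these documents."))
```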
This repository features an example of how to utilize the xllm library. Included is a solution for a common type of assessment given to LLM engineers.
YouTube API implementation with Meta's Llama 2 to analyze comments and sentiments
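A rough sketch of that flow, assuming the google-api-python-client and transformers packages: fetch top-level comments with the YouTube Data API, then ask a Llama 2 chat model to label each one. The API key, video ID, and prompt wording are placeholders, not the project's own code.

```python
# Sketch: fetch top-level YouTube comments, then classify sentiment with Llama 2.
# "YOUR_API_KEY" and "VIDEO_ID" are placeholders; the prompt format is assumed.
from googleapiclient.discovery import build
from transformers import pipeline

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")
resp = youtube.commentThreads().list(
    part="snippet", videoId="VIDEO_ID", maxResults=20, textFormat="plainText"
).execute()
comments = [item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
            for item in resp["items"]]

llm = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf",
               device_map="auto")
for c in comments:
    prompt = (f"Classify the sentiment of this YouTube comment as positive, "
              f"negative, or neutral:\n{c}\nSentiment:")
    out = llm(prompt, max_new_tokens=8, do_sample=False)[0]["generated_text"]
    print(c[:60], "->", out.split("Sentiment:")[-1].strip())
```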
Some experiments with activation steering in LLMs
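As a quick illustration of the technique, a sketch of additive activation steering via a PyTorch forward hook on a Llama-style model: a steering vector is built from the difference of activations on two contrasting prompts and added to one layer's output during generation. The layer index, scale, and contrast prompts are arbitrary choices, not taken from these experiments.

```python
# Sketch of additive activation steering on a HuggingFace Llama-style model.
# Layer index, scale, and contrast prompts are illustrative choices only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")

LAYER, SCALE = 14, 4.0

def last_token_resid(prompt):
    # Residual-stream activation after decoder layer LAYER for the last token.
    with torch.no_grad():
        out = model(**tok(prompt, return_tensors="pt").to(model.device),
                    output_hidden_states=True)
    return out.hidden_states[LAYER + 1][0, -1, :]

# Steering vector = difference of activations on two contrasting prompts.
steer = last_token_resid("I love this") - last_token_resid("I hate this")

def hook(module, inputs, output):
    # Add the steering vector to the layer's hidden states; handle both
    # tuple and plain-tensor outputs across transformers versions.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + SCALE * steer.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(hook)
try:
    ids = tok("The movie was", return_tensors="pt").to(model.device)
    print(tok.decode(model.generate(**ids, max_new_tokens=30)[0],
                     skip_special_tokens=True))
finally:
    handle.remove()
```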
This is the backend of a Mental Health Assistant that can assess a patient's mental state and suggest treatments for mental health problems.
Professor Codephreak: a local language model's pursuit of agency. Upgrades are occurring in this repo; the original codephreak is historically stored at https://github.com/Professor-Codephreak/automind/
An AI chatbot implementing the RAG technique with the Meta-Llama2-7b Large Language Model, using LangChain and the Pinecone vector database.
IntelliCodeEx is a code explanation tool powered by an LLM (Large Language Model) that utilizes the open-source Llama-2 7B GGML quantized model. It's designed to provide intelligent explanations for various programming languages.
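A minimal sketch of running a GGML-quantized Llama-2 7B for code explanation, assuming the ctransformers package and TheBloke's commonly used quantized weights; the exact model file and prompt template used by IntelliCodeEx may differ.

```python
# Sketch: load a quantized Llama-2 7B GGML model with ctransformers and
# ask it to explain a code snippet. The model repo/file and prompt template
# are assumptions, not necessarily what IntelliCodeEx uses.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GGML",              # assumed quantized weights
    model_file="llama-2-7b-chat.ggmlv3.q4_0.bin",  # assumed 4-bit variant
    model_type="llama",
    max_new_tokens=256,
)

snippet = "def fib(n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)"
prompt = f"[INST] Explain what the following Python code does:\n{snippet} [/INST]"
print(llm(prompt))
```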
Examination of whether LLMs can maintain consistency over extended, repeated text generation for 10 medical personas. Five novel plausibility metrics are proposed, along with an ontology of common LLM errors.
📜 Briefly utilizes open-source LLMs with text embeddings and vector stores to chat with your documents