AV evasion, bypassav, AV-evasion framework, Nim, shellcode: a shellcode loader written in Nim.
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
All my source code (repos) for red teaming, pentesting, and blue teaming.
A simple Windows handle hijacker, with a nod to Apxaey for the inspiration.
Artificially inflates a given binary so that it exceeds common EDR file-size scanning limits, which can be used to bypass some EDR products.
Transmits a Cobalt Strike beacon (shellcode) over a self-built DNS channel to evade AV and other detection.
Public Code for ICS Evasion Attack Generation
URL / IP / email defanging with JavaScript: make IoCs harmless.
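To illustrate the idea (in Python rather than the repository's JavaScript, and with substitution rules that are common conventions rather than the project's exact ones), a minimal defanging sketch:

```python
import re

def defang(ioc: str) -> str:
    """Make a URL, IP, or email address non-clickable for safe sharing.
    The substitutions (hxxp, [.], [@]) are common conventions, not
    necessarily the ones used by the repository above."""
    out = re.sub(r"^http", "hxxp", ioc, flags=re.IGNORECASE)
    out = out.replace("://", "[://]")
    out = out.replace(".", "[.]")
    out = out.replace("@", "[@]")
    return out

print(defang("http://198.51.100.7/payload"))  # hxxp[://]198[.]51[.]100[.]7/payload
print(defang("alert@example.com"))            # alert[@]example[.]com
```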
📄 [Talk] OFFZONE 2022 / ODS Data Halloween 2022: black-box attacks on ML models using open-source tools.
This project compares the performance of K-Nearest Neighbors, Support Vector Machine, and Decision Tree models for detecting malicious PDF files, with an emphasis on optimizing model performance and analyzing evasion techniques. It provides a comprehensive overview of machine learning for malicious PDF detection and its potential vulnerabilities.
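As a rough illustration of that comparison workflow, a scikit-learn sketch is shown below; the feature vectors and labels are synthetic placeholders, not the project's actual PDF features:

```python
# Hypothetical comparison of KNN, SVM, and Decision Tree classifiers.
# In practice X would hold structural features extracted from PDF files
# (object counts, presence of JavaScript, etc.); synthetic data stands in here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # placeholder feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder benign/malicious labels

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", C=1.0),
    "DecisionTree": DecisionTreeClassifier(max_depth=5),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```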
Generation of adversarial examples for ML-based malware detectors through the use of Genetic Algorithms.
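A minimal sketch of that general approach, assuming a toy surrogate detector and a synthetic binary feature space (not the repository's actual features, fitness function, or functionality constraints):

```python
# Toy genetic-algorithm evasion sketch against a surrogate detector.
# A real attack would also constrain mutations so the sample keeps working;
# everything here is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Train a surrogate "malware detector" on synthetic binary feature vectors.
X = rng.integers(0, 2, size=(400, 30))
y = (X[:, :5].sum(axis=1) >= 3).astype(int)    # 1 = "malicious"
detector = LogisticRegression(max_iter=1000).fit(X, y)

def fitness(sample: np.ndarray) -> float:
    # Lower malicious probability = more evasive.
    return detector.predict_proba(sample.reshape(1, -1))[0, 1]

def mutate(sample: np.ndarray, n_flips: int = 2) -> np.ndarray:
    child = sample.copy()
    idx = rng.choice(len(child), size=n_flips, replace=False)
    child[idx] = 1 - child[idx]                # flip a few features
    return child

original = np.ones(30, dtype=int)              # a clearly "malicious" sample
population = [mutate(original) for _ in range(50)]

for generation in range(30):
    population.sort(key=fitness)
    survivors = population[:10]                # keep the most evasive candidates
    population = survivors + [
        mutate(survivors[rng.integers(len(survivors))]) for _ in range(40)
    ]

best = min(population, key=fitness)
print("original score:", fitness(original), "best evolved score:", fitness(best))
```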
The n-Values Time Series Attack (nVITA) is a sparse, indirect, black-box evasion attack that aims to achieve an adversarial goal (such as forcing a certain model output) on time series forecasting (TSF) models by altering n values in an input time series. This repository is based on PyTorch and also contains implementations of FGSM and BIM for TSF models.
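The following is not nVITA itself (which is black-box); it is only a white-box, gradient-guided sketch of the underlying idea of restricting a perturbation to n positions of a time series, using a toy PyTorch forecaster:

```python
# Sparse, gradient-guided perturbation of n values in a time series.
# Toy model and data; illustrative only, not the repository's algorithm.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(24, 16), nn.ReLU(), nn.Linear(16, 1))  # toy TSF model
series = torch.randn(1, 24, requires_grad=True)   # 24 past values -> 1 forecast
target = torch.tensor([[5.0]])                    # adversarial goal for the forecast

loss = nn.functional.mse_loss(model(series), target)
loss.backward()

n = 3                                             # only n values may change
grad = series.grad[0]
idx = grad.abs().topk(n).indices                  # most influential positions
perturbed = series.detach().clone()
perturbed[0, idx] -= 0.5 * grad[idx].sign()       # step toward the adversarial goal

print("changed positions:", idx.tolist())
```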
An Evasion Attack against Stacked Capsule Autoencoder
Collecting flags by evading, poisoning, stealing, and fooling AI/ML.
Training, inference, and evaluation of a speaker identification and verification model are carried out, and evasion attacks (FGSM, PGD) are performed.
A university project for the AI4Cybersecurity class.
Two white-box evasion attacks (FGSM and PGD) on a LeNet-5 model trained on Fashion-MNIST.
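For reference, the core of a one-step FGSM evasion attack is only a few lines; the sketch below uses a small placeholder convolutional network and random tensors rather than the exact LeNet-5 / Fashion-MNIST setup from that project:

```python
# Minimal FGSM sketch in PyTorch; model and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, 10)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def fgsm(model, x, y, epsilon=0.1):
    """One-step FGSM: move x in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

model = TinyNet().eval()
x = torch.rand(4, 1, 28, 28)      # stand-in for Fashion-MNIST images
y = torch.randint(0, 10, (4,))    # stand-in labels
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())    # perturbation bounded by epsilon
```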
The Machine Learning Security Evasion Competition (MLSEC) 2022 took place from August 12th to September 23rd, 2022 and was organized by Adversa AI, CUJO AI, and Robust Intelligence. Here I explain the method I used to bypass the machine learning models produced for this competition.