
Adversarial Attacks on Network Traffic Classifiers

Overview

Ensuring the security of machine learning (ML) classifiers is essential for institutions that rely on automated decision-making systems to prevent cyberattacks and fraud.
This project investigates adversarial attacks on ML models that classify network traffic data — a task commonly used in cybersecurity and traffic management.

Adversarial attacks craft adversarial examples: inputs that are only slightly modified, yet cause the model to make incorrect predictions that benefit the attacker, while the modifications remain small enough to keep the inputs credible.

The work explores:

  • The impact of adversarial attacks on tabular data classifiers
  • The robustness of different ML models (Random Forest, Linear SVC, Neural Networks)
  • The use of ART (Adversarial Robustness Toolbox) to generate adversarial examples
  • Countermeasures such as adversarial training to improve model robustness

Project Structure

1. Data Acquisition & Preprocessing
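A minimal sketch of this step is shown below. It assumes a CSV of network-flow features with a categorical "label" column; the file name, column names, and split parameters are illustrative and may differ from those used in the project.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, LabelEncoder

# Hypothetical dataset file; the project's actual source may differ.
df = pd.read_csv("network_traffic.csv")

# Drop duplicate rows and rows with missing values.
df = df.drop_duplicates().dropna()

# Separate features from the target and encode class labels as integers.
X = df.drop(columns=["label"]).to_numpy()
y = LabelEncoder().fit_transform(df["label"])

# Hold out a test set, then scale features (fit the scaler on training data only).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
```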

2. Exploratory Data Analysis

  • Visualizations: histograms, scatter plots, box plots, correlation matrices
  • Dimensionality reduction via PCA and t-SNE
  • Unsupervised clustering
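The sketch below illustrates this stage with scikit-learn: 2-D projections via PCA and t-SNE, plus k-means clustering. It assumes the scaled X_train from the preprocessing step; the subsample size and number of clusters are illustrative, not the values used in the report.

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

# Linear projection onto the first two principal components.
X_pca = PCA(n_components=2).fit_transform(X_train)

# Non-linear embedding; t-SNE is slow, so a subsample is often used in practice.
X_tsne = TSNE(n_components=2, random_state=42).fit_transform(X_train[:2000])

# Unsupervised clustering on the scaled feature space.
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X_train)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(X_pca[:, 0], X_pca[:, 1], c=clusters, s=5)
axes[0].set_title("PCA (colored by k-means cluster)")
axes[1].scatter(X_tsne[:, 0], X_tsne[:, 1], s=5)
axes[1].set_title("t-SNE")
plt.tight_layout()
plt.show()
```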

3. Supervised Learning

  • Classifiers: Random Forest, Linear SVC, Neural Networks
  • Hyperparameter tuning through cross-validation
  • Evaluation metrics: Accuracy, Precision, Recall, F1-Score
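As a sketch of this step, the snippet below tunes a Random Forest with cross-validated grid search and reports the listed metrics. The parameter grid and scoring choice are illustrative; the Linear SVC and neural-network pipelines in the project follow the same pattern.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Illustrative hyperparameter grid, tuned via 5-fold cross-validation.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 20]}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,
    scoring="f1_macro",
    n_jobs=-1,
)
search.fit(X_train, y_train)
best_rf = search.best_estimator_

# Evaluate on the held-out test set.
y_pred = best_rf.predict(X_test)
precision, recall, f1, _ = precision_recall_fscore_support(y_test, y_pred, average="macro")
print(f"accuracy={accuracy_score(y_test, y_pred):.3f} "
      f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```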

4. Adversarial Attack Evaluation

  • Random and feature-specific noise applied to test data
  • Adversarial attacks implemented using FGSM and PGD from the Adversarial Robustness Toolbox (ART)
  • Measurement of robust accuracy under different perturbation levels (ε)
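The sketch below shows how FGSM and PGD from ART can be run against a small PyTorch classifier and how robust accuracy can be measured across ε values. The network architecture, training schedule, and ε values are illustrative, not those used in the report; X_train/X_test are assumed to be scaled NumPy arrays from the preprocessing step.

```python
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod, ProjectedGradientDescent

n_features = X_train.shape[1]
n_classes = len(np.unique(y_train))

# Small illustrative feed-forward network on tabular features.
model = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, n_classes),
)
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(n_features,),
    nb_classes=n_classes,
)
classifier.fit(X_train.astype(np.float32), y_train, batch_size=128, nb_epochs=20)

def robust_accuracy(x_eval):
    """Accuracy of the classifier on a (possibly perturbed) test set."""
    preds = np.argmax(classifier.predict(x_eval), axis=1)
    return (preds == y_test).mean()

# Robust accuracy under increasing L-infinity perturbation budgets.
x_test32 = X_test.astype(np.float32)
for eps in (0.05, 0.1, 0.2):
    fgsm = FastGradientMethod(estimator=classifier, eps=eps)
    pgd = ProjectedGradientDescent(estimator=classifier, eps=eps, eps_step=eps / 10, max_iter=40)
    print(f"eps={eps}: "
          f"FGSM acc={robust_accuracy(fgsm.generate(x=x_test32)):.3f}, "
          f"PGD acc={robust_accuracy(pgd.generate(x=x_test32)):.3f}")
```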

5. Countermeasures

  • Adversarial training to enhance robustness
  • Comparison between clean and adversarial accuracies
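A minimal countermeasure sketch follows, using ART's AdversarialTrainer to mix PGD examples into training and then comparing clean and robust accuracy. It re-uses the classifier and helper from the attack sketch above; the mixing ratio, ε, and epoch count are illustrative.

```python
from art.defences.trainer import AdversarialTrainer

# Adversarially train the classifier: half of each batch is replaced by PGD examples.
pgd = ProjectedGradientDescent(estimator=classifier, eps=0.1, eps_step=0.01, max_iter=40)
trainer = AdversarialTrainer(classifier, attacks=pgd, ratio=0.5)
trainer.fit(X_train.astype(np.float32), y_train, batch_size=128, nb_epochs=20)

# Compare clean accuracy and robust accuracy after adversarial training.
clean_acc = robust_accuracy(X_test.astype(np.float32))
adv_acc = robust_accuracy(pgd.generate(x=X_test.astype(np.float32)))
print(f"clean accuracy={clean_acc:.3f}, robust accuracy={adv_acc:.3f}")
```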

Technologies Used

  • Python 3
  • scikit-learn
  • PyTorch
  • Adversarial Robustness Toolbox (ART)
  • Pandas / NumPy
  • Matplotlib / Seaborn

Report

For detailed explanations, methodology, and results, please refer to the full report:
👉 Read the complete report here

