MalDataGen is an advanced Python framework for generating and evaluating synthetic tabular datasets using modern generative models, including diffusion and adversarial architectures. Designed for researchers and practitioners, it provides reproducible pipelines, fine-grained control over model configuration, and integrated evaluation metrics for realistic data synthesis.
- 📖 Overview
- Video
- Security Concerns
- Stamps Considered
- 🚀 Getting Started
- ⚙️ Installation
- 🧠 Architectures
- 🛠 Features
- 📊 Evaluation Strategy
- 📈 Metrics
- 📋 Architecture Diagrams
- 🔧 Technologies Used
- 🔗 References
MalDataGen is a modular and extensible library for generating synthetic tabular data for malware detection. It aims to:
- Support state-of-the-art generative models (GANs, VAEs, Diffusion, etc.)
- Improve model generalization by augmenting training data
- Enable fair benchmarking via reproducible evaluations (TS-TR and TR-TS)
- Provide publication-ready metrics and visualizations
It supports GPU acceleration, CSV/XLS ingestion, custom CLI scripts, and integration with academic pipelines.
We provide a visual overview of the internal architecture of each model's building blocks through five detailed figures, highlighting the main structural differences across the models. These diagrams are documented and explained in the [Overview.md](https://github.com/SBSeg25/MalDataGen/blob/2dd9eaad74da7726c130e50dbc35f95a463cbd00/Docs/Overview.md) file.
We provide a comprehensive visual overview (8 diagrams) of the MalDataGen framework in Docs/Diagrams/, covering its architecture, design principles, data processing flow, and evaluation strategies. Developed using Mermaid notation, these diagrams support understanding of both the structural and functional aspects of the system. They include high-level system architecture, object-oriented class relationships, evaluation workflows, training pipelines, metric frameworks, and data flow. Together, they offer a detailed and cohesive view of how MalDataGen enables the generation and assessment of synthetic data in cybersecurity contexts.
The following link showcases a video demonstration of the tool: https://drive.google.com/file/d/1sbPZ1x5Np6zolhFvCBWoMzqNqrthlUe3/view?usp=sharing

If that link does not work, a backup is available at: https://youtu.be/t-AZtsLJUlQ
We, the authors, consider the following stamps:
- Available artifacts (Stamp D)
- Functional artifacts (Stamp F)
- Sustainable artifacts (Stamp S)
- Reproducible experiments (Stamp R)
We provide instructions for the installation, execution, and reproduction of the experiments presented in the paper, along with information about the execution environment and dependencies.
- Python 3.8+
- pip
- (Optional) CUDA 11+ for GPU acceleration
```bash
pip install virtualenv
python3 -m venv ~/Python3venv/MalDataGen
source ~/Python3venv/MalDataGen/bin/activate
git clone https://github.com/SBSeg25/MalDataGen.git
cd MalDataGen
pip install --upgrade pip
pip install -r requirements.txt
# or
pip install .
```
We declare that local execution of the experiments raises no security concerns; however, Docker execution requires that sudo permissions be available to the Docker engine.
To run a demo of the tool, use the command listed below. This reduced demo takes around 3 minutes on an AMD Ryzen 7 5800X machine (8 cores, 64 GB RAM).

```bash
# Run the basic demo
python3 run_campaign_sbseg.py -c sf
```
Alternatively, you can run the demo in a Docker container with the following command:

```bash
# Run the basic demo
./run_demo_docker.sh
```
To reproduce the results from the paper, execute the command below. The full set of experiments takes around 7 hours on an AMD Ryzen 7 5800X machine (8 cores, 64 GB RAM).

```bash
# Run all experiments from the paper
python3 run_campaign_sbseg.py
```
Or, to execute with Docker:

```bash
# Run all experiments from the paper
./run_experiments_docker.sh
```
| Model | Description | Use Case |
|---|---|---|
| CGAN | Conditional GAN conditioned on labels or attributes | Class balancing, controlled generation |
| WGAN | Wasserstein GAN with Earth-Mover distance for improved stability | Imbalanced datasets, stable training |
| WGAN-GP | Wasserstein GAN with gradient penalty for stable training | Imbalanced datasets, complex distributions |
| Autoencoder | Latent-space learning through compression-reconstruction | Feature extraction, denoising |
| VAE | Probabilistic autoencoder with latent sampling | Probabilistic generation and imputation |
| Denoising Diffusion | Progressive noise-based generative model | Robust generation with high-quality samples |
| Latent Diffusion | Diffusion model operating in a compressed latent space | High-resolution generation, efficiency |
| VQ-VAE | Discrete latent space via quantization | Categorical and mixed-type data |
| SMOTE | Synthetic Minority Over-sampling Technique (interpolation-based) | Class imbalance in tabular data |
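Most of the models above are neural, but SMOTE, the simplest baseline, is purely interpolation-based: each synthetic sample lies on the line segment between a minority-class sample and one of its nearest minority-class neighbours. The following is an illustrative NumPy sketch of that idea, not MalDataGen's implementation (function and parameter names are our own):

```python
import numpy as np

def smote_sample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between each sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a point is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        a = rng.integers(n)                                  # random minority sample
        b = neighbours[a, rng.integers(min(k, n - 1))]       # one of its neighbours
        lam = rng.random()                                   # interpolation factor in [0, 1]
        out[i] = X_min[a] + lam * (X_min[b] - X_min[a])
    return out
```

Because every synthetic point is a convex combination of two real minority samples, the generated data never leaves the convex hull of the minority class, which is both SMOTE's strength (no implausible outliers) and its limitation (no novel modes).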
| Model | Description | Use Case |
|---|---|---|
| TVAE | Variational Autoencoder optimized for tabular data | Structured/tabular data synthesis |
| Copula | Statistical model based on dependency (copula) functions | Synthetic data with correlations |
| CTGAN | GAN with mode-specific normalization for tabular data | Mixed-type/categorical synthesis |
Legend:
- SDV: integration with the Synthetic Data Vault library.
- 📊 Cross-validation (stratified k-fold)
- ⚙️ Fully customizable model configuration
- 📈 Built-in metrics for data quality
- 🔁 Persistent models & experiment saving
- 📉 Graphing utilities for visual reports
- 📉 Clustering visualization of datasets
- 📉 Heat maps between the synthetic and real samples
- 🧪 Automated experiment pipelines
- 💾 Data export to CSV/XLS formats
Two validation approaches are supported:
- TS-TR (Train Synthetic – Test Real): measures generalization ability by training on synthetic data and testing on real data.
- TR-TS (Train Real – Test Synthetic): assesses generative realism by training on real data and testing on synthetic samples.
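The two protocols can be sketched with a generic scikit-learn classifier; this is an illustrative example under our own function names and a placeholder model, not MalDataGen's internal evaluation code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def ts_tr(X_syn, y_syn, X_real, y_real):
    """Train Synthetic - Test Real: measures the utility of synthetic data
    for training a model that must perform on real samples."""
    clf = LogisticRegression(max_iter=1000).fit(X_syn, y_syn)
    return f1_score(y_real, clf.predict(X_real))

def tr_ts(X_real, y_real, X_syn, y_syn):
    """Train Real - Test Synthetic: measures how realistic the synthetic
    samples look to a model trained on real data."""
    clf = LogisticRegression(max_iter=1000).fit(X_real, y_real)
    return f1_score(y_syn, clf.predict(X_syn))
```

High TS-TR indicates the synthetic data captures the decision-relevant structure of the real data; high TR-TS indicates the synthetic samples fall inside the distribution a real-data model expects.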
- Accuracy, Precision, Recall, F1-score, Specificity
- ROC-AUC, MSE, MAE, FNR, TNR
- Euclidean Distance, Hellinger Distance
- Log-Likelihood, Manhattan Distance
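Two of the distributional metrics above can be written in a few lines of NumPy; this is a generic sketch of the standard definitions, not the framework's own implementation:

```python
import numpy as np

def euclidean(p, q):
    """Euclidean (L2) distance between two feature vectors."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions.
    Bounded in [0, 1]: 0 for identical distributions, 1 for disjoint support."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()   # normalize histograms to distributions
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
```

The Hellinger distance is a natural choice for comparing per-feature histograms of real versus synthetic data, since unlike KL divergence it is symmetric, bounded, and well-defined when a bin is empty in one distribution.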
Comprehensive architecture documentation is available in the Docs/Diagrams/ directory, including:
- System Architecture: High-level framework overview and component relationships
- Core Class Hierarchy: Object-oriented design and inheritance structure
- Evaluation Strategy: TS-TR and TR-TS evaluation flow diagrams
- Model Training Pipeline: Complete workflow sequence from data to results
- Metrics Framework: Comprehensive evaluation metrics overview
- Data Flow Architecture: End-to-end data processing pipeline
- Generative Models Comparison: Model categories and characteristics
- Deployment Architecture: Docker and execution mode options
All diagrams are created using Mermaid format for easy maintenance and version control. They can be viewed directly in GitHub or exported for academic publications.
Tool | Purpose |
---|---|
Python 3.8+ | Core language |
NumPy, Pandas | Data processing |
TensorFlow | Model building |
Matplotlib, Plotly | Visualization |
PyTorch (planned) | Future multi-backend support |
Docker | Containerization |
Git | Version control |
Component | Minimum | Recommended |
---|---|---|
CPU | Any x86_64 | Multi-core (i5/Ryzen 5+) |
RAM | 4 GB | 8 GB+ |
Storage | 10 GB | 20 GB SSD |
GPU | Optional | NVIDIA with CUDA 11+ |
Component | Version | Notes |
---|---|---|
OS | Ubuntu 22.04+ | Linux preferred |
Python | ≥ 3.8.10 | Virtualenv recommended |
Docker | ≥ 27.2.1 | Optional but supported |
Git | Latest | Required |
CUDA | ≥ 11.0 | Optional for GPU execution |
- Kingma, D. P., & Welling, M. (2013). Auto-Encoding Variational Bayes
- Goodfellow, I. et al. (2014). Generative Adversarial Nets
- Ho, J. et al. (2020). Denoising Diffusion Probabilistic Models
- Oord, A. v. d. et al. (2017). Neural Discrete Representation Learning
- Arjovsky, M. et al. (2017). Wasserstein GAN
- Patki, N. et al. (2016). The Synthetic Data Vault
- Xu, L. et al. (2019). Modeling Tabular Data using Conditional GAN
- Mirza, M. & Osindero, S. (2014). Conditional Generative Adversarial Nets
- Gulrajani, I. et al. (2017). Improved Training of Wasserstein GANs
Distributed under the MIT License. See LICENSE for more information.