Advanced CLI tool for automating Machine Learning (AutoML) using state-of-the-art deep learning models to apply transfer learning with multiple tuning methods and architecture modifications to pretrained models for image and text datasets, with end-to-end training for tabular and time series datasets.


DeepTune

DeepTune is a library that automates state-of-the-art deep learning for Computer Vision, Natural Language Processing, tabular, and time series tasks, supporting cross-modal applications on image, text, tabular, and time series datasets. The library is designed for use in different applied machine learning domains, including but not limited to medical imaging, natural language understanding, and time series analysis, providing users with a powerful, ready-to-use CLI tool that unlocks the full potential of their case studies through a single command.

DeepTune is primarily aimed at the undergraduate and graduate computer science student community at St. Francis Xavier University (StFX) in Nova Scotia, Canada. We aspire to see this software adopted broadly across the computer science research community worldwide.

DeepTune Demo

Images Demo: The data sample used in this demo is a subset of the Chest X-Ray Images (Pneumonia) dataset available at: Chest X-ray Dataset.

deeptune_images_demo.mp4

More cross-modal video demos can be found in the documentation's demo page.

Features

  • Fine-tuning state-of-the-art Computer Vision algorithms (ResNet, DenseNet, etc.) for image classification.
  • Fine-tuning state-of-the-art NLP (BERT, GPT-2) algorithms for text classification.
  • End-to-end training for tabular and time-series algorithms.
  • Providing PEFT with LoRA support for the implemented Computer Vision algorithms, enabling state-of-the-art models that typically require substantial computational resources to run efficiently on lower-powered devices. This reduces computational overhead and can also improve performance.
  • Leveraging fine-tuned and pretrained state-of-the-art vision and language models to generate robust knowledge representations for downstream visual and textual tasks.
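To give a sense of why the LoRA-style PEFT mentioned above is so much cheaper than full fine-tuning, here is a generic back-of-the-envelope sketch (not deeptune's internal implementation; the function name is hypothetical):

```python
# Toy illustration of a LoRA-style low-rank update: instead of training the
# full d_out x d_in weight matrix W, LoRA trains two small factors
# B (d_out x r) and A (r x d_in); the effective weight is W + B @ A.

def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Return (full fine-tuning params, LoRA trainable params) for one layer."""
    full = d_in * d_out           # every weight is trainable
    lora = rank * (d_in + d_out)  # only the two low-rank factors are trainable
    return full, lora

full, lora = lora_param_counts(d_in=768, d_out=768, rank=8)
print(full, lora)  # 589824 vs 12288 trainable parameters (~48x fewer)
```

With a typical transformer hidden size of 768 and rank 8, one layer drops from ~590K trainable weights to ~12K, which is what makes fine-tuning large vision backbones feasible on modest hardware.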

Models DeepTune Supports (Up to Date)

| Model | Training / Tuning | Task | Modality | Supported Variants |
| --- | --- | --- | --- | --- |
| ResNet | Transfer learning | Classification & Regression | Image | `resnet18`, `resnet34`, `resnet50`, `resnet101`, `resnet152` |
| DenseNet | Transfer learning | Classification & Regression | Image | `densenet121`, `densenet161`, `densenet169`, `densenet201` |
| Swin | Transfer learning | Classification & Regression | Image | `swin_t`, `swin_s`, `swin_b` |
| EfficientNet | Transfer learning | Classification & Regression | Image | `efficientnet_b0`, `efficientnet_b1`, `efficientnet_b2`, `efficientnet_b3`, `efficientnet_b4`, `efficientnet_b5`, `efficientnet_b6`, `efficientnet_b7` |
| VGGNet | Transfer learning | Classification & Regression | Image | `vgg11`, `vgg13`, `vgg16`, `vgg19` |
| ViT | Transfer learning | Classification & Regression | Image | `vit_b_16`, `vit_b_32`, `vit_l_16`, `vit_l_32`, `vit_h_14` |
| SiGLiP | Transfer learning | Classification | Image | `siglip` |
| GPT | Transfer learning | Text Classification | Text | GPT-2 |
| BERT | Transfer learning | Sentiment Analysis | Text | `bert-base-multilingual-cased` |
| GANDALF | End-to-end conventional training | Classification & Regression | Tabular | GANDALF |
| TabPFN | Both fine-tuning & conventional end-to-end training | Classification & Regression | Tabular | `tabpfn` |
| DeepAR | Conventional end-to-end training | Time Series Forecasting | Time Series | DeepAR |
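A caller would typically validate one of the variant strings from the table before loading any weights. A minimal sketch of such a lookup (the `SUPPORTED_VARIANTS` registry and `resolve_family` helper are hypothetical illustrations, not part of deeptune's actual API):

```python
# Hypothetical registry mapping a few model families to the variant strings
# listed in the table above; useful for validating CLI input early, before
# any expensive model download or construction happens.
SUPPORTED_VARIANTS = {
    "resnet": ["resnet18", "resnet34", "resnet50", "resnet101", "resnet152"],
    "densenet": ["densenet121", "densenet161", "densenet169", "densenet201"],
    "swin": ["swin_t", "swin_s", "swin_b"],
    "vgg": ["vgg11", "vgg13", "vgg16", "vgg19"],
}

def resolve_family(variant: str) -> str:
    """Return the model family for a variant string, or raise ValueError."""
    for family, variants in SUPPORTED_VARIANTS.items():
        if variant in variants:
            return family
    raise ValueError(f"Unsupported model variant: {variant!r}")

print(resolve_family("resnet50"))  # resnet
```

Failing fast on an unknown variant string gives the user an actionable error message instead of a stack trace from deep inside a model-loading routine.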

Documentation

DeepTune is under active development and maintenance, with user-friendly, comprehensive documentation for easier usage. The documentation can be accessed here.

Acknowledgments

This software package was developed as part of work done at the Medical Imaging Bioinformatics lab under the supervision of Jacob Levman at St. Francis Xavier University (StFX), Nova Scotia, Canada.

Thanks to Xuchen for providing their parameter-efficient fine-tuned Swin implementation, SwinTransformerWithPEFT.

Citation

If you find DeepTune useful, please give us a star ⭐ on GitHub for support.

Also if you find this repository helpful, please cite it as follows:

@software{DeepTune,
  author  = {Moayadeldin Hussain and John Kendall and Jacob Levman},
  title   = {DeepTune: Cutting-edge Tool automating state-of-the-art deep learning models for cross-modal applications},
  year    = {2025},
  url     = {https://github.com/moayadeldin/deeptune},
  version = {1.1.0}
}
