
# CNN-Transformer for Active Cell Balancing with Proximal Policy Optimization

## Overview

This repository contains the implementation of a reinforcement learning agent for active cell balancing. The agent utilizes the Proximal Policy Optimization (PPO) algorithm with a custom feature extractor that combines a 1D Convolutional Neural Network (CNN) for local feature extraction and a Transformer for modeling global dependencies among battery cells. This architecture aims to capture both short-range correlations and long-range interactions within the battery pack to achieve effective balancing.

## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Usage](#usage)
- [CNN-Transformer Architecture](#cnn-transformer-architecture)
- [License](#license)
- [Contributing](#contributing)
- [Citing](#citing)

## Features

- **Environment:** the battery cell balancing environment is maintained in a companion repository: https://github.com/messlem99/Battery_Cell_Balancing
- **CNN-Transformer Feature Extractor:** combines a 1D Convolutional Neural Network (1D-CNN) for local feature extraction with a Transformer encoder for modeling global dependencies.
- **Custom PPO Policy:** a tailored PPO policy integrates the CNN-Transformer extractor for enhanced decision making.
- **Logging and Checkpointing:** uses TensorBoard for monitoring training metrics and callback-based checkpoint saving.
- **End-to-End Training Pipeline:** supports vectorized environments and optimized hyperparameters for efficient PPO training.

## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/messlem99/CNN-Transformer.git
   cd CNN-Transformer
   ```
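
2. Install dependencies. The README does not pin a package list; the training code builds on PyTorch and Stable-Baselines3, so a typical setup might look like this (the exact package set is an assumption, defer to any requirements file in the repository):

   ```bash
   pip install torch stable-baselines3 gymnasium tensorboard
   ```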

## Usage

1. **Training the model:** to train the PPO model with the CNN-Transformer feature extractor, first make sure the battery cell balancing environment (linked under Features) is installed and importable. A minimal training sketch follows; the module paths, the class names `BatteryCellBalancingEnv` and `CNNTransformerExtractor` (sketched in the architecture section below), and all hyperparameters are illustrative assumptions, not the repository's actual values.
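
```python
# Training sketch: custom CNN-Transformer extractor plugged into PPO.
# All names, import paths, and hyperparameters below are assumptions.
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import CheckpointCallback
from stable_baselines3.common.env_util import make_vec_env

from battery_env import BatteryCellBalancingEnv   # hypothetical module/class
from extractor import CNNTransformerExtractor     # hypothetical module/class

# Vectorized environments, as listed under Features.
env = make_vec_env(BatteryCellBalancingEnv, n_envs=4)

# Callback-based checkpoint saving.
checkpoint_cb = CheckpointCallback(
    save_freq=10_000, save_path="./checkpoints/", name_prefix="ppo_cnn_transformer"
)

model = PPO(
    "MlpPolicy",                       # the custom extractor replaces the default feature stack
    env,
    policy_kwargs=dict(
        features_extractor_class=CNNTransformerExtractor,
        features_extractor_kwargs=dict(features_dim=128),
    ),
    tensorboard_log="./tb_logs/",      # TensorBoard monitoring
    verbose=1,
)
model.learn(total_timesteps=1_000_000, callback=checkpoint_cb)
model.save("ppo_cnn_transformer")
```

Training curves can then be inspected with `tensorboard --logdir ./tb_logs/`.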

## CNN-Transformer Architecture

The architecture integrates two key modules, followed by an aggregation stage (a code sketch follows the list):

1. **1D-CNN for Local Feature Extraction:**
   - Input: historical per-cell features (voltage and SOC) arranged as a 1D sequence.
   - Operation: two convolutional layers with LeakyReLU activations, layer normalization, and dropout capture local patterns among adjacent cells.
2. **Transformer Encoder for Global Dependency Modeling:**
   - Reshaping: the CNN output is reshaped so that each cell acts as a token.
   - Attention mechanism: a multi-head self-attention Transformer encoder captures long-range dependencies across the battery pack.
   - Global pooling: averages features across cells to obtain a pack-level representation.
3. **Feature Aggregation and Final Layers:**
   - Concatenation: combines the global features with derived features (including current load).
   - Fully connected layers: process the combined vector to produce the final feature representation for the PPO policy.
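
The sketch below composes these three stages as a Stable-Baselines3 feature extractor. The observation layout (two per-cell channels, voltage and SOC, flattened ahead of pack-level derived features such as the current load), the cell count, and all layer sizes are assumptions for illustration, not the repository's actual configuration.

```python
# Sketch of the CNN-Transformer feature extractor described above.
# Observation layout, cell count, and layer sizes are assumptions.
import torch
import torch.nn as nn
from gymnasium import spaces
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor


class CNNTransformerExtractor(BaseFeaturesExtractor):
    def __init__(self, observation_space: spaces.Box, features_dim: int = 128,
                 n_cells: int = 16, n_channels: int = 2):
        super().__init__(observation_space, features_dim)
        self.n_cells, self.n_channels = n_cells, n_channels
        d_model = 64

        # 1) 1D-CNN over the cell sequence: local patterns among adjacent cells.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(),
            nn.Conv1d(32, d_model, kernel_size=3, padding=1),
            nn.LeakyReLU(),
        )
        self.norm = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(0.1)

        # 2) Transformer encoder: each cell becomes one token.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

        # 3) Head over pooled global features concatenated with the
        #    derived (pack-level) features, e.g. current load.
        n_derived = observation_space.shape[0] - n_cells * n_channels
        self.head = nn.Sequential(
            nn.Linear(d_model + n_derived, features_dim),
            nn.LeakyReLU(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Assumed layout: per-cell channels first, derived features last.
        per_cell = obs[:, : self.n_cells * self.n_channels]
        derived = obs[:, self.n_cells * self.n_channels:]
        x = per_cell.view(-1, self.n_channels, self.n_cells)   # (B, C, cells)
        x = self.cnn(x).permute(0, 2, 1)                       # (B, cells, d_model)
        x = self.dropout(self.norm(x))
        x = self.encoder(x)                                    # global dependencies
        pooled = x.mean(dim=1)                                 # pack-level representation
        return self.head(torch.cat([pooled, derived], dim=1))
```

Passing this class via `policy_kwargs` (as in the Usage example) replaces the default MLP feature stack while leaving PPO's policy and value heads intact.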

## License

This project is licensed under the MIT License. See the LICENSE file for details.

## Contributing

Contributions and enhancements are welcome. To get started:

- Fork the repository and create a feature branch.
- Submit a pull request for review.

## Citing

To cite this project in publications:

```bibtex
@misc{CNN-Transformer2025,
  author       = {Abdelkader Messlem and Youcef Messlem and Ahmed Safa},
  title        = {Hybrid Convolutional Neural Network with Transformer architecture for feature extraction within a Proximal Policy Optimization (PPO) RL framework},
  year         = {2025},
  howpublished = {\url{https://github.com/messlem99/CNN-Transformer}},
}
```

