Transformers Fine Tuner is a user-friendly Gradio interface that enables seamless fine-tuning of pre-trained transformer models on custom datasets.
- Easy Dataset Integration: Load datasets via URLs or direct file uploads.
- Model Selection: Choose from a variety of pre-trained transformer models.
- Customizable Training Parameters: Adjust epochs, batch size, and learning rate to suit your needs.
- Real-time Monitoring: Track training progress and performance metrics.
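The adjustable training parameters above could be captured in a small config object. The sketch below is illustrative only; the names (`TrainingConfig`, `validate`) are assumptions, not the app's actual API:

```python
from dataclasses import dataclass


@dataclass
class TrainingConfig:
    """Hyperparameters exposed in the UI (hypothetical names and defaults)."""
    epochs: int = 3
    batch_size: int = 16
    learning_rate: float = 5e-5

    def validate(self) -> None:
        # Guard against values that would make training fail outright.
        if self.epochs < 1:
            raise ValueError("epochs must be >= 1")
        if self.batch_size < 1:
            raise ValueError("batch_size must be >= 1")
        if not 0.0 < self.learning_rate < 1.0:
            raise ValueError("learning_rate must be in (0, 1)")
```

Validating up front keeps bad values from surfacing as an opaque failure deep inside a training loop.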
- Clone the repository:

  ```bash
  git clone https://github.com/canstralian/Transformers-Fine-Tuner.git
  cd Transformers-Fine-Tuner
  ```
- Create and activate a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```
- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
To launch the Gradio interface, run the following command:

```bash
python app.py
```
- Enter the URL of your dataset, or upload a dataset file directly.
- Select a pre-trained transformer model from the dropdown.
- Adjust the training parameters such as epochs, batch size, and learning rate.
- Click the "Submit" button to start the fine-tuning process.
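The steps above could be wired together roughly as follows. This is a hypothetical sketch, not the repository's actual `app.py`; the `fine_tune` function, the model choices, and the widget names are all illustrative:

```python
# Hypothetical sketch of the Gradio wiring; the real app.py may differ.
# Only the `fine_tune` placeholder below runs without Gradio installed.

def fine_tune(dataset_url: str, model_name: str,
              epochs: int, batch_size: int, learning_rate: float) -> str:
    """Placeholder for the training routine; returns a status string."""
    if not dataset_url:
        return "Error: a dataset URL is required."
    return (f"Fine-tuning {model_name} for {epochs} epoch(s) "
            f"(batch size {batch_size}, lr {learning_rate}) on {dataset_url}")


if __name__ == "__main__":
    import gradio as gr  # imported lazily so the helper is testable without it

    demo = gr.Interface(
        fn=fine_tune,
        inputs=[
            gr.Textbox(label="Dataset URL"),
            gr.Dropdown(["bert-base-uncased", "distilbert-base-uncased"],
                        label="Model"),  # illustrative choices
            gr.Slider(1, 10, value=3, step=1, label="Epochs"),
            gr.Slider(1, 64, value=16, step=1, label="Batch size"),
            gr.Number(value=5e-5, label="Learning rate"),
        ],
        outputs=gr.Textbox(label="Status"),
    )
    demo.launch()
```

Clicking "Submit" in an interface like this calls the wrapped function with the current widget values and displays its return value in the output box.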
- `app.py`: Main script to launch the Gradio interface.
- `data/preprocess.py`: Script to load and preprocess datasets.
- `.github/workflows/python-app.yml`: GitHub Actions workflow for the CI/CD pipeline.
If you would like to contribute to this project, please fork the repository and submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for more details.
This project uses the following libraries and frameworks:

- Gradio, for the web interface.
- Hugging Face Transformers, for the pre-trained models.
For any inquiries or support, please contact the repository owner, canstralian.