serverdaun/emotion-distilbert-fine-tuning

Fine‑Tune Trainer (Hugging Face)

Overview

  • Fine‑tunes distilbert-base-uncased on dair-ai/emotion using the Hugging Face Trainer API.
  • Provides a tiny CLI for inference; optional Weights & Biases logging.

Structure

  • src/train.py: Fine‑tune + save best to runs/emotion-distilbert/best.
  • src/infer.py: CLI inference (default uses Hub model ID).
  • notebooks/main.ipynb: Exploration/experiments.
  • pyproject.toml: Project metadata and deps.
  • .env: Optional env vars (e.g., WANDB_API_KEY).
  • runs/: Training outputs.
  • wandb/: Local W&B logs.

Setup

  • Python >= 3.12 and uv (https://docs.astral.sh/uv/) installed.
  • Install deps: uv sync (creates/uses .venv and the lockfile).
  • Run commands without activating the venv by prefixing them with uv run.
  • Optional logging: uv pip install wandb, then wandb login.
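If you keep WANDB_API_KEY in .env as suggested above, it needs to reach the environment before wandb initializes. A minimal stdlib-only loader is sketched below; the repo itself may rely on python-dotenv or similar instead, and this version only handles simple KEY=VALUE lines:

```python
import os
from pathlib import Path


def load_env(path: str = ".env") -> dict[str, str]:
    """Parse simple KEY=VALUE lines from a .env file and export them.

    Existing environment variables are never overwritten.
    """
    loaded: dict[str, str] = {}
    env_file = Path(path)
    if not env_file.exists():
        return loaded
    for line in env_file.read_text().splitlines():
        line = line.strip()
        # Skip blanks, comments, and lines without an assignment.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        key = key.strip()
        loaded[key] = value.strip().strip('"')
        os.environ.setdefault(key, loaded[key])
    return loaded
```

Calling load_env() at the top of a script makes the key visible to wandb without exporting it in your shell.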

Train

  • Command: uv run python src/train.py
  • Saves model + tokenizer to runs/emotion-distilbert/best and logs metrics.

Infer

  • From Hub: uv run python src/infer.py "I feel great today!"
  • Use local checkpoint: set MODEL_ID = "runs/emotion-distilbert/best" in src/infer.py.
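Inference along these lines can be sketched with a text-classification pipeline. This mirrors the CLI shape described above but is not the repo's actual src/infer.py; the default here points at the local checkpoint, since the repo's Hub model ID isn't given in this README:

```python
import sys

from transformers import pipeline

# Local checkpoint written by src/train.py; swap in a Hub model ID to
# load from the Hugging Face Hub instead.
MODEL_ID = "runs/emotion-distilbert/best"


def top_label(scores: list[dict]) -> str:
    """Pick the highest-scoring label from a pipeline score list."""
    return max(scores, key=lambda s: s["score"])["label"]


def classify(text: str, model_id: str = MODEL_ID) -> str:
    # top_k=None returns scores for every label, not just the best one.
    clf = pipeline("text-classification", model=model_id, top_k=None)
    return top_label(clf(text)[0])


if __name__ == "__main__":
    print(classify(sys.argv[1]))
```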
