This repository contains Jupyter notebooks for the course "Hugging Face in 4 Hours" by Sinan Ozdemir. Published by Pearson, the course covers effective best practices and industry case studies in using Large Language Models (LLMs) from Hugging Face.
Hugging Face is the world's largest hub for modern AI models and lets anyone use, train, and deploy these models with ease! This course is a gateway to mastering Hugging Face's tools for NLP, offering an inclusive curriculum for non-developers and developers alike. With a spotlight on interactive learning and practical application, attendees will acquire the skills to fine-tune pre-trained models for a variety of NLP tasks and understand how to deploy them efficiently.
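To give a flavor of that workflow, here is a minimal sketch of running a pre-trained model with the `transformers` pipeline API (the task and input text are illustrative and not taken from the course notebooks):

```python
from transformers import pipeline

# Downloads a default pre-trained sentiment-analysis model from the Hugging Face Hub
classifier = pipeline("sentiment-analysis")

# Run inference on a single string; returns a list of {"label", "score"} dicts
print(classifier("Hugging Face makes working with pre-trained models easy!"))
```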
- Jupyter notebooks can be run alongside the instructor, but you can also follow along without coding by viewing pre-run notebooks here.
- Intro to HF.ipynb: Introduction to Hugging Face
  - More on 3rd party inference: Notebook
- Prototyping with HF.ipynb: notebooks/Prototyping with Hugging Face
- BERT vs GPT: notebooks/Fine-tuning BERT for Classification (a condensed fine-tuning sketch follows this list)
- Introduction to SmolAgents: Hugging Face's AI agent SDK with built-in ReAct and CodeAgent functionality (see the agent sketch after this list)
- Multimodality with HF.ipynb: A brief workshop on using some multi-modal models from HF (a captioning sketch follows this list). For more on multimodality, check out my live session on the topic.
- Advanced:
  - fine_tuning_llama_3: A workshop on fine-tuning Llama 3.1 with instructional data and incorporating further pre-training to update its knowledge base
- See this README for info on how to run our Streamlit app
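
The BERT classification notebook follows the standard `transformers` fine-tuning loop. A rough sketch of that pattern (the dataset, checkpoint, and hyperparameters below are illustrative placeholders, not necessarily what the notebook uses):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative choices; the course notebook may use a different dataset and checkpoint
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="bert-classifier",
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```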
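The SmolAgents notebook centers on the CodeAgent abstraction, which plans in a ReAct-style loop and executes Python code it writes. A minimal sketch, assuming a recent smolagents release (class names have changed between versions, so treat the imports as illustrative):

```python
from smolagents import CodeAgent, InferenceClientModel

# Model served via the Hugging Face Inference API; needs an HF token in your environment.
# Older smolagents releases expose this class as HfApiModel instead.
model = InferenceClientModel()

# A CodeAgent reasons step by step and runs the Python code it generates
agent = CodeAgent(tools=[], model=model)

print(agent.run("How many seconds are there in a leap year?"))
```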
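For the multimodality workshop, the same pipeline API extends beyond text. A minimal image-captioning sketch (the model and sample image are illustrative choices, not necessarily those used in the notebook):

```python
from transformers import pipeline

# BLIP is one of several image-captioning models on the Hub; this choice is illustrative
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# A public sample image commonly used in the transformers documentation
print(captioner("http://images.cocodataset.org/val2017/000000039769.jpg"))
```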

