Training and Fine-Tuning an LLM in Python and PyTorch

This repository contains standalone, low-level Python and PyTorch code to train and fine-tune a small version of the Llama2 LLM. See the associated blog post. The model I trained follows instructions to write tiny stories.

Demo

This repository is heavily inspired by Karpathy's llama2.c repository and, for the LoRA part, by wlamond's PR.

Installation

Requirements

Install the requirements in your environment:

pip install -r requirements.txt

Models

The trained models are available on the Hugging Face Hub.

Inference

python generate.py --model_path='./models/lora_story_teller_110M.pt' --prompt='Write a story. In the story, try to use the verb "climb", the noun "ring" and the adjective "messy". Possible story:' --temperature=0.1 --top_k=10

The default parameters are temperature = 0.5 and top_k = 10.

Alternatively, you can use the generate.ipynb notebook.
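For intuition, temperature and top_k control how the next token is sampled at each step: the logits are divided by the temperature, and only the k most likely tokens are kept before sampling. Below is a minimal sketch of such a sampling step; the function name and arguments are illustrative, not the repository's actual API.

import torch
import torch.nn.functional as F

def sample_next_token(logits, temperature=0.5, top_k=10):
    # Sample one token id from the logits of the last position.
    # Lower temperature sharpens the distribution; top_k keeps only
    # the k most likely tokens before sampling.
    if temperature == 0.0:
        return int(torch.argmax(logits).item())   # greedy decoding
    logits = logits / temperature
    k = min(top_k, logits.size(-1))
    topk_vals, topk_idx = torch.topk(logits, k)
    probs = F.softmax(topk_vals, dim=-1)
    next_id = topk_idx[torch.multinomial(probs, num_samples=1)]
    return int(next_id.item())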

Training

Instruction Dataset

The dataset I used is the TinyStories dataset, with additional preprocessing steps to rework the prompts (see the blog post for more details). To prepare the dataset, follow the prepare_instruct_dataset notebook.
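As a rough illustration, the reworked prompts combine the verb, noun, adjective and story features that TinyStories provides for each story into an instruction. A simplified sketch of that idea follows; the field names are assumptions, and the actual preprocessing lives in the notebook.

def build_prompt(words, features=()):
    # `words` is assumed to look like
    # {"verb": "climb", "noun": "ring", "adjective": "messy"}.
    prompt = (
        'Write a story. In the story, try to use the verb "{verb}", '
        'the noun "{noun}" and the adjective "{adjective}".'
    ).format(**words)
    if "dialogue" in features:
        prompt += " The story has the following features: it should contain a dialogue."
    return prompt + " Possible story:"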

Training from scratch

Training from scratch can be done from the notebook instruct_training_from_scratch.
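At its core, this is a standard next-token-prediction loop over the tokenized stories. Here is a hedged sketch of the central step; model, get_batch and the hyperparameters are placeholders, not the notebook's exact code.

import torch
import torch.nn.functional as F

def train(model, get_batch, max_steps=1000, lr=5e-4):
    # Minimal next-token-prediction loop; hyperparameters are illustrative.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for step in range(max_steps):
        x, y = get_batch()                 # (batch, seq) input ids and shifted target ids
        logits = model(x)                  # (batch, seq, vocab_size)
        loss = F.cross_entropy(
            logits.view(-1, logits.size(-1)),
            y.view(-1),
            ignore_index=-1,               # assumes masked positions (e.g. the prompt) are set to -1
        )
        optimizer.zero_grad(set_to_none=True)
        loss.backward()
        optimizer.step()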

LoRA Fine-tuning

LoRA fine-tuning can be done from the instruct_lora_finetune.ipynb notebook. Here, I started from Karpathy's 110M-parameter pretrained model, which you can find on the Hugging Face Hub at tinyllamas. LoRA is then applied to the ['wq', 'wk', 'wo', 'wv'] layers of the architecture, with rank-2 matrices.
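Conceptually, each of those attention projections is wrapped with a pair of trainable low-rank matrices while the pretrained weights stay frozen. A minimal sketch of such a layer is shown below; it is simplified, the names are illustrative, and it is not the exact code from wlamond's PR.

import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # A frozen linear layer plus a trainable rank-r update.
    def __init__(self, base, r=2, alpha=1.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)                  # freeze pretrained weights
        self.lora_A = nn.Parameter(torch.zeros(r, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))   # B stays zero, so the update starts at 0
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling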

Notes on the trained models

Currently, the models only support prompts like 'Write a story. In the story, try to use the verb "{verb}", the noun "{noun}" and the adjective "{adj}". The story has the following features: it should contain a dialogue. Possible story:', that is, prompts that look like the ones in the training set. In addition, for the story to make sense, the given verb, noun and adjective must be common words that appear in the training set, since the model has been trained only on the TinyStories dataset. It would be interesting to make the dataset more diverse.
