
Using Large Language Models

Notes on using Large Language Models

My goal is to learn to fine-tune an LLM using a custom dataset on my local system.
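Most fine-tuning tooling expects the custom dataset as instruction/response pairs in JSON Lines format. A minimal sketch of preparing such a file, with field names and examples that are my own illustration rather than a format this repo prescribes:

```python
import json

# Hypothetical instruction/response pairs -- illustrative only.
examples = [
    {"instruction": "Summarize: LLMs are large neural networks.",
     "response": "LLMs are big neural nets."},
    {"instruction": "Translate 'hello' to French.",
     "response": "bonjour"},
]

# JSON Lines: one JSON object per line, which most trainers can stream.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

# Round-trip to confirm each line parses cleanly before training on it.
parsed = [json.loads(line) for line in jsonl.splitlines()]
```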

Compute

Apple M3 Max chip with a 16-core CPU, 40-core GPU, and 16-core Neural Engine, plus 128 GB of unified memory
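On Apple Silicon, PyTorch can target the GPU through the Metal (MPS) backend rather than CUDA. A quick sanity check, assuming PyTorch is installed:

```python
import torch

# Prefer Apple's Metal (MPS) backend when available, else fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"
print(f"training device: {device}")
```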

Using Open-Source LLMs

  • Benefits: transparency, fine-tuning, and community
  • Organizations: NASA/IBM, healthcare, FinGPT
  • Models: Llama 2, Mistral-7B-v0.1, Mixtral-8x7B, BioMistral-7B
  • Risks: hallucinations, bias, security

Smaller Models

  • Phi-2
  • TinyLlama
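A rough way to see why these smaller models suit local fine-tuning is to estimate weight memory from parameter count and precision. This is a back-of-the-envelope sketch using rounded published parameter counts (an assumption on my part), and it ignores activations, gradients, and optimizer state:

```python
# Approximate published parameter counts (assumption: rounded figures).
PARAMS = {"Phi-2": 2.7e9, "TinyLlama": 1.1e9, "Mistral-7B": 7.2e9}
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def weight_gib(model: str, dtype: str) -> float:
    """Memory for the weights alone, in GiB (excludes activations/optimizer)."""
    return PARAMS[model] * BYTES_PER_PARAM[dtype] / 1024**3

for model in PARAMS:
    print(model, round(weight_gib(model, "fp16"), 1), "GiB at fp16")
```

Even Mistral-7B at fp16 fits comfortably in 128 GB of unified memory; the smaller models leave far more headroom for training state.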

Model Optimization

  • Quantization (e.g., fp16)
  • LoRA
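Two quick sketches of why these help, using toy numbers of my own choosing: casting fp32 weights to fp16 halves their memory, and a LoRA update of rank r replaces a full d×d weight delta with two thin factors of r·d parameters each.

```python
import numpy as np

# Quantization: casting fp32 weights to fp16 halves their memory footprint.
w32 = np.zeros(1_000_000, dtype=np.float32)
w16 = w32.astype(np.float16)
print(w32.nbytes, "->", w16.nbytes)   # halves: 4000000 -> 2000000

# LoRA: instead of updating a full d x d weight matrix, train two
# low-rank factors A (d x r) and B (r x d). Dimensions below are assumed.
d, r = 4096, 8                        # hidden size and LoRA rank
full_params = d * d
lora_params = r * d + d * r
print(f"trainable fraction: {lora_params / full_params:.2%}")  # 0.39%
```

The low-rank trick is what makes fine-tuning a 7B model tractable on a single machine: only the adapter parameters need gradients and optimizer state.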
