This repository includes code for fine-tuning a language model on text-to-SQL tasks and for generating SQL queries with the fine-tuned model. Both fine-tuning and generation use QLoRA (Quantized Low-Rank Adaptation), a parameter-efficient fine-tuning method, enabled by Intel's BigDL-LLM library on Intel GPUs.
- Python 3.x
- PyTorch
- Transformers library
- Datasets library
- Intel Extension for PyTorch (IPEX)
- Intel BigDL-LLM[XPU]
- Clone this repo:

```bash
git clone https://github.com/your_username/your_repository.git
```
- Install the required Python packages:

```bash
pip install -r requirements.txt
```
- Install the Intel BigDL-LLM package with XPU support:

```bash
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
```
- finetune.py : Contains code for fine-tuning a pre-trained Language Model on text-to-SQL tasks.
- generate.py : Contains code for generating SQL queries using a fine-tuned model.
To fine-tune a model, run the finetune.py script:

```bash
python finetune.py
```
```text
============================================================
Training Parameters:
Foundation model: NousResearch/CodeLlama-7b-hf
Model save path: ./final_model
Device used: xpu
Intel GPU: Intel(R) Data Center GPU Max 1100
Batch size per device: 32
Gradient accum. steps: 4
Warmup steps: 100
Save steps: 20
Evaluation steps: 20
Max steps: 300
Learning rate: 0.0003
Max gradient norm: 0.3
Save total limit: 3
Logging steps: 20
============================================================
```
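With a per-device batch size of 32 and 4 gradient-accumulation steps, each optimizer update sees an effective batch of 128 examples, so 300 steps process roughly 38,400 examples. A quick arithmetic check:

```python
# Derived from the training parameters above.
per_device_batch = 32
grad_accum_steps = 4
max_steps = 300

effective_batch = per_device_batch * grad_accum_steps  # examples per optimizer step
total_examples = effective_batch * max_steps           # examples seen over training

print(effective_batch)  # 128
print(total_examples)   # 38400
```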
Here is how the loss chart looks at the end of 300 steps of fine-tuning:
As the chart shows, the loss drops sharply in the initial steps, and the training loss gradually tapers to around 0.6:
- Downloads a pre-trained model based on the given base model ID.
- Tokenizes the input questions, context, and answers.
- Fine-tunes the model using the tokenized data and QLoRA.
- Saves the fine-tuned model.
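The tokenization step above can be sketched as follows. This is a minimal illustration, not the repository's exact template: the field names question, context, and answer match the b-mc2/sql-create-context dataset, but the prompt layout is an assumption.

```python
def format_sample(sample: dict) -> str:
    """Assemble one training example from a b-mc2/sql-create-context record.

    The model learns to continue the prompt with the SQL answer, so the
    answer is appended after a fixed marker. This template is an
    illustrative assumption, not the repository's exact format.
    """
    return (
        f"### Context:\n{sample['context']}\n"
        f"### Question:\n{sample['question']}\n"
        f"### Answer:\n{sample['answer']}"
    )

# The tokenizer then converts this string into input IDs, e.g.:
# tokens = tokenizer(format_sample(sample), truncation=True, max_length=512)
```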
- BASE_MODEL: The pre-trained model to use for fine-tuning.
- MODEL_PATH: Path to save the fine-tuned model.
- DEVICE: Device to run the model on (e.g., xpu for an Intel GPU).
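In code, these settings might look like the following (illustrative values; the model ID and paths mirror the training log shown earlier):

```python
# Illustrative configuration for finetune.py; values mirror the training log.
BASE_MODEL = "NousResearch/CodeLlama-7b-hf"  # pre-trained model to fine-tune
MODEL_PATH = "./final_model"                 # where the fine-tuned model is saved
DEVICE = "xpu"                               # Intel GPU device string used by IPEX
```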
To generate SQL queries using the fine-tuned model, run the generate.py script.
- Uses either the base model or a fine-tuned model for SQL query generation.
- Loads sample data and generates SQL queries for each sample.
- BASE_MODEL: The base model to use for inference.
- MODEL_PATH: Path to the fine-tuned model.
- LORA_CHECKPOINT: Latest checkpoint for the fine-tuned model.
- TEST_DATA: Path to the test data file.
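Since checkpoints are saved every `save_steps` as checkpoint-20, checkpoint-40, and so on, LORA_CHECKPOINT should point at the directory with the highest step number. A hedged sketch of how that directory might be located (the helper name is an assumption, not the repository's code):

```python
# Hypothetical helper (not the repository's actual API) for finding the
# latest LoRA checkpoint under a model path. Step numbers must be compared
# numerically, not lexically: checkpoint-300 comes after checkpoint-40.
from pathlib import Path
from typing import Optional

def latest_checkpoint(model_path: str) -> Optional[str]:
    """Return the checkpoint directory with the highest step number, if any."""
    ckpts = [p for p in Path(model_path).glob("checkpoint-*") if p.is_dir()]
    if not ckpts:
        return None
    return str(max(ckpts, key=lambda p: int(p.name.split("-")[1])))
```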
After a 15-minute training session, the fine-tuned model generates SQL queries that reflect the given questions more accurately than the base model does. With additional training steps, we can expect further improvements in response accuracy:
Finetuned model generation:
Base model generation:
- Default base model for fine-tuning: openlm-research/open_llama_3b
- Model path for saving the fine-tuned LoRA adaptor (in case of interruptions): ./saved_model
- Path for saving task-specific (here, text-to-SQL) LoRA adaptors: ./lora_models
- Default dataset for fine-tuning: b-mc2/sql-create-context
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.