This document gives a basic walk-through of the LightGBM Python-package.
The preferred way to install LightGBM is via pip:
pip install lightgbm
Refer to the Python-package folder for the detailed installation guide.
To verify your installation, try to import lightgbm in Python:
import lightgbm as lgb
The LightGBM Python module can load data from:
- LibSVM (zero-based) / TSV / CSV format text file
- NumPy 2D array(s), pandas DataFrame, H2O DataTable's Frame (deprecated), SciPy sparse matrix
- LightGBM binary file
- LightGBM Sequence object(s)
The data is stored in a Dataset object.
Many of the examples on this page use functionality from numpy. To run the examples, be sure to import numpy in your session.
import numpy as np
To load a LibSVM (zero-based) text file or a LightGBM binary file into Dataset:
train_data = lgb.Dataset('train.svm.bin')
To load a numpy array into Dataset:
rng = np.random.default_rng()
data = rng.uniform(size=(500, 10)) # 500 entities, each contains 10 features
label = rng.integers(low=0, high=2, size=(500, )) # binary target
train_data = lgb.Dataset(data, label=label)
To load a scipy.sparse.csr_matrix array into Dataset:
import scipy.sparse
# dat, row and col hold the nonzero values and their row/column indices
csr = scipy.sparse.csr_matrix((dat, (row, col)))
train_data = lgb.Dataset(csr)
Load from Sequence objects:
We can implement the Sequence interface to read binary files. The following example shows reading an HDF5 file with h5py.
import h5py

class HDFSequence(lgb.Sequence):
    def __init__(self, hdf_dataset, batch_size):
        self.data = hdf_dataset
        self.batch_size = batch_size

    def __getitem__(self, idx):
        return self.data[idx]

    def __len__(self):
        return len(self.data)

f = h5py.File('train.hdf5', 'r')
train_data = lgb.Dataset(HDFSequence(f['X'], 8192), label=f['Y'][:])
Features of using the Sequence interface:
- Data sampling uses random access, thus does not go through the whole dataset
- Reading data in batches saves memory when constructing the Dataset object
- Supports creating a Dataset from multiple data files
Please refer to the Sequence API doc. dataset_from_multi_hdf5.py is a detailed example.
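As a rough sketch of the multi-file case (not taken from that example; the file names 'part1.hdf5' and 'part2.hdf5' and their 'X'/'Y' datasets are assumptions), a Dataset can be built from a list of Sequence objects:
files = [h5py.File('part1.hdf5', 'r'), h5py.File('part2.hdf5', 'r')]  # hypothetical file names
seqs = [HDFSequence(f['X'], 8192) for f in files]            # one Sequence per file
label = np.concatenate([f['Y'][:] for f in files])           # labels must cover all rows
train_data = lgb.Dataset(seqs, label=label)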
Saving Dataset into a LightGBM binary file will make loading faster:
train_data = lgb.Dataset('train.svm.txt')
train_data.save_binary('train.bin')
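In a later session, the saved binary file can be loaded directly, just like the other supported formats shown above:
train_data = lgb.Dataset('train.bin')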
Create validation data:
validation_data = train_data.create_valid('validation.svm')
or
validation_data = lgb.Dataset('validation.svm', reference=train_data)
In LightGBM, the validation data should be aligned with training data.
Specific feature names and categorical features:
train_data = lgb.Dataset(data, label=label, feature_name=['c1', 'c2', 'c3'], categorical_feature=['c3'])
LightGBM can use categorical features as input directly. It doesn't need to convert to one-hot encoding, and is much faster than one-hot encoding (about 8x speed-up).
Note: You should convert your categorical features to int type before you construct Dataset.
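For example, a minimal sketch of such a conversion, assuming a pandas DataFrame df (not defined above) whose column 'c3' holds string categories:
df['c3'] = df['c3'].astype('category').cat.codes  # map each category to an integer code
train_data = lgb.Dataset(df[['c1', 'c2', 'c3']], label=label, categorical_feature=['c3'])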
Weights can be set when needed:
rng = np.random.default_rng()
w = rng.uniform(size=(500, ))
train_data = lgb.Dataset(data, label=label, weight=w)
or
train_data = lgb.Dataset(data, label=label)
rng = np.random.default_rng()
w = rng.uniform(size=(500, ))
train_data.set_weight(w)
And you can use Dataset.set_init_score() to set the initial score, and Dataset.set_group() to set group/query data for ranking tasks.
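For instance, a sketch of setting query groups for a ranking task, with hypothetical group sizes that sum to the 500 rows used above:
train_data.set_group(np.array([100, 200, 200]))  # sizes of consecutive query groups; must sum to the number of rows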
Memory efficient usage:
The Dataset object in LightGBM is very memory-efficient: it only needs to save discrete bins.
However, the raw NumPy array / pandas object holding the data is memory expensive.
If you are concerned about your memory consumption, you can save memory by:
- Set free_raw_data=True (default is True) when constructing the Dataset
- Explicitly set raw_data=None after the Dataset has been constructed
- Call gc
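A minimal sketch of this pattern, reusing the data and label arrays from the NumPy example above:
import gc

train_data = lgb.Dataset(data, label=label, free_raw_data=True)  # free_raw_data=True is already the default
data = None   # drop the reference to the raw NumPy array
gc.collect()  # ask the garbage collector to reclaim the memory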
LightGBM can use a dictionary to set Parameters. For instance:
Booster parameters:
param = {'num_leaves': 31, 'objective': 'binary'}
param['metric'] = 'auc'
You can also specify multiple eval metrics:
param['metric'] = ['auc', 'binary_logloss']
Training a model requires a parameter list and data set:
num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[validation_data])
After training, the model can be saved:
bst.save_model('model.txt')
The trained model can also be dumped to JSON format:
json_model = bst.dump_model()
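dump_model() returns a Python dictionary; to write it to disk you could, for example, serialize it with the standard json module:
import json
with open('model.json', 'w') as f:
    json.dump(json_model, f, indent=4)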
A saved model can be loaded:
bst = lgb.Booster(model_file='model.txt') # init model
Training with 5-fold CV:
lgb.cv(param, train_data, num_round, nfold=5)
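lgb.cv() returns a dictionary of evaluation history, with the mean and standard deviation of each metric across folds (the exact key names may vary between LightGBM versions):
cv_results = lgb.cv(param, train_data, num_round, nfold=5)
print(list(cv_results.keys()))  # e.g. ['valid auc-mean', 'valid auc-stdv']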
If you have a validation set, you can use early stopping to find the optimal number of boosting rounds.
Early stopping requires at least one set in valid_sets. If there is more than one, it will use all of them except the training data:
bst = lgb.train(param, train_data, num_round, valid_sets=valid_sets, callbacks=[lgb.early_stopping(stopping_rounds=5)])
bst.save_model('model.txt', num_iteration=bst.best_iteration)
The model will train until the validation score stops improving. The validation score needs to improve at least every stopping_rounds round(s) to continue training.
The index of the iteration with the best performance will be saved in the best_iteration field if early stopping is enabled via the early_stopping callback.
Note that train() will return a model from the best iteration.
This works with both metrics to minimize (L2, log loss, etc.) and to maximize (NDCG, AUC, etc.).
Note that if you specify more than one evaluation metric, all of them will be used for early stopping.
However, you can change this behavior and make LightGBM check only the first metric for early stopping by passing first_metric_only=True in the early_stopping callback constructor.
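For example:
bst = lgb.train(param,
                train_data,
                num_round,
                valid_sets=[validation_data],
                callbacks=[lgb.early_stopping(stopping_rounds=5, first_metric_only=True)])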
A model that has been trained or loaded can perform predictions on datasets:
# 7 entities, each contains 10 features
rng = np.random.default_rng()
data = rng.uniform(size=(7, 10))
ypred = bst.predict(data)
If early stopping is enabled during training, you can get predictions from the best iteration with bst.best_iteration:
ypred = bst.predict(data, num_iteration=bst.best_iteration)
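With the 'binary' objective used in this walk-through, predict() returns probabilities of the positive class. A minimal sketch of turning them into hard class labels (the 0.5 cutoff is an illustrative assumption, not part of the API):
ypred_class = (ypred > 0.5).astype(int)  # hypothetical decision threshold of 0.5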