Neo LS-SVM is a modern Least-Squares Support Vector Machine implementation in Python that offers several benefits over sklearn's classic `sklearn.svm.SVC` classifier and `sklearn.svm.SVR` regressor:
- Linear complexity in the number of training examples with Orthogonal Random Features.
- Hyperparameter free: zero-cost optimization of the regularisation parameter γ and the kernel parameter σ.
- Adds a new tertiary objective that minimizes the complexity of the prediction surface.
- Returns the leave-one-out residuals and error for free after fitting.
- Learns an affine transformation of the feature matrix to optimally separate the target's bins.
- Can solve the LS-SVM in both the primal and dual space.
- Isotonically calibrated `predict_proba`.
- Conformally calibrated `predict_quantiles` and `predict_interval`.
- Bayesian estimation of the predictive standard deviation with `predict_std`.
- Pandas DataFrame output when the input is a pandas DataFrame.
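To make the primal/dual point concrete: in the dual space, an LS-SVM reduces to a single linear system over the kernel matrix. The sketch below is a plain-NumPy illustration of the textbook dual formulation, not Neo LS-SVM's actual implementation (which adds orthogonal random features and automatic γ/σ selection):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def lssvm_fit_dual(X, y, gamma=10.0, sigma=1.0):
    # LS-SVM dual: solve [[0, 1^T], [1, K + I/gamma]] @ [b; alpha] = [0; y].
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_decision(X_train, X_new, b, alpha, sigma=1.0):
    # f(x) = sum_i alpha_i * k(x, x_i) + b; sign(f) is the predicted class.
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy binary problem with +-1 labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
b, alpha = lssvm_fit_dual(X, y)
accuracy = (np.sign(lssvm_decision(X, X, b, alpha)) == y).mean()
```

Every training example gets a (dense) dual coefficient, which is why the naive dual costs O(n³); the orthogonal random features mentioned above are what bring Neo LS-SVM down to linear complexity.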
First, install this package with:

```sh
pip install neo-ls-svm
```

Then, you can import `neo_ls_svm.NeoLSSVM` as an sklearn-compatible binary classifier and regressor. Example usage:
```python
from neo_ls_svm import NeoLSSVM
from pandas import get_dummies
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

# Binary classification example:
X, y = fetch_openml("churn", version=3, return_X_y=True, as_frame=True, parser="auto")
X_train, X_test, y_train, y_test = train_test_split(get_dummies(X), y, test_size=0.15, random_state=42)
model = NeoLSSVM().fit(X_train, y_train)
model.score(X_test, y_test)  # 93.1% (compared to sklearn.svm.SVC's 89.6%)

# Regression example:
X, y = fetch_openml("ames_housing", version=1, return_X_y=True, as_frame=True, parser="auto")
X_train, X_test, y_train, y_test = train_test_split(get_dummies(X), y, test_size=0.15, random_state=42)
model = NeoLSSVM().fit(X_train, y_train)
model.score(X_test, y_test)  # 82.4% (compared to sklearn.svm.SVR's -11.8%)
```
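The "leave-one-out residuals for free" feature listed above rests on a classic identity for least-squares fits: the LOO residual equals e_i / (1 − H_ii), where H is the hat matrix, so no refitting is needed. A minimal ridge-regression sketch of the identity (plain NumPy, illustrative only, not Neo LS-SVM's code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

# Ridge fit: the hat matrix H maps y to the fitted values.
lam = 1e-2
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(3), X.T)
residuals = y - H @ y

# Closed-form leave-one-out residuals: e_i / (1 - H_ii).
loo_closed = residuals / (1 - np.diag(H))

# Verify against brute-force leave-one-out refits.
loo_brute = np.empty(50)
for i in range(50):
    mask = np.arange(50) != i
    w = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(3), X[mask].T @ y[mask])
    loo_brute[i] = y[i] - X[i] @ w
assert np.allclose(loo_closed, loo_brute)
```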
Neo LS-SVM implements conformal prediction with a Bayesian nonconformity estimate to compute quantiles and prediction intervals for both classification and regression. Example usage:
```python
# Predict the house prices and their quantiles.
ŷ_test = model.predict(X_test)
ŷ_test_quantiles = model.predict_quantiles(X_test, quantiles=(0.025, 0.05, 0.1, 0.9, 0.95, 0.975))
```

When the input data is a pandas DataFrame, the output is also a pandas DataFrame. For example, printing the head of `ŷ_test_quantiles` yields:
| house_id | 0.025 | 0.05 | 0.1 | 0.9 | 0.95 | 0.975 |
|---|---|---|---|---|---|---|
| 1357 | 114283.0 | 124767.6 | 133314.0 | 203162.0 | 220407.5 | 245655.3 |
| 2367 | 85518.3 | 91787.2 | 93709.8 | 107464.3 | 108472.6 | 114482.3 |
| 2822 | 147165.9 | 157462.8 | 167193.1 | 243646.5 | 263324.4 | 291963.3 |
| 2126 | 81788.7 | 88738.1 | 91367.4 | 111944.9 | 114800.7 | 122874.5 |
| 1544 | 94507.1 | 108288.2 | 120184.3 | 222630.5 | 248668.2 | 283703.4 |
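Predicted quantiles such as these can be scored with the pinball (quantile) loss, which is minimized by the true quantile of the target distribution. A self-contained NumPy sketch (illustrative only, not part of Neo LS-SVM's API):

```python
import numpy as np

def pinball_loss(y_true, y_quantile, alpha):
    # Pinball loss: penalizes under- and over-prediction asymmetrically,
    # with weight alpha above the quantile and (1 - alpha) below it.
    diff = y_true - y_quantile
    return np.mean(np.maximum(alpha * diff, (alpha - 1) * diff))

rng = np.random.default_rng(0)
y = rng.normal(loc=100.0, scale=10.0, size=10_000)

# Predicting the true 0.9 quantile beats predicting the median at alpha=0.9.
q90_true = np.quantile(y, 0.9)
loss_at_true = pinball_loss(y, np.full_like(y, q90_true), 0.9)
loss_at_median = pinball_loss(y, np.full_like(y, np.median(y)), 0.9)
assert loss_at_true < loss_at_median
```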
Let's visualize the predicted quantiles on the test set. The following code generates the graph:

```python
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

%config InlineBackend.figure_format = "retina"
plt.rcParams["font.size"] = 8
idx = (-ŷ_test.sample(50, random_state=42)).sort_values().index
y_ticks = list(range(1, len(idx) + 1))
plt.figure(figsize=(4, 5))
for j in range(3):
    end = ŷ_test_quantiles.shape[1] - 1 - j
    coverage = round(100 * (ŷ_test_quantiles.columns[end] - ŷ_test_quantiles.columns[j]))
    plt.barh(
        y_ticks,
        ŷ_test_quantiles.loc[idx].iloc[:, end] - ŷ_test_quantiles.loc[idx].iloc[:, j],
        left=ŷ_test_quantiles.loc[idx].iloc[:, j],
        label=f"{coverage}% Prediction interval",
        color=["#b3d9ff", "#86bfff", "#4da6ff"][j],
    )
plt.plot(y_test.loc[idx], y_ticks, "s", markersize=3, markerfacecolor="none", markeredgecolor="#e74c3c", label="Actual value")
plt.plot(ŷ_test.loc[idx], y_ticks, "s", color="blue", markersize=0.6, label="Predicted value")
plt.xlabel("House price")
plt.ylabel("Test house index")
plt.xlim(0, 500e3)
plt.yticks(y_ticks, y_ticks)
plt.tick_params(axis="y", labelsize=6)
plt.grid(axis="x", color="lightsteelblue", linestyle=":", linewidth=0.5)
plt.gca().xaxis.set_major_formatter(ticker.StrMethodFormatter("${x:,.0f}"))
plt.gca().spines["top"].set_visible(False)
plt.gca().spines["right"].set_visible(False)
plt.legend()
plt.tight_layout()
plt.show()
```
In addition to quantile prediction, you can use `predict_interval` to predict conformally calibrated prediction intervals. Compared to quantiles, these focus on reliable coverage over quantile accuracy. Example usage:

```python
# Compute prediction intervals for the houses in the test set.
ŷ_test_interval = model.predict_interval(X_test, coverage=0.95)

# Measure the coverage of the prediction intervals on the test set.
coverage = ((ŷ_test_interval.iloc[:, 0] <= y_test) & (y_test <= ŷ_test_interval.iloc[:, 1])).mean()
print(coverage)  # 94.3%
```
When the input data is a pandas DataFrame, the output is also a pandas DataFrame. For example, printing the head of `ŷ_test_interval` yields:
| house_id | 0.025 | 0.975 |
|---|---|---|
| 1357 | 114283.0 | 245849.2 |
| 2367 | 85518.3 | 114411.4 |
| 2822 | 147165.9 | 292179.2 |
| 2126 | 81788.7 | 122838.1 |
| 1544 | 94507.1 | 284062.6 |
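The conformal calibration idea behind `predict_interval` can be illustrated with a minimal split-conformal sketch in plain NumPy. Neo LS-SVM's actual method uses a Bayesian nonconformity estimate; this is the textbook absolute-residual variant:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regression task: y = 3x + Gaussian noise.
x = rng.uniform(0, 10, size=1_000)
y = 3 * x + rng.normal(scale=2.0, size=1_000)
y_pred = 3 * x  # stand-in for a fitted model's point predictions

# Split conformal: calibrate on one half, evaluate coverage on the other.
cal, test = np.arange(500), np.arange(500, 1_000)
residuals = np.abs(y[cal] - y_pred[cal])

# Finite-sample-corrected quantile of the calibration residuals.
n = len(cal)
q = np.quantile(residuals, np.ceil((n + 1) * 0.95) / n)
lower, upper = y_pred[test] - q, y_pred[test] + q

coverage = ((lower <= y[test]) & (y[test] <= upper)).mean()  # typically ≈ 0.95
```

On exchangeable data, this construction guarantees at least the requested coverage in expectation, which is why `predict_interval` favours reliable coverage over sharp quantiles.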
We select all binary classification and regression datasets below 1M entries from the AutoML Benchmark. Each dataset is split into 85% for training and 15% for testing. We apply `skrub.TableVectorizer` as a preprocessing step for `neo_ls_svm.NeoLSSVM` and `sklearn.svm.SVC`/`SVR` to vectorize the pandas DataFrame training data into a NumPy array. Models are fitted only once on each dataset, with their default settings and without hyperparameter tuning.
Binary classification
ROC-AUC on 15% test set:
| dataset | LGBMClassifier | NeoLSSVM | SVC |
|---|---|---|---|
| ada | 🥈 90.9% (0.1s) | 🥇 90.9% (1.9s) | 83.1% (4.5s) |
| adult | 🥇 93.0% (0.5s) | 🥈 89.0% (15.7s) | / |
| amazon_employee_access | 🥇 85.6% (0.5s) | 🥈 64.5% (9.0s) | / |
| arcene | 🥈 78.0% (0.6s) | 70.0% (6.3s) | 🥇 82.0% (4.0s) |
| australian | 🥇 88.3% (0.2s) | 79.9% (1.7s) | 🥈 81.9% (0.1s) |
| bank-marketing | 🥇 93.5% (0.5s) | 🥈 91.0% (11.8s) | / |
| blood-transfusion-service-center | 62.0% (0.3s) | 🥇 71.0% (2.2s) | 🥈 69.7% (0.1s) |
| churn | 🥇 91.7% (0.6s) | 🥈 81.0% (2.1s) | 70.6% (2.9s) |
| click_prediction_small | 🥇 67.7% (0.5s) | 🥈 66.6% (10.9s) | / |
| jasmine | 🥇 86.1% (0.3s) | 79.5% (1.9s) | 🥈 85.3% (7.4s) |
| kc1 | 🥇 78.9% (0.3s) | 🥈 76.6% (1.4s) | 45.7% (0.6s) |
| kr-vs-kp | 🥇 100.0% (0.6s) | 99.2% (1.6s) | 🥈 99.4% (2.3s) |
| madeline | 🥇 93.1% (0.5s) | 65.6% (1.9s) | 🥈 82.5% (19.8s) |
| ozone-level-8hr | 🥈 91.2% (0.4s) | 🥇 91.6% (1.7s) | 72.9% (0.6s) |
| pc4 | 🥇 95.3% (0.3s) | 🥈 90.9% (1.5s) | 25.7% (0.3s) |
| phishingwebsites | 🥇 99.5% (0.5s) | 🥈 98.9% (3.6s) | 98.7% (10.0s) |
| phoneme | 🥇 95.6% (0.3s) | 🥈 93.5% (2.1s) | 91.2% (2.0s) |
| qsar-biodeg | 🥇 92.7% (0.4s) | 🥈 91.1% (5.2s) | 86.8% (0.3s) |
| satellite | 🥈 98.7% (0.2s) | 🥇 99.5% (1.9s) | 98.5% (0.4s) |
| sylvine | 🥇 98.5% (0.2s) | 🥈 97.1% (2.0s) | 96.5% (3.8s) |
| wilt | 🥈 99.5% (0.2s) | 🥇 99.8% (1.8s) | 98.9% (0.5s) |
Regression
R² on 15% test set:
| dataset | LGBMRegressor | NeoLSSVM | SVR |
|---|---|---|---|
| abalone | 🥈 56.2% (0.1s) | 🥇 59.5% (2.5s) | 51.3% (0.7s) |
| boston | 🥇 91.7% (0.2s) | 🥈 89.6% (1.1s) | 35.1% (0.0s) |
| brazilian_houses | 🥈 55.9% (0.3s) | 🥇 88.4% (3.7s) | 5.4% (7.0s) |
| colleges | 🥇 58.5% (0.4s) | 🥈 42.2% (6.6s) | 40.2% (15.1s) |
| diamonds | 🥇 98.2% (0.3s) | 🥈 95.2% (13.7s) | / |
| elevators | 🥇 87.7% (0.5s) | 🥈 82.6% (6.5s) | / |
| house_16h | 🥇 67.7% (0.4s) | 🥈 52.8% (6.0s) | / |
| house_prices_nominal | 🥇 89.0% (0.3s) | 🥈 78.3% (2.1s) | -2.9% (1.2s) |
| house_sales | 🥇 89.2% (0.4s) | 🥈 77.8% (5.9s) | / |
| mip-2016-regression | 🥇 59.2% (0.4s) | 🥈 34.9% (5.8s) | -27.3% (0.4s) |
| moneyball | 🥇 93.2% (0.3s) | 🥈 91.3% (1.1s) | 0.8% (0.2s) |
| pol | 🥇 98.7% (0.3s) | 🥈 74.9% (4.6s) | / |
| quake | -10.7% (0.2s) | 🥇 -1.0% (1.6s) | 🥈 -10.7% (0.1s) |
| sat11-hand-runtime-regression | 🥇 78.3% (0.4s) | 🥈 61.7% (2.1s) | -56.3% (5.1s) |
| sensory | 🥇 29.2% (0.1s) | 3.0% (1.6s) | 🥈 16.4% (0.0s) |
| socmob | 🥇 79.6% (0.2s) | 🥈 72.5% (6.6s) | 30.8% (0.1s) |
| space_ga | 🥇 70.3% (0.3s) | 🥈 43.6% (1.5s) | 35.9% (0.2s) |
| tecator | 🥈 98.3% (0.1s) | 🥇 99.4% (0.9s) | 78.5% (0.0s) |
| us_crime | 🥈 62.8% (0.6s) | 🥇 63.0% (2.3s) | 6.7% (0.8s) |
| wine_quality | 🥇 45.6% (0.2s) | 🥈 36.5% (2.8s) | 16.4% (1.6s) |
Prerequisites
1. Set up Git to use SSH
- Generate an SSH key and add the SSH key to your GitHub account.
- Configure SSH to automatically load your SSH keys:
```sh
cat << EOF >> ~/.ssh/config
Host *
  AddKeysToAgent yes
  IgnoreUnknown UseKeychain
  UseKeychain yes
EOF
```
2. Install Docker
- Install Docker Desktop.
- Enable Use Docker Compose V2 in Docker Desktop's preferences window.
- Linux only: export your user's user id and group id so that files created in the Dev Container are owned by your user:

```sh
cat << EOF >> ~/.bashrc
export UID=$(id --user)
export GID=$(id --group)
EOF
```
3. Install VS Code or PyCharm
- Install VS Code and VS Code's Dev Containers extension. Alternatively, install PyCharm.
- Optional: install a Nerd Font such as FiraCode Nerd Font and configure VS Code or configure PyCharm to use it.
Development environments
The following development environments are supported:
- GitHub Codespaces: click on Code and select Create codespace to start a Dev Container with GitHub Codespaces.
- Dev Container (with container volume): click on Open in Dev Containers to clone this repository in a container volume and create a Dev Container with VS Code.
- Dev Container: clone this repository, open it with VS Code, and run Ctrl/⌘ + ⇧ + P → Dev Containers: Reopen in Container.
- PyCharm: clone this repository, open it with PyCharm, and configure Docker Compose as a remote interpreter with the `dev` service.
- Terminal: clone this repository, open it with your terminal, run `docker compose up --detach dev` to start a Dev Container in the background, and then run `docker compose exec dev zsh` to open a shell prompt in the Dev Container.
Developing
- This project follows the Conventional Commits standard to automate Semantic Versioning and Keep A Changelog with Commitizen.
- Run `poe` from within the development environment to print a list of Poe the Poet tasks available to run on this project.
- Run `poetry add {package}` from within the development environment to install a runtime dependency and add it to `pyproject.toml` and `poetry.lock`. Add `--group test` or `--group dev` to install a CI or development dependency, respectively.
- Run `poetry update` from within the development environment to upgrade all dependencies to the latest versions allowed by `pyproject.toml`.
- Run `cz bump` to bump the package's version, update the `CHANGELOG.md`, and create a git tag.