
pybear


Cool, but not frozen, packages to augment your Python data analytics experience.

pybear is a scikit-learn-style Python computing library that augments data analytics functionality found in popular packages like scikit-learn and xgboost.

See documentation for more information.

Website: https://pybear.readthedocs.io/en/stable/index.html

License

BSD 3-Clause License. See the LICENSE file.


Installation

Dependencies

pybear requires:

  • Python (>=3.10)
  • joblib (>=1.3.0)
  • numpy (>=2.1.0)
  • pandas (>=2.2.3)
  • polars (>=1.19.0)
  • psutil (>=5.7.0)
  • scikit-learn (>=1.5.2)
  • scipy (>=1.15.0)
  • typing_extensions (>=4.12.0)

User installation

Install pybear from the online PyPI package repository using pip:

(your-env) $ pip install pybear

Conda distributions are expected to be made available sometime after release to PyPI.


Usage

The folder structure of pybear is nearly identical to scikit-learn's, so those who are familiar with the scikit layout and its import statements will have an easy transition to pybear. The pybear subfolders are base, feature_extraction, model_selection, new_numpy, preprocessing, and utilities. For the full layout, see the API section of the pybear website on Read The Docs.

You can import pybear's packages in the same way you would with scikit. Here are a few examples of how you could import and use pybear modules:

from pybear.preprocessing import InterceptManager as IM

# X: feature data, y: target (numpy arrays, pandas/polars frames, etc.)
trfm = IM()
trfm.fit(X, y)

from pybear import preprocessing as pp

trfm = pp.ColumnDeduplicator()
trfm.fit(X, y)
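
For a fuller round trip, here is a minimal sketch with a toy numpy array. The assumption that ColumnDeduplicator's default settings remove the duplicated column is ours; see the docs for the actual defaults:

import numpy as np
from pybear import preprocessing as pp

# the third column duplicates the first
X = np.array([
    [1.0, 4.0, 1.0],
    [2.0, 5.0, 2.0],
    [3.0, 6.0, 3.0],
])

trfm = pp.ColumnDeduplicator()
X_tr = trfm.fit_transform(X)   # assumption: defaults drop the duplicate
print(X_tr.shape)              # expect one fewer column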

Major Modules

AutoGridSearchCV

Perform multiple uninterrupted passes of grid search with scikit-learn GridSearchCV, using progressively narrower search grids.

  • Access via pybear.model_selection.AutoGridSearchCV.
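
A hedged sketch of the idea follows. The params specification format and the total_passes keyword shown here are assumptions, not the documented API; consult the pybear docs for the real signature:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from pybear.model_selection import AutoGridSearchCV

X, y = make_classification(n_samples=200, random_state=0)

# assumption: each entry maps a parameter to a first-pass grid plus
# instructions for regridding on later passes
agscv = AutoGridSearchCV(
    LogisticRegression(max_iter=10_000),
    params={'C': [[0.01, 0.1, 1, 10, 100], [5, 5, 5], 'soft_float']},  # assumption
    total_passes=3,  # assumption
)
agscv.fit(X, y)
print(agscv.best_params_)   # best_params_ per the GridSearchCV API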

autogridsearch_wrapper

Create your own auto-gridsearch class. A function that wraps any scikit-learn, pybear, or dask_ml GridSearchCV module to create an identical GridSearch class that performs multiple passes of grid search using progressively narrower search grids.

  • Access via pybear.model_selection.autogridsearch_wrapper.
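
For example, wrapping scikit-learn's own GridSearchCV (a minimal sketch of the wrapper pattern described above):

from sklearn.model_selection import GridSearchCV
from pybear.model_selection import autogridsearch_wrapper

# build a new class with the same API as GridSearchCV plus the
# multi-pass, progressively-narrowing search behavior
AutoGSCV = autogridsearch_wrapper(GridSearchCV)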

GSTCV (GridSearchThresholdCV)

Perform conventional grid search on a classifier with concurrent threshold search. Finds the global optimum over the passed parameters and thresholds. Fully compliant with the scikit-learn GridSearchCV API.

  • Access via pybear.model_selection.GSTCV.
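
A hedged sketch: everything below follows the GridSearchCV API except the thresholds keyword, which is an assumption based on the description above:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from pybear.model_selection import GSTCV

X, y = make_classification(n_samples=200, random_state=0)

gstcv = GSTCV(
    estimator=LogisticRegression(max_iter=10_000),
    param_grid={'C': [0.1, 1, 10]},
    thresholds=np.linspace(0.1, 0.9, 9),   # assumption: decision threshold grid
)
gstcv.fit(X, y)
print(gstcv.best_params_)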

AutoGSTCV

Perform multiple uninterrupted passes of grid search with pybear GSTCV, using progressively narrower search grids.

  • Access via pybear.model_selection.AutoGSTCV.

MinCountTransformer

Perform minimum frequency thresholding on numerical or categorical data simultaneously across an entire array of data. Violates the scikit-learn API in that datasets are modified along the example axis (examples may be deleted). Otherwise fully compliant with the scikit-learn transformer API, with fit, transform, and partial_fit methods.

  • Access via pybear.preprocessing.MinCountTransformer.
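
A hedged sketch (the count_threshold keyword is an assumption; see the docs for the actual parameter names):

import numpy as np
from pybear.preprocessing import MinCountTransformer

# 'b' appears only once and falls below the threshold of 2
X = np.array([['a'], ['a'], ['a'], ['b']])

mct = MinCountTransformer(count_threshold=2)   # assumption: kwarg name
X_tr = mct.fit_transform(X)   # rows holding under-threshold values are deleted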

ColumnDeduplicator

Identify and selectively remove duplicate columns in numerical or categorical data. Fully compliant with the scikit-learn transformer API, with fit, transform, and partial_fit methods. Perfect for removing duplicate columns from one-hot encoded data in a scikit-learn pipeline. Also fits and transforms data batch-wise, such as with dask_ml Incremental and ParallelPostFit wrappers.

  • Access via pybear.preprocessing.ColumnDeduplicator.
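
For instance, the one-hot use case described above, sketched in a scikit-learn pipeline with default parameters assumed throughout:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from pybear.preprocessing import ColumnDeduplicator

X = [['red', 'S'], ['blue', 'M'], ['red', 'L'], ['blue', 'S']]
y = [0, 1, 0, 1]

pipe = Pipeline([
    ('onehot', OneHotEncoder(sparse_output=False)),
    ('dedup', ColumnDeduplicator()),   # drops any duplicated dummy columns
    ('clf', LogisticRegression()),
])
pipe.fit(X, y)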

InterceptManager

A scikit-style transformer that identifies and manages constant columns in a dataset. IM can remove all, selectively keep one, or append a column of constants. Handles numerical & non-numerical data, and nan-like values. Does batch-wise fitting via a partial_fit method, and can be wrapped with dask_ml Incremental and ParallelPostFit wrappers.

  • Access via pybear.preprocessing.InterceptManager.
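
A minimal sketch of the batch-wise fitting described above:

import numpy as np
from pybear.preprocessing import InterceptManager

X = np.random.uniform(size=(1_000, 10))
X[:, 0] = 1.0   # plant a constant column

im = InterceptManager()
for batch in np.array_split(X, 4):   # fit incrementally, batch by batch
    im.partial_fit(batch)
X_tr = im.transform(X)   # assumption: defaults remove the constant column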

SlimPolyFeatures

Perform a polynomial feature expansion on a dataset omitting constant and duplicate columns. Follows the standard scikit-learn transformer API. Handles scipy sparse matrices/arrays. Suitable for sklearn pipelines. Has a partial_fit method for batch-wise training and can be wrapped with dask_ml Incremental and ParallelPostFit wrappers.

  • Access via pybear.preprocessing.SlimPolyFeatures.
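
A hedged sketch on a scipy sparse array; the degree keyword is an assumption by analogy with scikit-learn's PolynomialFeatures:

import numpy as np
from scipy.sparse import csr_array
from pybear.preprocessing import SlimPolyFeatures

# binary columns: each squared term duplicates its source column, so a
# slim expansion should omit those products
X = csr_array(np.random.randint(0, 2, (100, 5)).astype(float))

spf = SlimPolyFeatures(degree=2)   # assumption: degree kwarg
X_poly = spf.fit_transform(X)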

The pybear Text Wrangling Suite

pybear has a wide selection of text wrangling tools for those who don't have a PhD in NLP. Most modules can work with either regular expressions or literal strings (for those who don't know regular expressions!). Most of the modules also accept data in 1D list-like format or (ragged!) 2D array-like format. All of these are built in scikit transformer API style and can be stacked in a scikit pipeline, as sketched after the list below.

These modules can be found in pybear.feature_extraction.text. The modules include:

  • Lexicon - A class exposing 68,000+ English words and a stop words attribute
  • NGramMerger - Join select adjacent tokens together to handle as a single token
  • StopRemover - Remove pybear stop words from a body of text
  • TextJoiner - Join tokenized text into a contiguous string with separators
  • TextJustifier - Justify to a fixed margin; wrap on literals or regex patterns
  • TextLookup - Compare words in a body of text against the pybear Lexicon
  • TextLookupRealTime - Same as TextLookup but with in-situ save capability
  • TextNormalizer - Normalize text to the same case
  • TextPadder - Pad ragged text into shaped containers using fill
  • TextRemover - Remove units of contiguous text
  • TextReplacer - Replace substrings in contiguous text
  • TextSplitter - Split contiguous text into tokens using literal strings or regex
  • TextStatistics - Compile statistics about a body of text
  • TextStripper - Remove leading and trailing spaces from text
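
A hedged sketch of such a pipeline; constructor arguments are omitted, so the defaults assumed here (including whitespace splitting for TextSplitter) are assumptions:

from sklearn.pipeline import Pipeline
from pybear.feature_extraction.text import (
    TextNormalizer, TextSplitter, TextStripper
)

pipe = Pipeline([
    ('strip', TextStripper()),        # drop leading/trailing spaces
    ('normalize', TextNormalizer()),  # normalize case
    ('split', TextSplitter()),        # 1D strings -> (ragged) 2D tokens
])

out = pipe.fit_transform(['  The Quick Brown Fox  ', '  Jumps Over  '])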

Related Resources

pybear has a sister package called pybear-dask. A few of the pybear modules have a corresponding twin in pybear-dask. You can pip install pybear-dask from PyPI in the same way as pybear. There is no Read The Docs website for pybear-dask, but it does have a GitHub repo.

https://github.com/PylarBear/pybear-dask/

Use the pybear documentation for guidance on how to use the pybear-dask modules.


Changelog

See the changelog for a history of notable changes to pybear.


Development

Important links

  • Official source code repo: https://github.com/PylarBear/pybear
  • Documentation: https://pybear.readthedocs.io/en/stable/index.html

Source code

You can clone the latest source code with the command:

git clone https://github.com/PylarBear/pybear.git

Contributing

pybear is not ready for contributions at this time!

Testing

pybear 0.2 is tested via GitHub Actions on Linux, Windows, and macOS, with Python versions 3.10, 3.11, 3.12, 3.13, and 3.14. pybear is not tested on earlier Python versions, but some features may work.

If you want to test pybear yourself, you will need:

  • pytest (>=7.0.0)

The tests are not included in the PyPI pip installation. You can get the tests by downloading the tarball from the pybear project page on pypi.org or by cloning the pybear repo from GitHub. Once you have the source files in a local project folder, create a poetry environment for the project and install the test dependencies. Then, from a shell in the poetry environment, launch the test suite from the root of your pybear project folder with:

(your-pybear-env) you@your_computer:/path/to/pybear/project$ pytest tests/

Project History

The project originated in the early 2020s as a collection of miscellaneous private modules to enhance the Python data analytics ecosystem. In 2025, the modules were formalized and bundled together for their first release as pybear.

Help and Support

Documentation

  • Website: https://pybear.readthedocs.io/en/stable/index.html

Communication

  • GitHub repository: https://github.com/PylarBear/pybear
