Highlights:
- LightGBM evaluation 3-4x faster!
- Spark Serving v2
- LightGBM training supports early stopping and regularization
- LIME on Spark significantly faster

Spark Serving v2:
- Both Microbatch and Continuous modes have sub-millisecond latency
- Supports fault tolerance
- Can reply from anywhere in the pipeline
- Fail-fast modes for warning callers of bad JSON parsing
- Fully based on DataSource API v2

LightGBM:
- 3-4x evaluation performance improvement
- Added early stopping capabilities
- Added L1 and L2 regularization parameters
- Made network initialization more robust
- Fixed a bug caused by empty partitions

LIME on Spark:
- LIME parallelization significantly faster for large datasets
- Tabular LIME now supported

Miscellaneous:
- Added `UnicodeNormalizer` for working with complex text
- Recognize Text exposes parameters for its polling handlers

Acknowledgements: We would like to acknowledge the developers and contributors, both internal and external, who helped create this version of MMLSpark.
- Ilya Matiach, Markus Cozowicz, Scott Graham, Daniel Ciborowski, Jeremy Reynolds, Miguel Fierro, Robert Alexander, Tao Wu, Sudarshan Raghunathan, Anand Raman, Casey Hong, Karthik Rajendran, Dalitso Banda, Manon Knoertzer, Lars Ahlfors, The Microsoft AI Development Acceleration Program, Cognitive Search Team, Azure Search Team
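The Spark Serving v2 flow described above (ingest HTTP requests as a stream, transform, and reply from anywhere in the pipeline) looks roughly like the following. This is a minimal sketch, assuming a Spark session with the MMLSpark package attached, a fitted SparkML model `model`, and a `test_df` whose schema matches the incoming JSON; the address, API name, and column names are illustrative.

```python
# Sketch only: requires a Spark cluster with the MMLSpark package attached.
import mmlspark  # registers the server() reader/writer extensions

serving_inputs = (spark.readStream.server()
                  .address("localhost", 8888, "my_api")  # host, port, API name
                  .load()
                  .parseRequest(test_df.schema))         # JSON body -> columns

serving_outputs = model.transform(serving_inputs).makeReply("prediction")

server = (serving_outputs.writeStream.server()
          .replyTo("my_api")        # the reply can be issued anywhere downstream
          .queryName("my_query")
          .option("checkpointLocation", "file:///tmp/checkpoints")
          .start())
```

Because serving is built on structured streaming, the same pipeline runs unchanged in Microbatch or Continuous mode.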
New Features:
- Added the `AzureSearchWriter` for integrating Spark with [Azure Search](https://azure.microsoft.com/en-us/services/search/)
- Added the [Smart Adaptive Recommender (SAR)](https://github.com/Azure/mmlspark/blob/master/docs/SAR.md) for better recommendations in SparkML
- Added [Named Entity Recognition Cognitive Service](https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics/) on Spark
- Several new [LightGBM features](#LightGBM-on-Spark) (multiclass classification, Windows support, class balancing, custom boosting, etc.)
- Added Ranking Train Validation Splitter for easy ranking experiments
- All Computer Vision services can now send binary data or URLs to Cognitive Services
- Learn how to use the Azure Search writer to create a visual search system for The Metropolitan Museum of Art with [AzureSearchIndex - Met Artworks.ipynb](https://github.com/Azure/mmlspark/blob/master/notebooks/samples/AzureSearchIndex%20-%20Met%20Artworks.ipynb)

Updates and Improvements:
- MMLSpark image schema now unified with Spark Core
- Now supports query pushdown and [Deep Learning Pipelines](https://github.com/databricks/spark-deep-learning)
- Bugfixes for Text Analytics services
- `PageSplitter` now propagates nulls
- HTTP on Spark now supports socket and read timeouts
- `HyperparamBuilder` Python wrappers now return idiomatic Python objects

LightGBM on Spark:
- Added multiclass classification
- Added multiple types of boosting (Gradient Boosting Decision Tree, Random Forest, Dropouts meet Multiple Additive Regression Trees, Gradient-based One-Side Sampling)
- Added Windows OS support/bugfix
- LightGBM version bumped to `2.2.200`
- Added native support for categorical columns, either through Spark's `StringIndexer`, MMLSpark's `ValueIndexer`, or a list of indexes/slot names parameter
- `isUnbalance` parameter for unbalanced datasets
- Added boost-from-average parameter

Acknowledgements: We would like to acknowledge the developers and contributors, both internal and external, who helped create this version of MMLSpark.
- Ilya Matiach, Casey Hong, Daniel Ciborowski, Karthik Rajendran, Dalitso Banda, Manon Knoertzer, Sudarshan Raghunathan, Anand Raman, Markus Cozowicz, The Microsoft AI Development Acceleration Program, Cognitive Search Team, Azure Search Team
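The new LightGBM options above combine into a single estimator. A minimal sketch, assuming a Spark session with the MMLSpark package attached and a prepared `train_df`/`test_df` with `features` and `label` columns; the slot names are illustrative:

```python
# Sketch only: requires a Spark cluster with the MMLSpark package attached.
from mmlspark import LightGBMClassifier

classifier = LightGBMClassifier(
    objective="multiclass",                    # new multiclass support
    categoricalSlotNames=["city", "weekday"],  # treat these feature slots as categorical
)
model = classifier.fit(train_df)
scored = model.transform(test_df)
```

For skewed binary problems, the new `isUnbalance=True` flag asks LightGBM to re-weight the classes during training instead of requiring manual resampling.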
New Features:
- Add the `TagImage` and `DescribeImage` services
- Add Ranking Cross Validator and Evaluator
- Learn how to use HTTP on Spark to work with arbitrary web APIs in [HttpOnSpark - Working with Arbitrary Web APIs.ipynb](https://github.com/Azure/mmlspark/blob/master/notebooks/samples/HttpOnSpark%20-%20Working%20with%20Arbitrary%20Web%20APIs.ipynb)

Bug Fixes and Improvements:
- Fix issue with `raw2probabilityInPlace`
- Add weight column
- Add `getModel` API to `TrainClassifier` and `TrainRegressor`
- Improve robustness of getting executor cores
- Improve robustness of Gateway creation and management
- Improve Gateway documentation

Build and Infrastructure:
- Updated to Spark 2.4.0
- LightGBM version update to 2.1.250
- Fix flaky tests
- Remove autogeneration of scalastyle
- Increase training dataset size in snow leopard example

Acknowledgements: We would like to acknowledge the developers and contributors, both internal and external, who helped create this version of MMLSpark.
- Ilya Matiach, Casey Hong, Karthik Rajendran, Daniel Ciborowski, Sebastien Thomas, Eli Barzilay, Sudarshan Raghunathan, @flybywind, @wentongxin, @haal
New Features:
- The Cognitive Services on Spark: a simple and scalable integration between the Microsoft Cognitive Services and SparkML
  - Bing Image Search
  - Computer Vision: OCR, Recognize Text, Recognize Domain Specific Content, Analyze Image, Generate Thumbnails
  - Text Analytics: Language Detector, Entity Detector, Key Phrase Extractor, Sentiment Detector, Named Entity Recognition
  - Face: Detect, Find Similar, Identify, Group, Verify
- Added distributed model interpretability with LIME on Spark
- **100x** lower latencies (\<1ms) with Spark Serving
- Expanded Spark Serving to cover the full HTTP protocol
- Added the `SuperpixelTransformer` for segmenting images
- Added a Fluent API, `mlTransform` and `mlFit`, for composing pipelines more elegantly

New Examples:
- Chain together cognitive services to understand the feelings of your favorite celebrities with `CognitiveServices - Celebrity Quote Analysis.ipynb`
- Explore how you can use Bing Image Search and distributed model interpretability to get an object detection system without labeling any data in `ModelInterpretation - Snow Leopard Detection.ipynb`
- See how to deploy *any* Spark computation as a web service on *any* Spark platform with the `SparkServing - Deploying a Classifier.ipynb` notebook

LightGBM on Spark:
- More APIs for loading LightGBM native models
- LightGBM training checkpointing and continuation
- Added Tweedie variance power to LightGBM
- Added early stopping to LightGBM
- Added feature importances to LightGBM
- Added a PMML exporter for LightGBM on Spark

HTTP on Spark:
- Added the `VectorizableParam` for creating column-parameterizable inputs
- Added `handler` parameter to HTTP services
- HTTP on Spark now propagates nulls robustly

Updates and Improvements:
- Updated to Spark 2.3.1
- LightGBM version update to 2.1.250
- Added Vagrantfile for easy Windows developer setup
- Improved Image Reader fault tolerance
- Reorganized examples into topics
- Generalized Image Featurizer and other image-based code to handle binary files as well as Spark Images
- Added `ModelDownloader` R wrapper
- Added `getBestModel` and `getBestModelInfo` to `TuneHyperparameters`
- Expanded binary file reading APIs
- Added `Explode` and `Lambda` transformers
- Added `SparkBindings` trait for automating Spark binding creation
- Added retries and timeouts to `ModelDownloader`
- Added `ResizeImageTransformer` to remove `ImageFeaturizer` dependence on OpenCV

Acknowledgements: We would like to acknowledge the developers and contributors, both internal and external, who helped create this version of MMLSpark. (In alphabetical order)
- Abhiram Eswaran, Anand Raman, Ari Green, Arvind Krishnaa Jagannathan, Ben Brodsky, Casey Hong, Courtney Cochrane, Henrik Frystyk Nielsen, Ilya Matiach, Janhavi Suresh Mahajan, Jaya Susan Mathew, Karthik Rajendran, Mario Inchiosa, Minsoo Thigpen, Soundar Srinivasan, Sudarshan Raghunathan, @terrytangyuan
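Each Cognitive Service above is exposed as an ordinary SparkML transformer, so it composes with the rest of a pipeline. A minimal sketch for one of the Text Analytics services, assuming a Spark session with the MMLSpark package attached and a valid Cognitive Services key; the key variable, DataFrame, and column names are illustrative:

```python
# Sketch only: requires a Spark cluster with the MMLSpark package attached
# and a Text Analytics subscription key.
from mmlspark import SentimentDetector

sentiment = (SentimentDetector()
             .setSubscriptionKey(text_analytics_key)  # your Cognitive Services key
             .setTextCol("text")                      # input column of raw strings
             .setOutputCol("sentiment"))              # scored output column

scored_quotes = sentiment.transform(quotes_df)
```

Because the service is a transformer, requests are batched and distributed across the cluster automatically rather than issued row by row from the driver.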
v0.13

New Functionality:
* Export trained LightGBM models for evaluation outside of Spark
* LightGBM on Spark supports multiple cores per executor
* `CNTKModel` works with multi-input multi-output models of any CNTK datatype
* Added Minibatching and Flattening transformers for adding flexible batching logic to pipelines, deep networks, and web clients
* Added `Benchmark` test API for tracking model performance across versions
* Added `PartitionConsolidator` function for aggregating streaming data onto one partition per executor (for use with connection/rate-limited HTTP services)

Updates and Improvements:
* Updated to Spark 2.3.0
* Added Databricks notebook tests to build system
* `CNTKModel` uses significantly less memory
* Simplified example notebooks
* Simplified APIs for MMLSpark Serving
* Simplified APIs for CNTK on Spark
* LightGBM stability improvements
* `ComputeModelStatistics` stability improvements

Acknowledgements: We would like to acknowledge the external contributors who helped create this version of MMLSpark (in order of commit history):
* 严伟, @terrytangyuan, @ywskycn, @dvanasseldonk, Jilong Liao, @chappers, @ekaterina-sereda-rf
v0.12

New functionality:
* MMLSpark Serving: a RESTful computation engine built on Spark Streaming. See `docs/mmlspark-serving.md` for details.
* New LightGBM binary classification and regression learners and infrastructure, with a Python notebook for examples.
* MMLSpark Clients: a general-purpose, distributed, and fault-tolerant HTTP library usable from Spark, PySpark, and SparklyR. See `docs/http.md`.
* Added `MinibatchTransformer` and `FlattenBatch` to enable efficient, buffered minibatch processing in Spark.
* Added Python wrappers and a notebook example for the `TuneHyperparameters` module, demonstrating parallel distributed hyperparameter tuning through randomized grid search.
* Added a `MultiNGram` transformer for efficiently computing variable-length n-grams.
* Added a DataType parameter for building models that are parameterized by Spark data types.

Updates:
* Updated the per-instance statistics module so it works with any Spark ML estimator.
* Updated CNTK to version 2.4.
* Updated Spark to version 2.2.1 (the following release is likely to be based on Spark 2.3).
* Also updated SBT and JVM.
* Refactored the readers directory into the `io` directory.

Improvements:
* Fixed the Conda installation in our Docker image, resolving issues with importing `numpy`.
* Fixed a regression in R wrappers with the latest SparklyR version.
* Additional bugfixes, stability, and notebook improvements.
v0.11

New functionality:
* `TuneHyperparameters`: parallel distributed randomized grid search for SparkML and TrainClassifier/TrainRegressor parameters. A sample notebook and Python wrappers will be added in the near future.
* Added `PowerBIWriter` for writing and streaming data frames to [PowerBI](http://powerbi.microsoft.com/).
* Expanded image reading and writing capabilities, including using images with Spark Structured Streaming. Images can be read from and written to paths specified in a dataframe.
* New functionality for convenient plotting in Python.
* UDF transformer and additional UDFs.
* Expanded pipeline support for arbitrary user code and libraries such as NLTK through `UDFTransformer`.
* Refactored fuzzing system and added test coverage.
* GPU training supports multiple VMs.

Updates:
* Updated to Conda 4.3.31, which comes with Python 3.6.3.
* Also updated SBT and JVM.

Improvements:
* Additional bugfixes, stability, and notebook improvements.