This repository contains the companion code for the tutorial *Serving Spark ML models using Vertex AI*. The code shows you how to serve (run) online predictions from Spark MLlib models using Vertex AI.
The code allows you to build a custom container for serving predictions that can be used with Vertex AI. The custom container uses MLeap to serve a Spark MLlib model that has been exported to an MLeap Bundle (the MLeap serialization format). The MLeap execution engine and serialization format support low-latency inference without dependencies on Spark.
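To give a sense of what Spark-free scoring looks like, here is a minimal sketch of loading an MLeap Bundle and transforming a single row with the MLeap runtime. The bundle path, column names, and feature values are illustrative (they match the hypothetical Iris export sketched further below), and the API surface can vary slightly across MLeap versions:

```scala
import ml.combust.bundle.BundleFile
import ml.combust.mleap.core.types.{ScalarType, StructField, StructType}
import ml.combust.mleap.runtime.MleapSupport._
import ml.combust.mleap.runtime.frame.{DefaultLeapFrame, Row}
import resource._

object MleapScoringSketch {
  def main(args: Array[String]): Unit = {
    // Load the exported bundle; the path is a placeholder.
    val bundle = (for (file <- managed(BundleFile("jar:file:/tmp/iris-model.zip")))
      yield file.loadMleapBundle().get).opt.get

    // Build a LeapFrame (MLeap's Spark-free DataFrame equivalent) with one row.
    // The column names here are assumptions about the exported model's inputs.
    val schema = StructType(
      StructField("sepal_length", ScalarType.Double),
      StructField("sepal_width", ScalarType.Double),
      StructField("petal_length", ScalarType.Double),
      StructField("petal_width", ScalarType.Double)
    ).get
    val frame = DefaultLeapFrame(schema, Seq(Row(5.1, 3.5, 1.4, 0.2)))

    // Run the pipeline entirely in-process, with no Spark session.
    val result = bundle.root.transform(frame).get
    result.dataset.foreach(println)
  }
}
```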
See the MLeap documentation for information on exporting Spark MLlib models to MLeap Bundles.
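As a rough illustration of that export step, the sketch below fits a small Spark MLlib pipeline and writes it to an MLeap Bundle. The toy data, column names, and output path are placeholders, not the tutorial's actual training code:

```scala
import ml.combust.bundle.BundleFile
import ml.combust.mleap.spark.SparkSupport._
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.bundle.SparkBundleContext
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.sql.SparkSession
import resource._

object ExportToMleapSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("export").getOrCreate()
    import spark.implicits._

    // A tiny stand-in for the Iris training data.
    val df = Seq(
      (5.1, 3.5, 1.4, 0.2, "setosa"),
      (6.7, 3.1, 4.4, 1.4, "versicolor")
    ).toDF("sepal_length", "sepal_width", "petal_length", "petal_width", "species")

    val pipeline = new Pipeline().setStages(Array(
      new VectorAssembler()
        .setInputCols(Array("sepal_length", "sepal_width", "petal_length", "petal_width"))
        .setOutputCol("features"),
      new StringIndexer().setInputCol("species").setOutputCol("label")
    ))
    val model = pipeline.fit(df)

    // MLeap needs a transformed dataset to capture the full pipeline schema.
    val context = SparkBundleContext().withDataset(model.transform(df))

    // Write the fitted pipeline to an MLeap Bundle (a zip file).
    for (bundle <- managed(BundleFile("jar:file:/tmp/iris-model.zip"))) {
      model.writeBundle.save(bundle)(context).get
    }
    spark.stop()
  }
}
```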
Use the tutorial to understand how to:
- Serve predictions from an example model that is included with the tutorial. The example model was trained on the Iris dataset and then exported from Spark MLlib to an MLeap Bundle. (A sketch of an online prediction request appears after this list.)
- Configure the custom container image to serve predictions from your own models.
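Once a model is deployed to a Vertex AI endpoint, an online prediction is an authenticated HTTP POST. The sketch below uses placeholder project, region, and endpoint values, reads an access token from an `ACCESS_TOKEN` environment variable (for example, from `gcloud auth print-access-token`), and assumes a request body shape the custom container accepts; the exact format your container expects may differ:

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object PredictSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder values; replace with your own project, region, and endpoint ID.
    val project  = "my-project"
    val region   = "us-central1"
    val endpoint = "1234567890"
    val token    = sys.env("ACCESS_TOKEN")

    val url = s"https://$region-aiplatform.googleapis.com/v1/projects/$project" +
      s"/locations/$region/endpoints/$endpoint:predict"

    // One instance with four Iris features; the shape is an assumption about
    // how the custom container parses prediction requests.
    val body = """{"instances": [[5.1, 3.5, 1.4, 0.2]]}"""

    val request = HttpRequest.newBuilder(URI.create(url))
      .header("Authorization", s"Bearer $token")
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(body))
      .build()

    val response = HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
  }
}
```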