MLeap Scikit-Learn Integration

    1. Serialize Scikit Pipelines and execute using MLeap Runtime
    2. Serialize Scikit Pipelines and deserialize with Spark

    As mentioned earlier, MLeap Runtime is a Scala-only library today, and we plan to add Python bindings in the future. Even so, the runtime is enough to execute pipelines and models without any dependency on scikit-learn or NumPy.

    There are a couple of important differences between how scikit-learn transformers and Spark transformers work:

    1. Spark transformers all come with op, inputCol, and outputCol attributes; scikit-learn transformers do not
    2. Spark transformers operate on a single vector column, whereas scikit-learn operates on n-dimensional arrays and matrices
    3. Spark, because it is written in Scala, makes it easy to add implicit functions and attributes; with scikit-learn it is a bit trickier and requires the use of setattr()
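    The second difference is easy to see with a stock scikit-learn transformer: the input and output are bare arrays, with no column metadata attached. A minimal illustration:

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler

    # scikit-learn transformers take and return plain n-dimensional arrays:
    X = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])
    scaled = StandardScaler().fit_transform(X)

    # The result is an unnamed ndarray -- there is no notion of an
    # inputCol or outputCol as there is with a Spark transformer.
    print(type(scaled).__name__)  # -> ndarray
    print(scaled.shape)           # -> (3, 2)
    ```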

    Because of these additional complexities, there are a few paradigms we have to follow when extending scikit-learn transformers with MLeap. First, we have to initialize each transformer to include:

    • Op: Unique op name - this is used as a link to Spark-based transformers (i.e. a Standard Scaler in scikit is the same as in Spark, so we have an op called standard_scaler to represent it)
    • Name: A unique name for each transformer. For example, if you have multiple Standard Scaler objects, each needs to be assigned a unique name
    • Input Column: Strictly for serialization, we set what the input column is
    • Output Column: Strictly for serialization, we set what the output column is
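    To make the pattern concrete, here is a hypothetical helper (the `ml_init` name and attribute names are illustrative, not MLeap's actual API) showing how these four pieces of metadata can be attached to a scikit-learn transformer with setattr(), which is essentially what MLeap's extensions do when they patch an mlinit() method onto each transformer:

    ```python
    from sklearn.preprocessing import StandardScaler

    # Hypothetical sketch of the initialization pattern described above.
    def ml_init(transformer, op, name, input_col, output_col):
        # setattr() attaches the Spark-compatible metadata that scikit-learn
        # transformers lack out of the box
        setattr(transformer, 'op', op)
        setattr(transformer, 'name', name)
        setattr(transformer, 'input_col', input_col)
        setattr(transformer, 'output_col', output_col)
        return transformer

    scaler = ml_init(StandardScaler(),
                     op='standard_scaler',        # links to the Spark transformer
                     name='standard_scaler_0',    # unique per instance
                     input_col='unscaled_features',
                     output_col='scaled_features')
    ```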

    Let’s first initialize all of the required libraries.

    Then let’s create a test DataFrame in Pandas:

    # Create a pandas DataFrame
    df = pd.DataFrame(np.random.randn(10, 5), columns=['a', 'b', 'c', 'd', 'e'])

    Now that we have our transformers defined, we assemble them into a pipeline and execute it on our data frame:

    # Now let's create a small pipeline using the Feature Extractor and the Standard Scaler
    standard_scaler_pipeline = Pipeline([(feature_extractor_tf.name, feature_extractor_tf),
                                         (standard_scaler_tf.name, standard_scaler_tf)])
    standard_scaler_pipeline.mlinit()
    # Now let's run it on our test DataFrame
    standard_scaler_pipeline.fit_transform(df)
    # Printed output
    array([[ 0.2070446 ,  0.30612846, -0.91620529],
           [ 0.81463009, -0.26668287,  1.95663995],
           [-0.94079041, -0.18882131, -0.0462197 ],
           [ 0.43992551, -0.2985418 , -0.89093752],
           [-0.15391539, -2.20828471,  0.5361159 ],
           [-1.07689244,  1.61019861,  1.42868885],
           [ 0.87874789,  1.43146482, -0.44362038],
           [-1.60105094, -0.40130005, -0.10754886],
           [ 1.87161513, -0.11630878, -1.40990552]])

    We just demonstrated how to apply a transformer to a set of features, but the output of that operation is just an n-dimensional array that we would have to join back to our original data if we wanted to use it in, say, a regression model. Let’s show how we can combine data from multiple transformers using Feature Unions.
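    To see the chore that Feature Unions spare us, here is a sketch of the manual join using plain scikit-learn and pandas (the column names are illustrative, modeled on the df created earlier):

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    # Recreate a small frame like the df above
    df = pd.DataFrame(np.random.randn(10, 5), columns=['a', 'b', 'c', 'd', 'e'])

    # Scale three of the features; the result is a bare ndarray
    scaled = StandardScaler().fit_transform(df[['a', 'b', 'c']])

    # To use the result alongside the untouched columns, we have to wrap it
    # back into a DataFrame and join it to the original data by hand
    scaled_df = pd.DataFrame(scaled,
                             columns=['a_scaled', 'b_scaled', 'c_scaled'],
                             index=df.index)
    combined = pd.concat([scaled_df, df[['d', 'e']]], axis=1)
    ```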

    First, go ahead and create another transformer, a MinMaxScaler, on the remaining two features of the data frame:

    Finally, let’s combine the two pipelines using a Feature Union. Note that you do not have to run the fit or fit_transform method on the pipelines before assembling the Feature Union.

    # Import MLeap extension to Feature Unions
    import mleap.sklearn.feature_union
    # Import Feature Union
    from sklearn.pipeline import FeatureUnion
    feature_union = FeatureUnion([
        (standard_scaler_pipeline.name, standard_scaler_pipeline),
        (min_max_scaler_pipeline.name, min_max_scaler_pipeline)
    ])
    feature_union.mlinit()
    # Create pipeline out of the Feature Union
    feature_union_pipeline = Pipeline([(feature_union.name, feature_union)])
    feature_union_pipeline.mlinit()
    # Execute it on our data frame
    feature_union_pipeline.fit_transform(df)
    array([[ 0.2070446 ,  0.30612846, -0.91620529,  0.58433367,  0.72234095],
           [ 0.81463009, -0.26668287,  1.95663995,  0.21145259,  0.72993807],
           [-0.94079041, -0.18882131, -0.0462197 ,  0.52661493,  0.59771784],
           [-0.43931405,  0.13214763, -0.10700743,  0.29403088,  0.19431993],
           [ 0.43992551, -0.2985418 , -0.89093752,  0.48838789,  1.        ],
           [-0.15391539, -2.20828471,  0.5361159 ,  1.        ,  0.46456522],
           [-1.07689244,  1.61019861,  1.42868885,  0.36402459,  0.43669119],
           [ 0.87874789,  1.43146482, -0.44362038,  0.        ,  0.74182958],
           [ 1.87161513, -0.11630878, -1.40990552,  0.33707035,  0.39792128]])
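    The no-pre-fitting behavior comes from scikit-learn itself: fitting happens at the union level, which fits every branch and concatenates their outputs column-wise. A minimal sketch with stock scalers (rather than the MLeap-extended pipelines above):

    ```python
    import numpy as np
    from sklearn.pipeline import FeatureUnion, Pipeline
    from sklearn.preprocessing import StandardScaler, MinMaxScaler

    X = np.random.randn(10, 2)

    # Neither branch is fitted ahead of time; fit_transform on the enclosing
    # pipeline fits both scalers and stacks their outputs side by side.
    union_pipeline = Pipeline([('union', FeatureUnion([
        ('std', StandardScaler()),
        ('minmax', MinMaxScaler())
    ]))])
    out = union_pipeline.fit_transform(X)
    print(out.shape)  # -> (10, 4): 2 standard-scaled + 2 min-max-scaled columns
    ```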

    In order to serialize to a zip file, make sure the URI begins with jar:file and ends with a .zip.

    Note that you do have to fit your pipeline before serializing.

    Setting init=True tells the serializer that we are creating a bundle instead of just serializing the transformer.

    Coming Soon


    Complete demos are available on GitHub that demonstrate full usage of Transformers, Pipelines, Feature Unions, and serialization.